Dataset fields: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1907.12413
2965312352
Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
Recent years have seen a rise of interactive machine learning @cite_41, and such techniques are now commonly integrated into visual analytics systems, as recently surveyed in @cite_50. Often, they are used to learn model refinements from user interaction @cite_42 or to provide semantic interaction @cite_40. Semantic interactions are typically performed with the intent of refining or steering a machine-learning model. In the presented system, expert users perform semantic interactions, although their primary goal is the annotation of argumentation. The result is a concealed machine teaching process @cite_14 that is not an end in itself, but a "by-product" of the annotation.
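A minimal sketch of how such implicit machine teaching could look: whenever the expert links two text fragments as related arguments, a weighted similarity measure over simple features is nudged towards that judgment. The feature set, initial weights, learning rate, and update rule below are illustrative assumptions, not the method of the cited works.

```python
# Sketch: learning an argument-similarity measure from annotation interactions.
# Features, weights, and the update rule are illustrative assumptions.
from collections import Counter
import math

def features(fragment_a: str, fragment_b: str) -> dict:
    """Very small feature set: token overlap and length ratio (assumed)."""
    ta, tb = Counter(fragment_a.lower().split()), Counter(fragment_b.lower().split())
    overlap = sum((ta & tb).values()) / max(1, sum((ta | tb).values()))
    length_ratio = min(len(fragment_a), len(fragment_b)) / max(1, max(len(fragment_a), len(fragment_b)))
    return {"overlap": overlap, "length_ratio": length_ratio}

weights = {"overlap": 1.0, "length_ratio": 0.5}  # initial prior weights (assumed)

def similarity(a: str, b: str) -> float:
    score = sum(weights[k] * v for k, v in features(a, b).items())
    return 1.0 / (1.0 + math.exp(-score))           # squash to (0, 1)

def record_interaction(a: str, b: str, linked: bool, lr: float = 0.1) -> None:
    """Nudge the weights whenever the expert links (or separates) two fragments."""
    error = (1.0 if linked else 0.0) - similarity(a, b)
    for k, v in features(a, b).items():
        weights[k] += lr * error * v

# The annotation itself remains the expert's goal; the model update is a by-product.
record_interaction("taxes should be lowered", "we must cut taxes", linked=True)
```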
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_42", "@cite_40", "@cite_50" ], "mid": [ "2738647029", "2003238113", "2753907577", "2051088039", "2602814102" ], "abstract": [ "The current processes for building machine learning systems require practitioners with deep knowledge of machine learning. This significantly limits the number of machine learning systems that can be created and has led to a mismatch between the demand for machine learning systems and the ability for organizations to build them. We believe that in order to meet this growing demand for machine learning systems we must significantly increase the number of individuals that can teach machines. We postulate that we can achieve this goal by making the process of teaching machines easy, fast and above all, universally accessible. While machine learning focuses on creating new algorithms and improving the accuracy of \"learners\", the machine teaching discipline focuses on the efficacy of the \"teachers\". Machine teaching as a discipline is a paradigm shift that follows and extends principles of software engineering and programming languages. We put a strong emphasis on the teacher and the teacher's interaction with data, as well as crucial components such as techniques and design principles of interaction and visualization. In this paper, we present our position regarding the discipline of machine teaching and articulate fundamental machine teaching principles. We also describe how, by decoupling knowledge about machine learning algorithms from the process of teaching, we can accelerate innovation and empower millions of new uses for machine learning models.", "Perceptual user interfaces (PUIs) are an important part of ubiquitous computing. Creating such interfaces is difficult because of the image and signal processing knowledge required for creating classifiers. We propose an interactive machine-learning (IML) model that allows users to train, classify view and correct the classifications. The concept and implementation details of IML are discussed and contrasted with classical machine learning models. Evaluations of two algorithms are also presented. We also briefly describe Image Processing with Crayons (Crayons), which is a tool for creating new camera-based interfaces using a simple painting metaphor. The Crayons tool embodies our notions of interactive machine learning", "Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. 
We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.", "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.", "Visual analytics systems combine machine learning or other analytic techniques with interactive data visualization to promote sensemaking and analytical reasoning. It is through such techniques that people can make sense of large, complex data. While progress has been made, the tactful combination of machine learning and data visualization is still under-explored. This state-of-the-art report presents a summary of the progress that has been made by highlighting and synthesizing select research advances. Further, it presents opportunities and challenges to enhance the synergy between machine learning and visual analytics for impactful future research directions." ] }
1907.12413
2965312352
Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
Several systems combine the close and distant reading metaphors to provide deeper insights into textual data, such as @cite_36 or @cite_43. @cite_25 have developed a tool that combines focus and context techniques to support the analysis of large text documents. The tool enables the exploration of text through novel navigation methods and allows the extraction of entities and other concepts. It places all close- and distant-reading views next to each other, following the metaphor of Wörner and Ertl @cite_51, whereas the presented system instead "stacks" the different views into task-dependent layers.
{ "cite_N": [ "@cite_36", "@cite_43", "@cite_51", "@cite_25" ], "mid": [ "", "2137561086", "173034076", "2012118336" ], "abstract": [ "", "Digital information displays are becoming more common in public spaces such as museums, galleries, and libraries. However, the public nature of these locations requires special considerations concerning the design of information visualization in terms of visual representations and interaction techniques. We discuss the potential for, and challenges of, information visualization in the museum context based on our practical experience with EMDialog, an interactive information presentation that was part of the Emily Carr exhibition at the Glenbow Museum in Calgary. EMDialog visualizes the diverse and multi-faceted discourse about this Canadian artist with the goal to both inform and provoke discussion. It provides a visual exploration environment that offers interplay between two integrated visualizations, one for information access along temporal, and the other along contextual dimensions. We describe the results of an observational study we conducted at the museum that revealed the different ways visitors approached and interacted with EMDialog, as well as how they perceived this form of information presentation in the museum context. Our results include the need to present information in a manner sufficiently attractive to draw attention and the importance of rewarding passive observation as well as both short- and longer term information exploration.", "The SmoothScroll control is a multi-scale, multi-layer slider for the navigation in large one-dimensional datasets such as time series data. Presenting multiple data layers at gradually varying scales provides both detail and context information and allows for both fine-grained and coarse navigation and scrolling at different granularities. Visual data aggregation allows for multi-level navigation while the clear visual relation of the data layers aids the user in retaining a sense of both the current detail position and the immediate and global context. We describe SmoothScroll as well as related controls and discuss its application with the help of several usage examples.", "Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. 
This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies." ] }
1907.12413
2965312352
Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
In recent years, several web-based interfaces have been created to support users in various text annotation tasks. For example, @cite_46 can be used for the annotation of POS tags or named entities. In this interface, annotations are made directly in the text by dragging the mouse over multiple words or by clicking on a single word. The presented system employs the same interactions for text annotation. Another web-based annotation tool @cite_65 allows the annotation of named entities and their relations. @cite_49 use automatic entity extraction for annotating relationships between media streams. TimeLineCurator @cite_28 automatically extracts temporal events from unstructured text data and enables users to curate them in a visual, annotated timeline. @cite_54 have presented a collaborative text annotation framework and emphasize the importance of pre-annotation for significantly reducing annotation costs. A further framework creates BRAT-compatible pre-annotations @cite_22, and its authors discuss the (dis-)advantages of pre-annotation. The initial suggestions of the presented system could be seen as pre-annotations, but they are automatically updated after each interaction.
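To illustrate what pre-annotation means in practice, the following hedged sketch produces BRAT-style standoff annotations from a simple gazetteer; the entity list and the exact output conventions are assumptions for illustration, not the pipeline of the cited frameworks.

```python
# Sketch: dictionary-based pre-annotation emitting BRAT-like standoff lines.
# The gazetteer and the output format details are illustrative assumptions.
import re

GAZETTEER = {"Obama": "Person", "Romney": "Person", "Denver": "Location"}  # assumed

def pre_annotate(text: str) -> list[str]:
    """Return standoff annotation lines: 'T<i>\\t<Type> <start> <end>\\t<surface>'."""
    lines = []
    for i, match in enumerate(
        re.finditer("|".join(re.escape(e) for e in GAZETTEER), text), start=1
    ):
        surface = match.group(0)
        lines.append(f"T{i}\t{GAZETTEER[surface]} {match.start()} {match.end()}\t{surface}")
    return lines

text = "Obama and Romney debated in Denver."
for line in pre_annotate(text):
    print(line)   # a human annotator would then correct or confirm these suggestions
```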
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_54", "@cite_65", "@cite_49", "@cite_46" ], "mid": [ "", "1935970548", "2050793659", "2250522125", "2755457990", "8550301" ], "abstract": [ "", "We present TimeLineCurator, a browser-based authoring tool that automatically extracts event data from temporal references in unstructured text documents using natural language processing and encodes them along a visual timeline. Our goal is to facilitate the timeline creation process for journalists and others who tell temporal stories online. Current solutions involve manually extracting and formatting event data from source documents, a process that tends to be tedious and error prone. With TimeLineCurator, a prospective timeline author can quickly identify the extent of time encompassed by a document, as well as the distribution of events occurring along this timeline. Authors can speculatively browse possible documents to quickly determine whether they are appropriate sources of timeline material. TimeLineCurator provides controls for curating and editing events on a timeline, the ability to combine timelines from multiple source documents, and export curated timelines for online deployment. We evaluate TimeLineCurator through a benchmark comparison of entity extraction error against a manual timeline curation process, a preliminary evaluation of the user experience of timeline authoring, a brief qualitative analysis of its visual output, and a discussion of prospective use cases suggested by members of the target author communities following its deployment.", "This paper presents GATE Teamware--an open-source, web-based, collaborative text annotation framework. It enables users to carry out complex corpus annotation projects, involving distributed annotator teams. Different user roles are provided (annotator, manager, administrator) with customisable user interface functionalities, in order to support the complex workflows and user interactions that occur in corpus annotation projects. Documents may be pre-processed automatically, so that human annotators can begin with text that has already been pre-annotated and thus making them more efficient. The user interface is simple to learn, aimed at non-experts, and runs in an ordinary web browser, without need of additional software installation. GATE Teamware has been evaluated through the creation of several gold standard corpora and internal projects, as well as through external evaluation in commercial and EU text annotation projects. It is available as on-demand service on GateCloud.net, as well as open-source for self-installation.", "Anafora is a newly-developed open source web-based text annotation tool built to be lightweight, flexible, easy to use and capable of annotating with a variety of schemas, simple and complex. Anafora allows secure web-based annotation of any plaintext file with both spanned (e.g. named entity or markable) and relation annotations, as well as adjudication for both types of annotation. Anafora offers automatic set assignment and progress-tracking, centralized and humaneditable XML annotation schemas, and filebased storage and organization of data in a human-readable single-file XML format.", "Media data has been the subject of large scale analysis with applications of text mining being used to provide overviews of media themes and information flows. Such information extracted from media articles has also shown its contextual value of being integrated with other data, such as criminal records and stock market pricing. 
In this work, we explore linking textual media data with curated secondary textual data sources through user-guided semantic lexical matching for identifying relationships and data links. In this manner, critical information can be identified and used to annotate media timelines in order to provide a more detailed overview of events that may be driving media topics and frames. These linked events are further analyzed through an application of causality modeling to model temporal drivers between the data series. Such causal links are then annotated through automatic entity extraction which enables the analyst to explore persons, locations, and organizations that may be pertinent to the media topic of interest. To demonstrate the proposed framework, two media datasets and an armed conflict event dataset are explored.", "We introduce the brat rapid annotation tool (BRAT), an intuitive web-based tool for text annotation supported by Natural Language Processing (NLP) technology. BRAT has been developed for rich structured annotation for a variety of NLP tasks and aims to support manual curation efforts and increase annotator productivity using NLP techniques. We discuss several case studies of real-world annotation projects using pre-release versions of BRAT and present an evaluation of annotation assisted by semantic class disambiguation on a multicategory entity mention annotation task, showing a 15 decrease in total annotation time. BRAT is available under an open-source license from: http: brat.nlplab.org" ] }
1907.12336
2965762627
Automatic data abstraction is an important capability for both benchmarking machine intelligence and supporting summarization applications. In the former one asks whether a machine can ‘understand’ enough about the meaning of input data to produce a meaningful but more compact abstraction. In the latter this capability is exploited for saving space or human time by summarizing the essence of input data. In this paper we study a general reinforcement learning based framework for learning to abstract sequential data in a goal-driven way. The ability to define different abstraction goals uniquely allows different aspects of the input data to be preserved according to the ultimate purpose of the abstraction. Our reinforcement learning objective does not require human-defined examples of ideal abstraction. Importantly our model processes the input sequence holistically without being constrained by the original input order. Our framework is also domain agnostic – we demonstrate applications to sketch, video and text data and achieve promising results in all domains.
Sketch recognition. Early sketch recognition methods were developed to deal with professionally drawn sketches, as in CAD or artistic drawings @cite_43 @cite_45 @cite_44. The more challenging task of free-hand sketch recognition was first tackled in @cite_36, along with the release of the first large-scale dataset of amateur sketches. Since then, the task has been well studied using both classic vision @cite_3 @cite_24 and deep learning approaches @cite_53. Recent successful deep learning approaches have spanned both primarily non-sequential CNN @cite_53 @cite_30 and sequential RNN @cite_39 @cite_10 recognizers. We use both CNN- and RNN-based multi-class classifiers to provide rewards for our RL-based sketch abstraction framework.
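The last sentence describes using recognizers as a reward signal; a minimal sketch of that idea follows, with a hypothetical classifier interface (`predict_proba`) and stroke representation standing in for the actual models.

```python
# Sketch: using a multi-class sketch classifier as the reward for an RL agent
# that decides which strokes to keep. The classifier interface and the stroke
# representation are assumptions for illustration.
from typing import Callable, Sequence

def recognition_reward(
    strokes: Sequence,                       # full input sketch, one entry per stroke
    keep_mask: Sequence[bool],               # the agent's abstraction decision
    true_label: int,
    predict_proba: Callable[[list], list],   # hypothetical classifier: strokes -> class probabilities
    budget_penalty: float = 0.01,
) -> float:
    """Reward = recognizability of the abstracted sketch minus a cost per kept stroke."""
    kept = [s for s, keep in zip(strokes, keep_mask) if keep]
    if not kept:
        return -1.0                          # an empty sketch is never recognizable
    probs = predict_proba(kept)
    return probs[true_label] - budget_penalty * len(kept)

# A policy would be trained (e.g., with REINFORCE) to maximise this reward,
# trading off recognizability against the number of strokes retained.
```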
{ "cite_N": [ "@cite_30", "@cite_36", "@cite_53", "@cite_3", "@cite_39", "@cite_44", "@cite_43", "@cite_24", "@cite_45", "@cite_10" ], "mid": [ "2493181180", "1972420097", "1503798456", "1976664910", "2743832495", "", "2137026559", "", "", "2786377825" ], "abstract": [ "We propose a deep learning approach to free-hand sketch recognition that achieves state-of-the-art performance, significantly surpassing that of humans. Our superior performance is a result of modelling and exploiting the unique characteristics of free-hand sketches, i.e., consisting of an ordered set of strokes but lacking visual cues such as colour and texture, being highly iconic and abstract, and exhibiting extremely large appearance variations due to different levels of abstraction and deformation. Specifically, our deep neural network, termed Sketch-a-Net has the following novel components: (i) we propose a network architecture designed for sketch rather than natural photo statistics. (ii) Two novel data augmentation strategies are developed which exploit the unique sketch-domain properties to modify and synthesise sketch training data at multiple abstraction levels. Based on this idea we are able to both significantly increase the volume and diversity of sketches for training, and address the challenge of varying levels of sketching detail commonplace in free-hand sketches. (iii) We explore different network ensemble fusion strategies, including a re-purposed joint Bayesian scheme, to further improve recognition performance. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photos or sketches. Furthermore, through visualising the learned filters, we offer useful insights in to where the superior performance of our network comes from.", "Humans have used sketching to depict our visual world since prehistoric times. Even today, sketching is possibly the only rendering technique readily available to all humans. This paper is the first large scale exploration of human sketches. We analyze the distribution of non-expert sketches of everyday objects such as 'teapot' or 'car'. We ask humans to sketch objects of a given category and gather 20,000 unique sketches evenly distributed over 250 object categories. With this dataset we perform a perceptual study and find that humans can correctly identify the object category of a sketch 73 of the time. We compare human performance against computational recognition methods. We develop a bag-of-features sketch representation and use multi-class support vector machines, trained on our sketch dataset, to classify sketches. The resulting recognition method is able to identify unknown sketches with 56 accuracy (chance is 0.4 ). Based on the computational model, we demonstrate an interactive sketch recognition system. We release the complete crowd-sourced dataset of sketches to the community.", "We propose a multi-scale multi-channel deep neural network framework that, for the first time, yields sketch recognition performance surpassing that of humans. 
Our superior performance is a result of explicitly embedding the unique characteristics of sketches in our model: (i) a network architecture designed for sketch rather than natural photo statistics, (ii) a multi-channel generalisation that encodes sequential ordering in the sketching process, and (iii) a multi-scale network ensemble with joint Bayesian fusion that accounts for the different levels of abstraction exhibited in free-hand sketches. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photo or sketch. Our network on the other hand not only delivers the best performance on the largest human sketch dataset to date, but also is small in size making efficient training possible using just CPUs.", "We introduce an approach for sketch classification based on Fisher vectors that significantly outperforms existing techniques. For the TU-Berlin sketch benchmark [ 2012a], our recognition rate is close to human performance on the same task. Motivated by these results, we propose a different benchmark for the evaluation of sketch classification algorithms. Our key idea is that the relevant aspect when recognizing a sketch is not the intention of the person who made the drawing, but the information that was effectively expressed. We modify the original benchmark to capture this concept more precisely and, as such, to provide a more adequate tool for the evaluation of sketch classification techniques. Finally, we perform a classification-driven analysis which is able to recover semantic aspects of the individual sketches, such as the quality of the drawing and the importance of each part of the sketch for the recognition.", "Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of constitutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.", "", "In recent years, various researchers have put in great effort to produce an efficient method of drawing extraction. This paper will focus on CAD data extraction from CAD drawing and study the method that has been proposed by previous researchers. CAD data extraction became a popular research since the early 80’s. Nowadays, most applications in engineering field are already computerized. This includes the CAD application system, the systems used by engineers to design their products. 
As the use of computerized application became important tool in engineering field, the production field is also affected. This raises the issue of integrating CAD with manufacture systems. For that reason, most researchers try to create a system that can extract meaningful information from the CAD drawing and create a connection between CAD and manufacture system. For example in manufacturing field, manufacture system is a machine system where it is also known as CAM systems. However, there is no direct connection from CAD system to CAM system. Therefore, many approaches have been proposed by the previous researchers to solve the issues. Focus on this paper is to study the approaches and make comparison among it. Finding from this paper is suitable approach can be used for next stage in this research.", "", "", "The ability of intelligent agents to play games in human-like fashion is popularly considered a benchmark of progress in Artificial Intelligence. Similarly, performance on multi-disciplinary tasks such as Visual Question Answering (VQA) is considered a marker for gauging progress in Computer Vision. In our work, we bring games and VQA together. Specifically, we introduce the first computational model aimed at Pictionary, the popular word-guessing social game. We first introduce Sketch-QA, an elementary version of Visual Question Answering task. Styled after Pictionary, Sketch-QA uses incrementally accumulated sketch stroke sequences as visual data. Notably, Sketch-QA involves asking a fixed question (\"What object is being drawn?\") and gathering open-ended guess-words from human guessers. We analyze the resulting dataset and present many interesting findings therein. To mimic Pictionary-style guessing, we subsequently propose a deep neural model which generates guess-words in response to temporally evolving human-drawn sketches. Our model even makes human-like mistakes while guessing, thus amplifying the human mimicry factor. We evaluate our model on the large-scale guess-word dataset generated via Sketch-QA task and compare with various baselines. We also conduct a Visual Turing Test to obtain human impressions of the guess-words generated by humans and our model. Experimental results demonstrate the promise of our approach for Pictionary and similarly themed games." ] }
1907.12383
2966710536
Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions.
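The abstract above notes that the standard token interface handles a single recipient per call, so bulk distribution repeats per-transaction overhead. The toy cost model below illustrates why batching helps; the gas figures are rough assumed orders of magnitude, not measured Ethereum prices.

```python
# Toy comparison of per-recipient transactions vs. one batched call.
# All gas numbers are rough assumptions for illustration, not measured values.
TX_BASE = 21_000          # fixed overhead paid once per transaction (approximate)
TRANSFER_WORK = 30_000    # assumed cost of one token balance update plus calldata
BATCH_LOOP_WORK = 25_000  # assumed marginal cost per recipient inside a batch call

def naive_cost(n_recipients: int) -> int:
    """One transaction per recipient: the base fee is paid n times."""
    return n_recipients * (TX_BASE + TRANSFER_WORK)

def batched_cost(n_recipients: int) -> int:
    """One transaction carrying a recipient list: the base fee is paid once."""
    return TX_BASE + n_recipients * BATCH_LOOP_WORK

for n in (10, 100, 1000):
    print(n, naive_cost(n), batched_cost(n), round(naive_cost(n) / batched_cost(n), 2))
# Both totals grow roughly linearly in the number of recipients, matching the
# observation above; batching mainly saves the repeated fixed overhead.
```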
Howell @cite_5 study the success factors of 440 ICOs based on proprietary transaction data, presumably acquired from exchanges and other intermediaries, and on manual labeling. Their main interest is the relationship between issuer characteristics and indicators of success. The regression analyses find highly significant positive effects on the liquidity and volume of the token for independent variables measuring the existence of a white paper, the availability of code on GitHub, the support of venture capitalists, the entrepreneurs' experience, the acceptance of Bitcoin, and the organization of a pre-sale. No significant effect is found for airdrops.
{ "cite_N": [ "@cite_5" ], "mid": [ "2811170310" ], "abstract": [ "Initial coin offerings (ICOs) have emerged as a new mechanism for entrepreneurial finance, with parallels to initial public offerings, venture capital, and pre-sale crowdfunding. In a sample of more than 1,500 ICOs that collectively raise $12.9 billion, we examine which issuer and ICO characteristics predict success, measured using real outcomes (employment and issuer failure) and financial outcomes (token liquidity and volume). Success is associated with disclosure, credible commitment to the project, and quality signals. An instrumental variables analysis finds that ICO token exchange listing causes higher future employment, indicating that access to liquidity has important real consequences for the enterprise." ] }
1907.12383
2966710536
Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions.
Chen @cite_12 identify underpriced instructions (even after the 2016 gas price adjustment) and propose an adaptive pricing scheme. Their main interest is to raise economic barriers against congestion, which in the worst case enables denial-of-service attacks at the system level.
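A hedged sketch of the kind of adaptive pricing argued for here: an operation's cost rises once it is executed far more often than a historical baseline. The baseline rates, threshold, and multiplier below are invented parameters for illustration, not the scheme proposed in @cite_12.

```python
# Sketch: dynamically re-pricing EVM-style operations by execution frequency.
# Baselines, thresholds, and the multiplier are illustrative assumptions only.
from collections import defaultdict

base_cost = {"SLOAD": 200, "BALANCE": 400, "EXTCODESIZE": 700}                   # assumed static prices
baseline_rate = {"SLOAD": 1_000_000, "BALANCE": 50_000, "EXTCODESIZE": 20_000}   # per window (assumed)
observed = defaultdict(int)

def record_execution(op: str, count: int = 1) -> None:
    observed[op] += count

def dynamic_cost(op: str, surge_threshold: float = 10.0, multiplier: float = 2.0) -> int:
    """Charge more when an operation is executed far above its historical baseline."""
    ratio = observed[op] / baseline_rate[op]
    if ratio > surge_threshold:
        return int(base_cost[op] * multiplier)
    return base_cost[op]

record_execution("EXTCODESIZE", 500_000)      # e.g., a suspected DoS pattern
print(dynamic_cost("EXTCODESIZE"))            # price roughly doubles under the surge
print(dynamic_cost("SLOAD"))                  # normal operations keep their base cost
```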
{ "cite_N": [ "@cite_12" ], "mid": [ "2963220038" ], "abstract": [ "The gas mechanism in Ethereum charges the execution of every operation to ensure that smart contracts running in EVM (Ethereum Virtual Machine) will be eventually terminated. Failing to properly set the gas costs of EVM operations allows attackers to launch DoS attacks on Ethereum. Although Ethereum recently adjusted the gas costs of EVM operations to defend against known DoS attacks, it remains unknown whether the new setting is proper and how to configure it to defend against unknown DoS attacks. In this paper, we make the first step to address this challenging issue by first proposing an emulation-based framework to automatically measure the resource consumptions of EVM operations. The results reveal that Ethereum’s new setting is still not proper. Moreover, we obtain an insight that there may always exist exploitable under-priced operations if the cost is fixed. Hence, we propose a novel gas cost mechanism, which dynamically adjusts the costs of EVM operations according to the number of executions, to thwart DoS attacks. This method punishes the operations that are executed much more frequently than before and lead to high gas costs. To make our solution flexible and secure and avoid frequent update of Ethereum client, we design a special smart contract that collaborates with the updated EVM for dynamic parameter adjustment. Experimental results demonstrate that our method can effectively thwart both known and unknown DoS attacks with flexible parameter settings. Moreover, our method only introduces negligible additional gas consumption for benign users." ] }
1907.12383
2966710536
Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions.
Airdrops are a rather new topic; we are aware of only one academic paper. Harrigan @cite_18 raises awareness of the privacy implications of airdrops in which identifiers of one chain are reused to distribute coins on another chain, as in the Clam airdrop to Bitcoin, Litecoin, and Dogecoin addresses. Sharing identifiers between chains in general provides additional clues for address clustering.
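As an illustration of why reused identifiers help address clustering, the sketch below merges addresses into clusters with a simple union-find; the linkage heuristics and the sample addresses are assumed examples, not data or the method from the cited study.

```python
# Sketch: union-find address clustering across chains. The linkage heuristics
# (co-spent inputs, identical identifiers reused on another chain) and the
# sample addresses are assumptions for illustration.
parent: dict[str, str] = {}

def find(a: str) -> str:
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

def union(a: str, b: str) -> None:
    parent[find(a)] = find(b)

# Same-chain clue: two addresses co-spent in one transaction.
union("btc:1Alice", "btc:1AliceChange")
# Cross-chain clue: the same identifier reused to claim an airdrop elsewhere.
union("btc:1Alice", "clam:xAlice")

print(find("btc:1AliceChange") == find("clam:xAlice"))  # True: one cluster
```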
{ "cite_N": [ "@cite_18" ], "mid": [ "2891594175" ], "abstract": [ "Airdrops are a popular method of distributing cryptocurrencies and tokens. While often considered risk-free from the point of view of recipients, their impact on privacy is easily overlooked. We examine the Clam airdrop of 2014, a forerunner to many of today's airdrops, that distributed a new cryptocurrency to every address with a non-dust balance on the Bitcoin, Litecoin and Dogecoin blockchains. Specifically, we use address clustering to try to construct the one-to-many mappings from entities to addresses on the blockchains, individually and in combination. We show that the sharing of addresses between the blockchains is a privacy risk. We identify instances where an entity has disclosed information about their address ownership on the Bitcoin, Litecoin and Dogecoin blockchains, exclusively via their activity on the Clam blockchain." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
The term observability originates in control system theory, where it measures the degree to which a system's internal state can be determined from its outputs @cite_21. In cloud environments, observability indicates to what degree the infrastructure, the applications, and their interactions can be monitored. Typical outputs are, for example, logs, metrics, and traces @cite_0.
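For the control-theoretic sense of the term, the standard check is the rank of the observability matrix; the small sketch below (using numpy, with an arbitrary example system) illustrates it.

```python
# Observability in the control-theory sense: the state x of x' = Ax, y = Cx is
# fully recoverable from the output y iff the observability matrix
# O = [C; CA; ...; CA^(n-1)] has full rank n. The example matrices are arbitrary.
import numpy as np

def is_observable(A: np.ndarray, C: np.ndarray) -> bool:
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks)) == n

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])      # we only measure the first state variable

print(is_observable(A, C))      # True: the second state shows up via the dynamics
```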
{ "cite_N": [ "@cite_0", "@cite_21" ], "mid": [ "2899113701", "1540541511" ], "abstract": [ "Cloud infrastructures can provide resource sharing between many applications and usually can meet the requirements of most of them. However, in order to enable an efficient usage of these resources, automatic orchestration is required. Commonly, automatic orchestration tools are based on the observability of the infrastructure itself, but that is not enough in some cases. Certain classes of applications have specific requirements that are difficult to meet, such as low latency, high bandwidth and high computational power. To properly meet these requirements, orchestration must be based on multilevel observability, which means collecting data from both the application and the infrastructure levels. Thus in this work we developed a platform aiming to show how multilevel observability can be implemented and how it can be used to improve automatic orchestration in cloud environments. As a case study, an application of computer vision and robotics, with very demanding requirements, was used to perform two experiments and illustrate the issues addressed in this paper. The results confirm that cloud orchestration can largely benefit from multilevel observability by allowing specific application requirements to be met, as well as improving the allocation of infrastructure resources.", "From the Publisher: The book provides an integrated treatment of continuous-time and discrete-time systems for two courses at postgraduate level, or one course at undergraduate and one course at postgraduate level. It covers mainly two areas of modern control theory, namely: system theory, and multivariable and optimal control. The coverage of the former is quite exhaustive while that of latter is adequate with significant provision of the necessary topics that enables a research student to comprehend various technical papers. The stress is on the interdisciplinary nature of the subject. Practical control problems from various engineering disciplines have been drawn to illustrate the potential concepts. Most of the theoretical results have been presented in a manner suitable for digital computer programming along with the necessary algorithms for numerical computations." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_12 investigate the capturing of service execution paths in distributed systems. Capturing the execution path is challenging because each request may traverse many components across several servers; the authors therefore introduce a generic end-to-end methodology that captures the complete path of a request. During our interviews we found a need for transparency of execution paths and, more generally, of the interdependencies between services.
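A minimal sketch of the underlying idea of end-to-end path capture: a request ID is attached at the entry point and propagated with every downstream call, so that per-service log records can later be stitched into one path. The service functions and log format below are hypothetical.

```python
# Sketch: propagating a request ID across (hypothetical) service calls so that
# per-service events can later be stitched into one end-to-end execution path.
import uuid

EVENTS: list[dict] = []   # stand-in for per-service logs collected centrally

def log(service: str, request_id: str, message: str) -> None:
    EVENTS.append({"request_id": request_id, "service": service, "message": message})

def billing_service(request_id: str) -> str:
    log("billing", request_id, "charge card")
    return "charged"

def order_service(request_id: str) -> str:
    log("orders", request_id, "create order")
    return billing_service(request_id)      # the ID travels with the downstream call

def handle_request() -> str:
    request_id = str(uuid.uuid4())          # assigned once at the entry point
    log("gateway", request_id, "incoming request")
    return order_service(request_id)

handle_request()
path = [e["service"] for e in EVENTS if e["request_id"] == EVENTS[0]["request_id"]]
print(" -> ".join(path))                    # gateway -> orders -> billing
```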
{ "cite_N": [ "@cite_12" ], "mid": [ "2899648294" ], "abstract": [ "Distributed platforms are widely deployed to provide services in various trades. With the increasing scale and complexity of these distributed platforms, it is becoming more and more challenging to understand and diagnose a service request’s processing in a distributed platform, as even one simple service request may traverse numerous heterogeneous components across multiple hosts. Thus, it is highly demanded to capture the complete end-to-end execution path of service requests among all involved components accurately. This paper presents REPTrace, a generic methodology for capturing the complete request execution path (REP) in a transparent fashion. We propose principles for identifying causal relationships among events for a comprehensive list of execution scenarios, and stitch all events to generate complete request execution paths based on library system calls tracing and network labelling. The experiments on different distributed platforms with different workloads show that REPTrace transparently captures the accurate request execution path with reasonable latency and negligible network overhead." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
The current trend towards more flexible and modular distributed systems is characterized by the use of independent services, such as microservices or web services. While systems composed of web services already provide better observability than monolithic systems, individual services can further improve observability and monitoring by exposing relevant information about their internal behaviour. @cite_2 address the challenge that web service definitions do not contain any information about the service's behaviour. They extend the web service definition with a description of the behaviour logic, based on a constraint-based model-driven testing approach. During our interviews we identified that the behaviour of third-party services in particular needs to be communicated more clearly in order to assess the impact on service levels and to detect and diagnose faults.
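A hedged sketch of the general idea of making service behaviour explicit through constraints: a declared pre-/postcondition pair is checked against the service implementation, which is roughly what constraint-derived conformance tests do. The operation, constraints, and checker are invented for illustration and are not the CxWSDL mechanism of @cite_2.

```python
# Sketch: explicit behaviour constraints for a (hypothetical) web service
# operation, checked against its implementation as a simple conformance test.
from typing import Callable

def transfer(balance: int, amount: int) -> int:
    """Toy service operation: withdraw `amount` from `balance`."""
    return balance - amount

# Declared behaviour constraints (assumed, illustration only).
precondition: Callable[[int, int], bool] = lambda balance, amount: 0 < amount <= balance
postcondition: Callable[[int, int, int], bool] = (
    lambda balance, amount, result: result == balance - amount and result >= 0
)

def conformance_test(cases: list[tuple[int, int]]) -> bool:
    """Run only inputs that satisfy the precondition; check every postcondition."""
    for balance, amount in cases:
        if precondition(balance, amount) and not postcondition(balance, amount, transfer(balance, amount)):
            return False
    return True

print(conformance_test([(100, 30), (50, 50), (10, 20)]))   # last case violates the precondition and is skipped
```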
{ "cite_N": [ "@cite_2" ], "mid": [ "2899645656" ], "abstract": [ "In the current Web Service Description Language (WSDL), only the interface information of a web service is provided without any indication on its behavior logic. Naturally, it is difficult for the service user and developer to achieve a shared understanding of the service behavior through such a description. A particular challenge is how to make explicit the various behavior assumptions and restrictions of a service (for the user), and make sure that the service implementation conforms to them (for the developer). In order to improve the behavior conformance of services, in this paper we propose a constraint-based model-driven testing approach for web services. In our approach, constraints are introduced in an extended WSDL, called CxWSDL, to formally and explicitly express the implicit restrictions and assumptions on the behavior of web services, and then the predefined constraints are used to derive test cases in a model-driven manner to test the service implementation’s conformance to these behavior constraints from the user’s perspective. We have conducted an empirical study with three real-life web services as subject programs, and the experimental results have shown that our approach can effectively validate the service’s conformance to the behavior constraints." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
Besides monitoring individual service calls, it is important to predict the runtime performance of distributed systems. @cite_4 show that two techniques, benchmarking and simulation, have shortcomings when used separately, and they introduce and validate a complementary approach: a process that maps the ontological concepts of a benchmark model onto those of a simulation model, which proves to be inexpensive, fast, and reliable. Similarly, Lin et al. @cite_17 propose a novel approach to root cause detection in microservice architectures that utilizes causal graphs. In our interviews we found that performance is often only known once a system goes live, because the interdependencies between services and their individual performance are not assessed beforehand.
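To make the causal-graph idea concrete, the sketch below walks a small assumed service dependency graph from the front end and ranks anomalous upstream services as root-cause candidates; the graph, anomaly scores, and ranking rule are illustrative and not the algorithm of @cite_17.

```python
# Sketch: ranking root-cause candidates on a service causal/dependency graph.
# Graph, anomaly scores, and the ranking heuristic are illustrative assumptions.
from collections import deque

# Edges point from a service to the services it calls (assumed topology).
calls = {
    "frontend": ["orders", "search"],
    "orders":   ["payments", "inventory"],
    "search":   [],
    "payments": [],
    "inventory": [],
}
anomaly_score = {"frontend": 0.9, "orders": 0.8, "search": 0.1,
                 "payments": 0.95, "inventory": 0.2}   # e.g., from latency monitoring

def root_cause_candidates(entry: str, threshold: float = 0.5) -> list[str]:
    """Follow anomalous paths downstream; the deepest anomalous services rank first."""
    seen, queue, anomalous = set(), deque([(entry, 0)]), []
    while queue:
        service, depth = queue.popleft()
        if service in seen or anomaly_score[service] < threshold:
            continue
        seen.add(service)
        anomalous.append((depth, service))
        for callee in calls[service]:
            queue.append((callee, depth + 1))
    return [s for _, s in sorted(anomalous, reverse=True)]

print(root_cause_candidates("frontend"))   # ['payments', 'orders', 'frontend']
```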
{ "cite_N": [ "@cite_4", "@cite_17" ], "mid": [ "2899973604", "2900100055" ], "abstract": [ "Estimating future runtime performance and cost is an essential task for Chief Information Officers in deciding whether to adopt a Cloud-based system. Benchmarking and simulation are two techniques that have long been practiced towards reliable estimation. Benchmarking involves (potentially) high cost and time consumption, but oftentimes yields more reliable estimates than simulation, while the simulation is much cheaper and faster than benchmarking, but less reliable. In order to deal with this dichotomy, we propose a complementary approach to estimating the performance of Cloud-based systems, whereby performance estimates can be obtained in a fast, inexpensive, and also reliable way. In this approach, the ontological concepts of a benchmark model, whose benchmark results have already been obtained, are mapped into those of a simulation model, while the mismatches and similarities between the two models are taken care of, through measures of similarity between the two. This ontology-driven construction of simulation models is intended not only to yield more reliable simulation results but also to help better explain why the simulation results may, or may not, be reliable. To validate our complementary approach, simulation models are constructed using CloudSim, and the simulation results are compared against the corresponding benchmark results, by using our prototype tool, collected from Amazon Web Service (AWS) and Google Compute Engine (GCE) by using the Yahoo! Cloud Serving Benchmark (YCSB) tool. These experiments show that the simulation results show about 90 accuracy with respect to the benchmark results, and additionally we feel we could better explain why this happens.", "Driven by the emerging business models (e.g., digital sales) and IT technologies (e.g., DevOps and Cloud computing), the architecture of software is shifting from monolithic to microservice rapidly. Benefit from microservice, software development, and delivery processes are accelerated significantly. However, along with many micro services running in the dynamic cloud environment with complex interactions, identifying and locating the abnormal services are extraordinarily difficult. This paper presents a novel system named “Microscope” to identify and locate the abnormal services with a ranked list of possible root causes in Micro-service environments. Without instrumenting the source code of micro services, Microscope can efficiently construct a service causal graph and infer the causes of performance problems in real time. Experimental evaluations in a micro-service benchmark environment show that Microscope achieves a good diagnosis result, i.e., 88 in precision and 80 in recall, which is higher than several state-of-the-art methods. Meanwhile, it has a good scalability to adapt to large-scale micro-service systems." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_8 addresses runtime monitoring in continuous deployment as a crucial task in software development, especially for rapidly changing software solutions. Since current runtime monitoring approaches fail to capture differences between previously and newly deployed versions at runtime, the authors present an approach that automatically discovers an execution behavior model by mining execution logs. Approaches like this, which gather information automatically instead of requiring manual definition, become crucial as the complexity and dynamics of distributed systems grow.
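To make the idea concrete, the following is a minimal sketch (illustrative only; the log format, event names, and model representation are assumptions, not the tooling of @cite_8) of mining an execution behavior model from ordered logs as a set of event transitions and diffing it between two deployed versions:

```python
from collections import defaultdict

def mine_behavior_model(log_lines):
    """Build a simple execution behavior model: counts of observed
    event-to-event transitions mined from an ordered execution log."""
    transitions = defaultdict(int)
    previous = None
    for line in log_lines:
        event = line.strip()
        if previous is not None:
            transitions[(previous, event)] += 1
        previous = event
    return transitions

def diff_models(old_model, new_model):
    """Return transitions that appear in only one of the two versions,
    i.e., candidate behavioral differences between deployments."""
    old_edges, new_edges = set(old_model), set(new_model)
    return {"removed": old_edges - new_edges, "added": new_edges - old_edges}

# Hypothetical logs from a previous and a newly deployed version.
v1_log = ["receive_request", "validate", "query_db", "render", "respond"]
v2_log = ["receive_request", "validate", "query_cache", "query_db", "render", "respond"]

print(diff_models(mine_behavior_model(v1_log), mine_behavior_model(v2_log)))
```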
{ "cite_N": [ "@cite_8" ], "mid": [ "2899695116" ], "abstract": [ "Continuous deployment techniques support rapid deployment of new software versions. Usually a new version is deployed on a limited scale, its behavior is monitored and compared against the previously deployed version and either the deployment of the new version is broadened, or one reverts to the previous version. The existing monitoring approaches, however, do not capture the differences in the execution behavior between the new and the previously deployed versions. We propose an approach to automatically discover execution behavior models for the deployed and the new version using the execution logs. Differences between the two models are identified and enriched such that spurious differences, e.g., due to logging statement modifications, are mitigated. The remaining differences are visualized as cohesive diff regions within the discovered behavior model, allowing one to effectively analyze them for, e.g., anomaly detection and release decision making. To evaluate the proposed approach, we conducted case study on Nutch, an open source application, and an industrial application. We discovered the execution behavior models for the two versions of applications and identified the diff regions between them. By analyzing the regions, we detected bugs introduced in the new versions of these applications. The bugs have been reported and later fixed by the developers, thus, confirming the effectiveness of our approach." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_14 conducted a survey of 62 multinational companies on public cloud adoption. While the use of public cloud infrastructure is on the rise, barriers such as security, regulatory compliance, and monitoring remain. Regarding monitoring, the survey showed that half of the companies rely solely on their cloud providers' monitoring dashboards. Participants noted a crucial need for quality-of-service monitoring integrated with their own monitoring tools.
{ "cite_N": [ "@cite_14" ], "mid": [ "2891766162" ], "abstract": [ "Cloud Computing is radically changing the way of providing and managing IT services. Big enterprises are continuously investing on Cloud technologies to streamline IT processes and substantially reduce the time to market of new services. The current Cloud service model enables companies, with a low initial investment, to easily test new services and technologies, like IoT and Big Data, on a \"ready to go\" virtualized infrastructure. However, large organizations are still facing multiple challenges in migrating business-critical services and sensitive data to Public Cloud environments. To investigate the current adoption of Public Cloud services, we interviewed IT managers and cloud architects of over sixty multinational organizations. The survey assesses both business and technical issues and requirements of current and future Cloud strategies. Our analysis shows that Cloud Service Providers (CSPs) are not yet perceived as fully able to address critical points in security, regulatory constraints and performance management. Hence, to control their public cloud services and to overcome such limitations, multinational organizations must adopt structured SLM approaches." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
Similarly, Knoche and Hasselbring @cite_1 conducted a survey of German experts on microservice adoption. Drivers for microservice adoption are scalability, maintainability, and development speed. Barriers to adoption, on the other hand, are mainly operational in nature: operations departments resist microservices because of the change in their tasks. On the technical level, running distributed applications that are prone to partial failures, and monitoring them, is a significant challenge.
{ "cite_N": [ "@cite_1" ], "mid": [ "2907869797" ], "abstract": [ "Microservices are an architectural style for software which currently receives a lot of attention in both industry and academia. Several companies employ microservice architectures with great success, and there is a wealth of blog posts praising their advantages. Especially so-called Internet-scale systems use them to satisfy their enormous scalability requirements and to rapidly deliver new features to their users. But microservices are not only popular with large, Internet-scale systems. Many traditional companies are also considering whether microservices are a viable option for their applications. However, these companies may have other motivations to employ microservices, and see other barriers which could prevent them from adopting microservices. Furthermore, these drivers and barriers possibly differ among industry sectors. In this article, we present the results of a survey on drivers and barriers for microservice adoption among professionals in Germany. In addition to overall drivers and barriers, we particularly focus on the use of microservices to modernize existing software, with special emphasis on implications for runtime performance and transactionality. We observe interesting differences between early adopters who emphasize scalability of their Internet-scale systems, compared to traditional companies which emphasize maintainability of their IT systems." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
Gamez- @cite_9 performed an analysis of the RESTful APIs of cloud providers, identifying requirements for API governance and noting a lack of standardization.
{ "cite_N": [ "@cite_9" ], "mid": [ "2767103530" ], "abstract": [ "As distribution models of information systems are moving to XaaS paradigms, microservices architectures are rapidly emerging, having the RESTful principles as the API model of choice. In this context, the term of API Economy is being used to describe the increasing movement of the industries in order to take advantage of exposing their APIs as part of their service offering and expand its business model." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
While not an empirical study, @cite_16 discuss monitoring challenges of holistic cloud applications. The scale and complexity of applications are identified as a main challenge. Related to observability, incomplete and inaccurate views of the overall system as well as fault localization are further identified challenges.
{ "cite_N": [ "@cite_16" ], "mid": [ "2289646227" ], "abstract": [ "Effective monitoring solutions are critical to the smooth running of enterprise systems. However, real-world constraints present several challenges in designing such solutions. With the increasing scale and complexity of today's enterprise IT systems and their increasing use for business-critical applications, traditional approaches to monitoring must be reconsidered. This article stresses the need for a paradigm-shift from manual intuition-led approaches to an automated analytics-driven approach to monitor the IT systems. The authors propose that analytics-led solutions can provide powerful levers to design monitoring and event management solutions for next-generation enterprise IT systems." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_10 give an overview of the state of the art in application performance monitoring (APM), describing typical capabilities and available APM software. They found APM to be a solution for monitoring and analyzing cloud environments, but note future challenges in root cause detection, setup effort, and interoperability. APM can no longer be understood as a purely technical topic but needs to incorporate business and organizational aspects as well.
{ "cite_N": [ "@cite_10" ], "mid": [ "2606883211" ], "abstract": [ "The performance of application systems has a direct impact on business metrics. For example, companies lose customers and revenue in case of poor performance such as high response times. Application performance management (APM) aims to provide the required processes and tools to have a continuous and up-to-date picture of relevant performance measures during operations, as well as to support the detection and resolution of performance-related incidents. In this tutorial paper, we provide an overview of the state of the art in APM in industrial practice and academic research, highlight current challenges, and outline future research directions." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_3 provide insight into commercial cloud monitoring tools, showing state-of-the-art features, identifying shortcomings and, derived from these, future areas of research. Information aggregation across different layers of abstraction, a broad range of measurable metrics, and extensibility are seen as critical success factors. The tools were found to lack standardization regarding monitoring processes and metrics.
{ "cite_N": [ "@cite_3" ], "mid": [ "2107557955" ], "abstract": [ "Cloud monitoring activity involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VM, storage, network, appliances, etc.), the physical resources they share, the applications running on them and data hosted on them. Applications and resources configuration in cloud computing environment is quite challenging considering a large number of heterogeneous cloud resources. Further, considering the fact that at given point of time, there may be need to change cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) for meet application QoS requirements under uncertainties (resource failure, resource overload, workload spike, etc.). Hence, cloud monitoring tools can assist a cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting the service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how the aforementioned research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools." ] }
1907.12253
2965663253
One major challenge in 3D reconstruction is to infer the complete shape geometry from partial foreground occlusions. In this paper, we propose a method to reconstruct the complete 3D shape of an object from a single RGB image, with robustness to occlusion. Given the image and a silhouette of the visible region, our approach completes the silhouette of the occluded region and then generates a point cloud. We show improvements for reconstruction of non-occluded and partially occluded objects by providing the predicted complete silhouette as guidance. We also improve state-of-the-art for 3D shape prediction with a 2D reprojection loss from multiple synthetic views and a surface-based smoothing and refinement step. Experiments demonstrate the efficacy of our approach both quantitatively and qualitatively on synthetic and real scene datasets.
Most of these approaches are applied to non-occluded objects with clean backgrounds, which may prevent their application to natural images. Sun et al. @cite_41 conduct experiments on real images from Pix3D, a large-scale dataset with aligned ground-truth 3D shapes, but do not consider the problem of occlusion. We are concerned with predicting the shape of objects in natural scenes, which may be partly occluded. Our approach improves the state of the art for object point set generation and is extended to reconstruct beyond occlusion with the guidance of completed silhouettes. Our silhouette guidance is closely related to the human depth estimation by Rematas et al. @cite_29 . However, Rematas et al. use the visible silhouette (semantic segmentation) rather than a complete silhouette, making it hard to predict overlapped (occluded) regions. In contrast, our approach conditions on the predicted silhouette to resolve occlusion ambiguity and is able to predict the complete 3D shape rather than 2.5D depth points.
{ "cite_N": [ "@cite_41", "@cite_29" ], "mid": [ "2962988048", "2798505423" ], "abstract": [ "We study 3D shape modeling from a single image and make contributions to it in three aspects. First, we present Pix3D, a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc. Building such a large-scale dataset, however, is highly challenging; existing datasets either contain only synthetic data, or lack precise alignment between 2D images and 3D shapes, or only have a small number of images. Second, we calibrate the evaluation criteria for 3D shape reconstruction through behavioral studies, and use them to objectively and systematically benchmark cutting-edge reconstruction algorithms on Pix3D. Third, we design a novel model that simultaneously performs 3D reconstruction and pose estimation; our multi-task learning approach achieves state-of-the-art performance on both tasks.", "We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state of the art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks, and real YouTube soccer footage." ] }
1907.12253
2965663253
One major challenge in 3D reconstruction is to infer the complete shape geometry from partial foreground occlusions. In this paper, we propose a method to reconstruct the complete 3D shape of an object from a single RGB image, with robustness to occlusion. Given the image and a silhouette of the visible region, our approach completes the silhouette of the occluded region and then generates a point cloud. We show improvements for reconstruction of non-occluded and partially occluded objects by providing the predicted complete silhouette as guidance. We also improve state-of-the-art for 3D shape prediction with a 2D reprojection loss from multiple synthetic views and a surface-based smoothing and refinement step. Experiments demonstrate the efficacy of our approach both quantitatively and qualitatively on synthetic and real scene datasets.
Occlusions have long been an obstacle in multi-view reconstruction. Solutions have been proposed to recover portions of surfaces from single views with synthetic apertures @cite_6 @cite_31 , or to otherwise improve the robustness of matching and completion functions from multiple views @cite_38 @cite_23 @cite_13 . Other works decompose a scene into layered depth maps from RGBD images @cite_27 or video @cite_9 and then seek to complete the occluded portions of the maps, but errors in the layered segmentation can severely degrade the recovery of the occluded region. Learning-based approaches @cite_32 @cite_25 @cite_15 have posed recovery from occlusion as a 2D semantic segmentation completion task. Ehsani et al. @cite_10 propose to complete the silhouette and texture of an occluded object. Our silhouette completion network is most similar to that of Ehsani et al., but we ease the task by predicting only the complete silhouette rather than the full texture. We demonstrate better performance with our up-sampling based convolutional decoder instead of the fully connected layers used by Ehsani et al. Moreover, we go further and try to predict the complete 3D shape of the occluded object.
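As an illustration of the design choice mentioned above, the following is a minimal sketch of an up-sampling based convolutional decoder for silhouette completion (layer sizes and channel counts are assumptions for illustration, not the architecture of the cited works):

```python
import torch
import torch.nn as nn

class SilhouetteCompletionDecoder(nn.Module):
    """Illustrative up-sampling convolutional decoder: maps an encoded
    feature map of the image and visible silhouette to a full-resolution
    probability map of the complete (amodal) silhouette."""
    def __init__(self, in_channels=256):
        super().__init__()
        def up_block(cin, cout):
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
        self.decode = nn.Sequential(
            up_block(in_channels, 128),
            up_block(128, 64),
            up_block(64, 32),
            nn.Conv2d(32, 1, kernel_size=1),
        )

    def forward(self, features):
        # Per-pixel probability of belonging to the completed silhouette,
        # including the occluded regions.
        return torch.sigmoid(self.decode(features))

# Example: an 8x8 feature map is decoded to a 64x64 silhouette mask.
decoder = SilhouetteCompletionDecoder()
mask = decoder(torch.randn(1, 256, 8, 8))
print(mask.shape)  # torch.Size([1, 1, 64, 64])
```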
{ "cite_N": [ "@cite_38", "@cite_15", "@cite_10", "@cite_9", "@cite_32", "@cite_6", "@cite_27", "@cite_23", "@cite_31", "@cite_13", "@cite_25" ], "mid": [ "2963660453", "", "2604176797", "2119781527", "2042754179", "2136733503", "2464328412", "", "2120286708", "", "" ], "abstract": [ "Common visual recognition tasks such as classification, object detection, and semantic segmentation are rapidly reaching maturity, and given the recent rate of progress, it is not unreasonable to conjecture that techniques for many of these problems will approach human levels of performance in the next few years. In this paper we look to the future: what is the next frontier in visual recognition? We offer one possible answer to this question. We propose a detailed image annotation that captures information beyond the visible pixels and requires complex reasoning about full scene structure. Specifically, we create an amodal segmentation of each image: the full extent of each region is marked, not just the visible pixels. Annotators outline and name all salient regions in the image and specify a partial depth order. The result is a rich scene structure, including visible and occluded portions of each region, figure-ground edge information, semantic labels, and object overlap. We create two datasets for semantic amodal segmentation. First, we label 500 images in the BSDS dataset with multiple annotators per image, allowing us to study the statistics of human annotations. We show that the proposed full scene annotation is surprisingly consistent between annotators, including for regions and edges. Second, we annotate 5000 images from COCO. This larger dataset allows us to explore a number of algorithmic ideas for amodal segmentation and depth ordering. We introduce novel metrics for these tasks, and along with our strong baselines, define concrete new challenges for the community.", "", "Objects often occlude each other in scenes; Inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering.", "The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. 
Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation.In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.", "Scene understanding requires reasoning about both what we can see and what is occluded. We offer a simple and general approach to infer labels of occluded background regions. Our approach incorporates estimates of visible surrounding background, detected objects, and shape priors from transferred training regions. We demonstrate the ability to infer the labels of occluded background regions in three datasets: the outdoor StreetScenes dataset, IndoorScene dataset and SUN09 dataset, all using the same approach. Furthermore, the proposed approach is extended to 3D space to find layered support surfaces in RGB-Depth scenes. Our experiments and analysis show that our method outperforms competent baselines.", "Most algorithms for 3D reconstruction from images use cost functions based on SSD, which assume that the surfaces being reconstructed are visible to all cameras. This makes it difficult to reconstruct objects which are partially occluded. Recently, researchers working with large camera arrays have shown it is possible to \"see through\" occlusions using a technique called synthetic aperture focusing. This suggests that we can design alternative cost functions that are robust to occlusions using synthetic apertures. Our paper explores this design space. We compare classical shape from stereo with shape from synthetic aperture focus. We also describe two variants of multi-view stereo based on color medians and entropy that increase robustness to occlusions. We present an experimental comparison of these cost functions on complex light fields, measuring their accuracy against the amount of occlusion.", "This paper addresses the challenging problem of perceiving the hidden or occluded geometry of the scene depicted in any given RGBD image. Unlike other image labeling problems such as image segmentation where each pixel needs to be assigned a single label, layered decomposition requires us to assign multiple labels to pixels. We propose a novel \"Occlusion-CRF\" model that allows for the integration of sophisticated priors to regularize the solution space and enables the automatic inference of the layer decomposition. We use a generalization of the Fusion Move algorithm to perform Maximum a Posterior (MAP) inference on the model that can handle the large label sets needed to represent multiple surface assignments to each pixel. We have evaluated the proposed model and the inference algorithm on many RGBD images of cluttered indoor scenes. Our experiments show that not only is our model able to explain occlusions but it also enables automatic inpainting of occluded invisible surfaces.", "", "We present a novel algorithm to reconstruct the geometry and photometry of a scene with occlusions from a collection of defocused images. The presence of a finite lens aperture allows us to recover portions of the scene that would be occluded in a pin-hole projection, thus \"uncovering\" the occlusion. 
We estimate the shape of each object (a surface, including the occluding boundaries), and its radiance (a positive function defined on the surface, including portions that are occluded by other objects).", "", "" ] }
1907.12398
2965570822
User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes.
The last couple of decades has seen a plethora of proposals for user authentication. In general, existing schemes suffer from at least one of the following drawbacks: (a) they require a dedicated device, (b) they are proprietary, (c) they involve a shared secret, and/or (d) they still require a traditional password. Herein we only discuss a small subset of schemes and refer to the paper by @cite_9 for an extensive evaluation of related work. Using their framework, we evaluated ZeroTwo and present the results in Table .
{ "cite_N": [ "@cite_9" ], "mid": [ "2030112111" ], "abstract": [ "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals." ] }
1907.12398
2965570822
User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes.
Bonneau @cite_17 had previously proposed a password-based authentication protocol, designed to avoid revealing the password to the server (using JavaScript), which requires neither a software update on the client side nor a separate authentication device. @cite_8 similarly focused on the restrictions imposed by legacy systems to address the issue of weak passwords.
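For illustration, the following is a minimal sketch of the underlying idea of never sending the plaintext password to the server: the client derives a key from the password and answers a server challenge with a message authentication code. This is a toy challenge-response example, not Bonneau's protocol and not an augmented PAKE such as SRP or OPAQUE; the parameter choices are assumptions:

```python
import hashlib
import hmac
import os

def derive_client_key(password: str, salt: bytes) -> bytes:
    # Slow password-based key derivation on the client side.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def answer_challenge(client_key: bytes, challenge: bytes) -> bytes:
    # The client proves knowledge of the password without transmitting it.
    return hmac.new(client_key, challenge, hashlib.sha256).digest()

# Registration: the server stores only the salt and a verifier.
# (Storing the verifier like this still lets a compromised server impersonate
# the user, which is exactly what augmented PAKEs are designed to prevent.)
salt = os.urandom(16)
verifier = derive_client_key("correct horse battery staple", salt)

# Login: the server sends a fresh challenge; the client answers with a MAC.
challenge = os.urandom(32)
response = answer_challenge(derive_client_key("correct horse battery staple", salt), challenge)
assert hmac.compare_digest(response, hmac.new(verifier, challenge, hashlib.sha256).digest())
```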
{ "cite_N": [ "@cite_8", "@cite_17" ], "mid": [ "1531500955", "12767681" ], "abstract": [ "We explore the extent to which we can address three issues with passwords today: the weakness of user-chosen passwords, reuse of passwords across security domains, and the revocation of credentials. We do so while restricting ourselves to changing the password verification function on the server, introducing the use of existing key-servers, and providing users with a password management tool. Our aim is to improve the security and revocation of authentication actions with devices and end-points, while minimising changes which reduce ease of use and ease of deployment. We achieve this using one time tokens derived using public-key cryptography and propose two protocols for use with and without an online rendezvous point.", "We outline an end-to-end password authentication protocol for the web designed to be stateless and as secure as possible given legacy limitations of the web browser and performance constraints of commercial web servers. Our scheme is secure against very strong but passive attackers able to observe both network traffic and the server's database state. At the same time, our scheme is simple for web servers to implement and requires no changes to modern, HTML5-compliant browsers. We assume TLS is available for initial login and no other public-key cryptographic operations, but successfully defend against cookie-stealing and cookie-forging attackers and provide strong resistance to password guessing attacks." ] }
1907.12398
2965570822
User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes.
Sound-Proof @cite_10 is a recent system that relies on sound for the smartphone and browser to communicate. One of the main goals of Sound-Proof is to provide a seamless experience to users, i.e., the phone need not even be handled for the authentication process to complete. However, the user still has to type a password in the browser, which comes with the issues we discussed previously. Moreover, the complete seamlessness of Sound-Proof is not compatible with our view that certain actions should be explicitly authorized on a trusted device.
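As a toy illustration of proximity verification through ambient noise (not Sound-Proof's actual similarity score or preprocessing), two recordings can be compared by their peak normalized cross-correlation:

```python
import numpy as np

def max_normalized_xcorr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak normalized cross-correlation of two equally long recordings;
    values near 1 indicate the microphones heard the same ambient noise."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(corr.max())

# Toy check: the same noise with a small delay and extra sensor noise.
rng = np.random.default_rng(0)
ambient = rng.standard_normal(8000)
phone = ambient + 0.1 * rng.standard_normal(8000)
browser = np.roll(ambient, 5) + 0.1 * rng.standard_normal(8000)
print(max_normalized_xcorr(phone, browser))                     # close to 1
print(max_normalized_xcorr(phone, rng.standard_normal(8000)))   # close to 0
```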
{ "cite_N": [ "@cite_10" ], "mid": [ "2953351357" ], "abstract": [ "Two-factor authentication protects online accounts even if passwords are leaked. Most users, however, prefer password-only authentication. One reason why two-factor authentication is so unpopular is the extra steps that the user must complete in order to log in. Currently deployed two-factor authentication mechanisms require the user to interact with his phone to, for example, copy a verification code to the browser. Two-factor authentication schemes that eliminate user-phone interaction exist, but require additional software to be deployed. In this paper we propose Sound-Proof, a usable and deployable two-factor authentication mechanism. Sound-Proof does not require interaction between the user and his phone. In Sound-Proof the second authentication factor is the proximity of the user's phone to the device being used to log in. The proximity of the two devices is verified by comparing the ambient noise recorded by their microphones. Audio recording and comparison are transparent to the user, so that the user experience is similar to the one of password-only authentication. Sound-Proof can be easily deployed as it works with current phones and major browsers without plugins. We build a prototype for both Android and iOS. We provide empirical evidence that ambient noise is a robust discriminant to determine the proximity of two devices both indoors and outdoors, and even if the phone is in a pocket or purse. We conduct a user study designed to compare the perceived usability of Sound-Proof with Google 2-Step Verification. Participants ranked Sound-Proof as more usable and the majority would be willing to use Sound-Proof even for scenarios in which two-factor authentication is optional." ] }
1907.12353
2966108228
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available.
Cascade approaches have been employed in a variety of computer vision domains; e.g., cascaded pose regression progressively refines a pose estimate learned from supervised training data @cite_32 , and cascaded classifiers speed up the process of object detection @cite_37 .
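As a generic illustration of the cascade idea (not any specific published model), each stage predicts a correction conditioned on the features and on the current estimate:

```python
import numpy as np

def cascaded_refinement(image_features, initial_estimate, regressors):
    """Generic cascade: each stage refines the previous stage's output."""
    estimate = initial_estimate
    for regress in regressors:
        estimate = estimate + regress(image_features, estimate)
    return estimate

# Toy example: two stages that each move the estimate halfway to a target.
target = np.array([2.0, -1.0])
stages = [lambda feats, est: 0.5 * (target - est)] * 2
print(cascaded_refinement(None, np.zeros(2), stages))  # -> [1.5, -0.75]
```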
{ "cite_N": [ "@cite_37", "@cite_32" ], "mid": [ "2036989445", "2136000821" ], "abstract": [ "We describe a general method for building cascade classifiers from part-based deformable models such as pictorial structures. We focus primarily on the case of star-structured models and show how a simple algorithm based on partial hypothesis pruning can speed up object detection by more than one order of magnitude without sacrificing detection accuracy. In our algorithm, partial hypotheses are pruned with a sequence of thresholds. In analogy to probably approximately correct (PAC) learning, we introduce the notion of probably approximately admissible (PAA) thresholds. Such thresholds provide theoretical guarantees on the performance of the cascade method and can be computed from a small sample of positive examples. Finally, we outline a cascade detection algorithm for a general class of models defined by a grammar formalism. This class includes not only tree-structured pictorial structures but also richer models that can represent each part recursively as a mixture of other parts.", "We present a fast and accurate algorithm for computing the 2D pose of objects in images called cascaded pose regression (CPR). CPR progressively refines a loosely specified initial guess, where each refinement is carried out by a different regressor. Each regressor performs simple image measurements that are dependent on the output of the previous regressors; the entire system is automatically learned from human annotated training examples. CPR is not restricted to rigid transformations: ‘pose’ is any parameterized variation of the object's appearance such as the degrees of freedom of deformable and articulated objects. We compare CPR against both standard regression techniques and human performance (computed from redundant human annotations). Experiments on three diverse datasets (mice, faces, fish) suggest CPR is fast (2–3ms per pose estimate), accurate (approaching human performance), and easy to train from small amounts of labeled data." ] }
1907.12353
2966108228
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available.
Deep learning also benefits from cascade architectures. For example, the deep deformation network @cite_4 cascades two stages and predicts a deformation for landmark localization. Other applications include object detection @cite_9 , semantic segmentation @cite_47 , and image super-resolution @cite_51 . There are also several works dedicated to medical images, e.g., 3D image reconstruction for MRIs @cite_42 @cite_10 , liver segmentation @cite_44 , and mitosis detection @cite_45 . Note that those works usually propose shallow, non-recursive network cascades.
{ "cite_N": [ "@cite_4", "@cite_10", "@cite_9", "@cite_42", "@cite_44", "@cite_45", "@cite_47", "@cite_51" ], "mid": [ "2952074561", "2594014149", "2964241181", "2750807812", "2753924563", "2560920277", "2216125271", "135113724" ], "abstract": [ "We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects. The hallmarks of DDN are its incorporation of geometric constraints within a convolutional neural network (CNN) framework, ease and efficiency of training, as well as generality of application. A novel shape basis network (SBN) forms the first stage of the cascade, whereby landmarks are initialized by combining the benefits of CNN features and a learned shape basis to reduce the complexity of the highly nonlinear pose manifold. In the second stage, a point transformer network (PTN) estimates local deformation parameterized as thin-plate spline transformation for a finer refinement. Our framework does not incorporate either handcrafted features or part connectivity, which enables an end-to-end shape prediction pipeline during both training and testing. In contrast to prior cascaded networks for landmark localization that learn a mapping from feature space to landmark locations, we demonstrate that the regularization induced through geometric priors in the DDN makes it easier to train, yet produces superior results. The efficacy and generality of the architecture is demonstrated through state-of-the-art performances on several benchmarks for multiple tasks such as facial landmark localization, human body pose estimation and bird part localization.", "Inspired by recent advances in deep learning, we propose a framework for reconstructing dynamic sequences of 2-D cardiac magnetic resonance (MR) images from undersampled data using a deep cascade of convolutional neural networks (CNNs) to accelerate the data acquisition process. In particular, we address the case where data are acquired using aggressive Cartesian undersampling. First, we show that when each 2-D image frame is reconstructed independently, the proposed method outperforms state-of-the-art 2-D compressed sensing approaches, such as dictionary learning-based MR image reconstruction, in terms of reconstruction error and reconstruction speed. Second, when reconstructing the frames of the sequences jointly, we demonstrate that CNNs can learn spatio-temporal correlations efficiently by combining convolution and data sharing approaches. We show that the proposed method consistently outperforms state-of-the-art methods and is capable of preserving anatomical structure more faithfully up to 11-fold undersampling. Moreover, reconstruction is very fast: each complete dynamic sequence can be reconstructed in less than 10 s and, for the 2-D case, each image frame can be reconstructed in 23 ms, enabling real-time applications.", "In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. 
It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https: github.com zhaoweicai cascade-rcnn.", "7T MRI scanner provides MR images with higher resolution and better contrast than 3T MR scanners. This helps many medical analysis tasks, including tissue segmentation. However, currently there is a very limited number of 7T MRI scanners worldwide. This motivates us to propose a novel image post-processing framework that can jointly generate high-resolution 7T-like images and their corresponding high-quality 7T-like tissue segmentation maps, solely from the routine 3T MR images. Our proposed framework comprises two parallel components, namely (1) reconstruction and (2) segmentation. The reconstruction component includes the multi-step cascaded convolutional neural networks (CNNs) that map the input 3T MR image to a 7T-like MR image, in terms of both resolution and contrast. Similarly, the segmentation component involves another paralleled cascaded CNNs, with a different architecture, to generate high-quality segmentation maps. These cascaded feedbacks between the two designed paralleled CNNs allow both tasks to mutually benefit from each another when learning the respective reconstruction and segmentation mappings. For evaluation, we have tested our framework on 15 subjects (with paired 3T and 7T images) using a leave-one-out cross-validation. The experimental results show that our estimated 7T-like images have richer anatomical details and better segmentation results, compared to the 3T MRI. Furthermore, our method also achieved better results in both reconstruction and segmentation tasks, compared to the state-of-the-art methods.", "Semantic segmentation has been popularly addressed using Fully convolutional networks (FCN) (e.g. U-Net) with impressive results and has been the forerunner in recent segmentation challenges. However, FCN approaches do not necessarily incorporate local geometry such as smoothness and shape, whereas traditional image analysis techniques have benefitted greatly by them in solving segmentation and tracking problems. In this work, we address the problem of incorporating shape priors within the FCN segmentation framework. We demonstrate the utility of such a shape prior in robust handling of scenarios such as loss of contrast and artifacts. Our experiments show ( 5 ) improvement over U-Net for the challenging problem of ultrasound kidney segmentation.", "The number of mitoses per tissue area gives an important aggressiveness indication of the invasive breast carcinoma. However, automatic mitosis detection in histology images remains a challenging problem. 
Traditional methods either employ hand-crafted features to discriminate mitoses from other cells or construct a pixel-wise classifier to label every pixel in a sliding window way. While the former suffers from the large shape variation of mitoses and the existence of many mimics with similar appearance, the slow speed of the later prohibits its use in clinical practice. In order to overcome these shortcomings, we propose a fast and accurate method to detect mitosis by designing a novel deep cascaded convolutional neural network, which is composed of two components. First, by leveraging the fully convolutional neural network, we propose a coarse retrieval model to identify and locate the candidates of mitosis while preserving a high sensitivity. Based on these candidates, a fine discrimination model utilizing knowledge transferred from cross-domain is developed to further single out mitoses from hard mimics. Our approach outperformed other methods by a large margin in 2014 ICPR MITOS-ATYPIA challenge in terms of detection accuracy. When compared with the state-of-the-art methods on the 2012 ICPR MITOSIS data (a smaller and less challenging dataset), our method achieved comparable or better results with a roughly 60 times faster speed.", "Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multitask Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place.", "In this paper, we propose a new model called deep network cascade (DNC) to gradually upscale low-resolution images layer by layer, each layer with a small scale factor. DNC is a cascade of multiple stacked collaborative local auto-encoders. In each layer of the cascade, non-local self-similarity search is first performed to enhance high-frequency texture details of the partitioned patches in the input image. The enhanced image patches are then input into a collaborative local auto-encoder (CLA) to suppress the noises as well as collaborate the compatibility of the overlapping patches. By closing the loop on non-local self-similarity search and CLA in a cascade layer, we can refine the super-resolution result, which is further fed into next layer until the required image scale. Experiments on image super-resolution demonstrate that the proposed DNC can gradually upscale a low-resolution image with the increase of network layers and achieve more promising results in visual quality as well as quantitative performance." ] }
1907.12353
2966108228
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available.
With respect to registration, traditional algorithms commonly optimize an energy function in an iterative fashion @cite_39 @cite_53 @cite_23 @cite_2 @cite_33 @cite_13 @cite_54 @cite_56 . These methods are also recursive in nature, i.e., at each iteration a similar alignment is computed with respect to the current warped image. Iterative Closest Point (ICP) is an iterative, recursive approach for registering point clouds @cite_18 @cite_31 , where the closest pairs of points are matched at each iteration and a rigid transformation that minimizes their distance is solved for. In deformable image registration, most traditional algorithms work in essentially the same way, though in a much more complex fashion. Standard symmetric normalization (SyN) @cite_23 maximizes the cross-correlation within the space of diffeomorphic maps during iterations. Optimizing free-form deformations parameterized by B-splines @cite_17 is another standard approach.
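To make the recursive structure of these classical methods concrete, the sketch below implements a toy ICP loop in Python (NumPy/SciPy assumed): at every iteration the closest point pairs are re-matched and a closed-form rigid transform is re-solved for the already-warped points. This is an illustrative simplification, not the implementation of any of the cited registration packages.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(moving, fixed, n_iters=50, tol=1e-6):
    """Toy ICP: recursively re-match closest points and re-solve a rigid transform."""
    tree = cKDTree(fixed)
    current = moving.copy()
    prev_err = np.inf
    for _ in range(n_iters):
        dists, idx = tree.query(current)            # match each point to its closest fixed point
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t                 # warp the current (already warped) points again
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return current
```

Deformable methods such as SyN follow the same iterate-and-rewarp pattern, but replace the rigid transform with a regularized dense (diffeomorphic) deformation.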
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_53", "@cite_54", "@cite_39", "@cite_56", "@cite_23", "@cite_2", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "2049981393", "2170167891", "", "2118961645", "2155298532", "1998710995", "", "2007153649", "2118104180", "2065646479", "2113576511" ], "abstract": [ "The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces. >", "This paper examine the Euler-Lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in (1998) and Trouve (1995) in which two images I 0, I 1 are given and connected via the diffeomorphic change of coordinates I 0???1=I 1 where ?=?1 is the end point at t= 1 of curve ? t , t?[0, 1] satisfying .? t =v t (? t ), t? [0,1] with ?0=id. The variational problem takes the form @math where ?v t? V is an appropriate Sobolev norm on the velocity field v t(·), and the second term enforces matching of the images with ?·?L 2 representing the squared-error norm. In this paper we derive the Euler-Lagrange equations characterizing the minimizing vector fields v t, t?[0, 1] assuming sufficient smoothness of the norm to guarantee existence of solutions in the space of diffeomorphisms. We describe the implementation of the Euler equations using semi-lagrangian method of computing particle flows and show the solutions for various examples. As well, we compute the metric distance on several anatomical configurations as measured by ?0 1?v t? V dt on the geodesic shortest paths.", "", "In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. 
In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.", "Abstract This paper describes DARTEL, which is an algorithm for diffeomorphic image registration. It is implemented for both 2D and 3D image registration and has been formulated to include an option for estimating inverse consistent deformations. Nonlinear registration is considered as a local optimisation problem, which is solved using a Levenberg–Marquardt strategy. The necessary matrix solutions are obtained in reasonable time using a multigrid method. A constant Eulerian velocity framework is used, which allows a rapid scaling and squaring method to be used in the computations. DARTEL has been applied to intersubject registration of 471 whole brain images, and the resulting deformations were evaluated in terms of how well they encode the shape information necessary to separate male and female subjects and to predict the ages of the subjects.", "A new approach is presented for elastic registration of medical images, and is applied to magnetic resonance images of the brain. Experimental results demonstrate very high accuracy in superposition of images from different subjects. There are two major novelties in the proposed algorithm. First, it uses an attribute vector, i.e., a set of geometric moment invariants (GMIs) that are defined on each voxel in an image and are calculated from the tissue maps, to reflect the underlying anatomy at different scales. The attribute vector, if rich enough, can distinguish between different parts of an image, which helps establish anatomical correspondences in the deformation procedure; it also helps reduce local minima, by reducing ambiguity in potential matches. This is a fundamental deviation of our method, referred to as the hierarchical attribute matching mechanism for elastic registration (HAMMER), from other volumetric deformation methods, which are typically based on maximizing image similarity. Second, in order to avoid being trapped by local minima, i.e., suboptimal poor matches, HAMMER uses a successive approximation of the energy function being optimized by lower dimensional smooth energy functions, which are constructed to have significantly fewer local minima. This is achieved by hierarchically selecting the driving features that have distinct attribute vectors, thus, drastically reducing ambiguity in finding correspondence. A number of experiments demonstrate that the proposed algorithm results in accurate superposition of image data from individuals with significant anatomical differences.", "", "Matching of locally variant data to an explicit 3-dimensional pictorial model is developed for X-ray computed tomography scans of the human brain, where the model is a voxel representation of an anatomical human brain atlas. The matching process is 3-dimensional without any preference given to the slicing plane. After global alignment the brain atlas is deformed like a piece of rubber, without tearing or folding. 
Deformation proceeds step-by-step in a coarse-to-fine strategy, increasing the local similarity and global coherence. The assumption underlying this approach is that all normal brains, at least at a certain level of representation, have the same topological structure, but may differ in shape details. Results show that we can account for these differences.", "A heuristic method has been developed for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision, which is required for environment modeling (e.g., building a Digital Elevation Map). Objects are represented by a set of 3-D points, which are considered as the samples of a surface. No constraint is imposed on the form of the objects. The proposed algorithm is based on iteratively matching points in one set to the closest points in the other. A statistical method based on the distance distribution is used to deal with outliers, occlusion, appearance and disappearance, which allows us to do subset-subset matching. A least-squares technique is used to estimate 3-D motion from the point correspondences, which reduces the average distance between points in the two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate.", "The development of algorithms for the spatial transformation and registration of tomographic brain images is a key issue in several clinical and basic science medical applications, including computer-aided neurosurgery, functional image analysis, and morphometrics. This paper describes a technique for the spatial transformation of brain images, which is based on elastically deformable models. A deformable surface algorithm is used to find a parametric representation of the outer cortical surface and then to define a map between corresponding cortical regions in two brain images. Based on the resulting map, a three-dimensional elastic warping transformation is then determined, which brings two images into register. This transformation models images as inhomogeneous elastic objects which are deformed into registration with each other by external force fields. The elastic properties of the images can vary from one region to the other, allowing more variable brain regions, such as the ventricles, to deform more freely than less variable ones. Finally, the framework of prestrained elasticity is used to model structural irregularities, and in particular the ventricular expansion occurring with aging or diseases, and the growth of tumors. Performance measurements are obtained using magnetic resonance images.", "In this paper the authors present a new approach for the nonrigid registration of contrast-enhanced breast MRI. A hierarchical transformation model of the motion of the breast has been developed. The global motion of the breast is modeled by an affine transformation while the local breast motion is described by a free-form deformation (FFD) based on B-splines. 
Normalized mutual information is used as a voxel-based similarity measure which is insensitive to intensity changes as a result of the contrast enhancement. Registration is achieved by minimizing a cost function, which represents a combination of the cost associated with the smoothness of the transformation and the cost associated with the image similarity. The algorithm has been applied to the fully automated registration of three-dimensional (3-D) breast MRI in volunteers and patients. In particular, the authors have compared the results of the proposed nonrigid registration algorithm to those obtained using rigid and affine registration techniques. The results clearly indicate that the nonrigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms." ] }
1907.12353
2966108228
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in that every cascade learns to perform a progressive deformation of the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, a cascade can be applied iteratively multiple times during testing, which achieves a better fit for each image pair. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. Performance keeps increasing as more cascades are trained, and no limit has been observed. Our code will be made publicly available.
Learning-based methods have been presented recently. Supervised methods require a substantial amount of labeled data, which can hardly be obtained at the scale realistic applications demand, resulting in limited performance @cite_55 @cite_15 @cite_38 @cite_57 . Unsupervised methods have been proposed to address this problem. Several initial works show the feasibility of unsupervised learning @cite_35 @cite_43 @cite_1 @cite_49 , among which DLIR @cite_43 performs on par with the B-spline method implemented in SimpleElastix @cite_29 (a multi-language extension of Elastix @cite_0 , which is selected as one of our baseline methods). VoxelMorph @cite_3 and VTN @cite_16 achieve better performance by predicting a dense flow field using deconvolutional layers @cite_21 , whereas DLIR only predicts a sparse displacement grid interpolated by a third-order B-spline kernel. VoxelMorph is evaluated only on brain MRI datasets @cite_3 @cite_41 , and later work @cite_16 shows its deficiency on other datasets such as liver CT scans. Additionally, VTN proposes an initial convolutional network which performs an affine transformation before the deformation fields are predicted, leading to a truly end-to-end framework that replaces the traditional affine stage.
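The unsupervised formulation shared by DLIR, VoxelMorph, and VTN boils down to warping the moving image with a predicted dense displacement field and scoring the result against the fixed image, with no ground-truth deformations involved. A minimal 2-D sketch of that warp-and-score step is given below (NumPy/SciPy; the MSE similarity and the simple gradient penalty are illustrative choices, whereas the cited methods use their own networks and losses such as local cross-correlation).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_2d(moving, flow):
    """Warp a 2-D image by a dense displacement field.

    moving: (H, W) array; flow: (2, H, W) array of per-pixel (dy, dx) displacements.
    Each output pixel p samples `moving` at p + flow[:, p] with bilinear interpolation,
    which is the same resampling step a spatial-transformer layer performs.
    """
    h, w = moving.shape
    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([grid_y + flow[0], grid_x + flow[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

def unsupervised_loss(fixed, moving, flow, smooth_weight=0.01):
    """Image-similarity term plus a smoothness penalty on the flow; no ground-truth field needed."""
    warped = warp_2d(moving, flow)
    similarity = np.mean((warped - fixed) ** 2)
    dy = np.diff(flow, axis=1) ** 2          # finite differences along height
    dx = np.diff(flow, axis=2) ** 2          # finite differences along width
    smoothness = dy.mean() + dx.mean()
    return similarity + smooth_weight * smoothness
```

In the learning-based methods, a network predicts `flow` from the image pair and this kind of loss is backpropagated through a differentiable warp; cascaded variants simply repeat the warp with a new predicted field at each stage.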
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_41", "@cite_55", "@cite_29", "@cite_1", "@cite_21", "@cite_3", "@cite_57", "@cite_43", "@cite_0", "@cite_49", "@cite_15", "@cite_16" ], "mid": [ "2753461941", "2891590469", "2891631795", "2752246523", "2507987841", "2750760359", "1745334888", "2963123114", "2751297520", "2608822622", "2133287637", "2889905929", "2604920239", "2913161686" ], "abstract": [ "Robust image registration in medical imaging is essential for comparison or fusion of images, acquired from various perspectives, modalities or at different times. Typically, an objective function needs to be minimized assuming specific a priori deformation models and predefined or learned similarity measures. However, these approaches have difficulties to cope with large deformations or a large variability in appearance. Using modern deep learning (DL) methods with automated feature design, these limitations could be resolved by learning the intrinsic mapping solely from experience. We investigate in this paper how DL could help organ-specific (ROI-specific) deformable registration, to solve motion compensation or atlas-based segmentation problems for instance in prostate diagnosis. An artificial agent is trained to solve the task of non-rigid registration by exploring the parametric space of a statistical deformation model built from training data. Since it is difficult to extract trustworthy ground-truth deformation fields, we present a training scheme with a large number of synthetically deformed image pairs requiring only a small number of real inter-subject pairs. Our approach was tested on inter-subject registration of prostate MR data and reached a median DICE score of .88 in 2-D and .76 in 3-D, therefore showing improved results compared to state-of-the-art registration algorithms.", "Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far training of ConvNets for registration was supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby to increase convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework ConvNets are trained for image registration by exploiting image similarity analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNets designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show for registration of cardiac cine MRI and registration of chest CT that performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.", "We present VoxelMorph, a fast learning-based framework for deformable, pairwise medical image registration. Traditional registration methods optimize an objective function for each pair of images, which can be time-consuming for large datasets or rich deformation models. 
In contrast to this approach and building on recent learning-based methods, we formulate registration as a function that maps an input image pair to a deformation field that aligns these images. We parameterize the function via a convolutional neural network and optimize the parameters of the neural network on a set of images. Given a new pair of scans, VoxelMorph rapidly computes a deformation field by directly evaluating the function. In this paper, we explore two different training strategies. In the first (unsupervised) setting, we train the model to maximize standard image matching objective functions that are based on the image intensities. In the second setting, we leverage auxiliary segmentations available in the training data. We demonstrate that the unsupervised model’s accuracy is comparable to the state-of-the-art methods while operating orders of magnitude faster. We also show that VoxelMorph trained with auxiliary data improves registration accuracy at test time and evaluate the effect of training set size on registration. Our method promises to speed up medical image analysis and processing pipelines while facilitating novel directions in learning-based registration and its applications. Our code is freely available at https: github.com voxelmorph voxelmorph .", "Existing deformable registration methods require exhaustively iterative optimization, along with careful parameter tuning, to estimate the deformation field between images. Although some learning-based methods have been proposed for initiating deformation estimation, they are often template-specific and not flexible in practical use. In this paper, we propose a convolutional neural network (CNN) based regression model to directly learn the complex mapping from the input image pair (i.e., a pair of template and subject) to their corresponding deformation field. Specifically, our CNN architecture is designed in a patch-based manner to learn the complex mapping from the input patch pairs to their respective deformation field. First, the equalized active-points guided sampling strategy is introduced to facilitate accurate CNN model learning upon a limited image dataset. Then, the similarity-steered CNN architecture is designed, where we propose to add the auxiliary contextual cue, i.e., the similarity between input patches, to more directly guide the learning process. Experiments on different brain image datasets demonstrate promising registration performance based on our CNN model. Furthermore, it is found that the trained CNN model from one dataset can be successfully transferred to another dataset, although brain appearances across datasets are quite variable.", "In this paper we present SimpleElastix, an extension of SimpleITK designed to bring the Elastix medical image registration library to a wider audience. Elastix is a modular collection of robust C++ image registration algorithms that is widely used in the literature. However, its command-line interface introduces overhead during prototyping, experimental setup, and tuning of registration algorithms. By integrating Elastix with SimpleITK, Elastix can be used as a native library in Python, Java, R, Octave, Ruby, Lua, Tcl and C# on Linux, Mac and Windows. This allows Elastix to intregrate naturally with many development environments so the user can focus more on the registration problem and less on the underlying C++ implementation. As means of demonstration, we show how to register MR images of brains and natural pictures of faces using minimal amount of code. 
SimpleElastix is open source, licensed under the permissive Apache License Version 2.0 and available at https: github.com kaspermarstal SimpleElastix.", "We propose a novel non-rigid image registration algorithm that is built upon fully convolutional networks (FCNs) to optimize and learn spatial transformations between pairs of images to be registered. Different from most existing deep learning based image registration methods that learn spatial transformations from training data with known corresponding spatial transformations, our method directly estimates spatial transformations between pairs of images by maximizing an image-wise similarity metric between fixed and deformed moving images, similar to conventional image registration algorithms. At the same time, our method also learns FCNs for encoding the spatial transformations at the same spatial resolution of images to be registered, rather than learning coarse-grained spatial transformation information. The image registration is implemented in a multi-resolution image registration framework to jointly optimize and learn spatial transformations and FCNs at different resolutions with deep self-supervision through typical feedforward and backpropagation computation. Since our method simultaneously optimizes and learns spatial transformations for the image registration, our method can be directly used to register a pair of images, and the registration of a set of images is also a training procedure for FCNs so that the trained FCNs can be directly adopted to register new images by feedforward computation of the learned FCNs without any optimization. The proposed method has been evaluated for registering 3D structural brain magnetic resonance (MR) images and obtained better performance than state-of-the-art image registration algorithms.", "We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained without using Microsoft COCO dataset through ensemble with the fully convolutional network.", "We present a fast learning-based algorithm for deformable, pairwise 3D medical image registration. Current registration methods optimize an objective function independently for each pair of images, which can be time-consuming for large data. We define registration as a parametric function, and optimize its parameters given a set of images from a collection of interest. Given a new pair of scans, we can quickly compute a registration field by directly evaluating the function using the learned parameters. We model this function using a CNN, and use a spatial transform layer to reconstruct one image from another while imposing smoothness constraints on the registration field. 
The proposed method does not require supervised information such as ground truth registration fields or anatomical landmarks. We demonstrate registration accuracy comparable to state-of-the-art 3D image registration, while operating orders of magnitude faster in practice. Our method promises to significantly speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is available at https: github.com balakg voxelmorph.", "In this paper we propose a method to solve nonrigid image registration through a learning approach, instead of via iterative optimization of a predefined dissimilarity metric. We design a Convolutional Neural Network (CNN) architecture that, in contrast to all other work, directly estimates the displacement vector field (DVF) from a pair of input images. The proposed RegNet is trained using a large set of artificially generated DVFs, does not explicitly define a dissimilarity metric, and integrates image content at multiple scales to equip the network with contextual information. At testing time nonrigid registration is performed in a single shot, in contrast to current iterative methods. We tested RegNet on 3D chest CT follow-up data. The results show that the accuracy of RegNet is on par with a conventional B-spline registration, for anatomy within the capture range. Training RegNet with artificially generated DVFs is therefore a promising approach for obtaining good results on real clinical data, thereby greatly simplifying the training problem. Deformable image registration can therefore be successfully casted as a learning problem.", "In this work we propose a deep learning network for deformable image registration (DIRNet). The DIRNet consists of a convolutional neural network (ConvNet) regressor, a spatial transformer, and a resampler. The ConvNet analyzes a pair of fixed and moving images and outputs parameters for the spatial transformer, which generates the displacement vector field that enables the resampler to warp the moving image to the fixed image. The DIRNet is trained end-to-end by unsupervised optimization of a similarity metric between input image pairs. A trained DIRNet can be applied to perform registration on unseen image pairs in one pass, thus non-iteratively. Evaluation was performed with registration of images of handwritten digits (MNIST) and cardiac cine MR scans (Sunnybrook Cardiac Data). The results demonstrate that registration with DIRNet is as accurate as a conventional deformable image registration method with short execution times.", "Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance and computed tomography), different time points (e.g., follow-up scans), and or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. 
The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.", "Deformable image registration (DIR) in thoracic 4D CT image data is integral for, e.g., radiotherapy treatment planning, but time consuming. Deep learning (DL)-based DIR promises speed-up, but present solutions are limited to small image sizes. In this paper, we propose a General Deep Learning-based Fast Image Registration framework suitable for application to clinical 4D CT data (GDL-FIRE (^ 4D )). Open source DIR frameworks are selected to build GDL-FIRE (^ 4D ) variants. In-house-acquired 4D CT images serve as training and open 4D CT data repositories as external evaluation cohorts. Taking up current attempts to DIR uncertainty estimation, dropout-based uncertainty maps for GDL-FIRE (^ 4D ) variants are analyzed. We show that (1) registration accuracy of GDL-FIRE (^ 4D ) and standard DIR are in the same order; (2) computation time is reduced to a few seconds (here: 60-fold speed-up); and (3) dropout-based uncertainty maps do not correlate to across-DIR vector field differences, raising doubts about applicability in the given context.", "Abstract This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni- multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software.", "3D medical image registration is of great clinical importance. However, supervised learning methods require a large amount of accurately annotated corresponding control points (or morphing). The ground truth for 3D medical images is very difficult to obtain. Unsupervised learning methods ease the burden of manual annotation by exploiting unlabeled data without supervision. In this paper, we propose a new unsupervised learning method using convolutional neural networks under an end-to-end framework, Volume Tweening Network (VTN), to register 3D medical images. 
Three technical components ameliorate our unsupervised learning system for 3D end-to-end medical image registration: (1) We cascade the registration subnetworks; (2) We integrate affine registration into our network; and (3) We incorporate an additional invertibility loss into the training process. Experimental results demonstrate that our algorithm is 880x faster (or 3.3x faster without GPU acceleration) than traditional optimization-based methods and achieves state-of-the-art performance in medical image registration." ] }
1907.12212
2965633139
This paper is concerned with voting processes on graphs where each vertex holds one of two different opinions. In particular, we study the Best-of-two and the Best-of-three. Here, at each synchronous, discrete time step, each vertex updates its opinion to match the majority among the opinions of two random neighbors and itself (the Best-of-two) or the opinions of three random neighbors (the Best-of-three). Previous studies have explored these processes on complete graphs and expander graphs, but we understand significantly less about their properties on graphs with more complicated structures. In this paper, we study the Best-of-two and the Best-of-three on the stochastic block model @math , which is a random graph consisting of two distinct Erdős–Rényi graphs @math joined by random edges with density @math . We obtain two main results. First, if @math and @math is a constant, we show that there is a phase transition in @math with threshold @math (specifically, @math for the Best-of-two, and @math for the Best-of-three). If @math , the process reaches consensus within @math steps for any initial opinion configuration with a bias of @math . By contrast, if @math , we show that, for any initial opinion configuration, the process reaches consensus within @math steps. To the best of our knowledge, this is the first result concerning multiple-choice voting for arbitrary initial opinion configurations on non-complete graphs.
Other studies have focused on voting processes with more general updating rules. Cooper and Rivera @cite_5 studied the linear voting model, whose updating rule is characterized by a set of @math binary matrices. This model covers the synchronous pull and the asynchronous push-pull voting processes. However, it does not cover the Best-of-two and the Best-of-three. Schoenebeck and Yu @cite_0 studied asynchronous voting processes whose updating functions are majority-like (including the asynchronous Best-of- @math voting processes). They gave upper bounds on the consensus times of such models on dense Erdős–Rényi random graphs using a potential technique.
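For readers unfamiliar with these dynamics, the following sketch simulates the synchronous Best-of-three rule on an arbitrary graph given as a dictionary of neighbor lists; opinions are encoded as 0/1 and neighbors are sampled with replacement. It is a plain simulation for intuition only and is unrelated to the proofs in the cited works (the Best-of-two would instead sample two neighbors and fall back to the vertex's own opinion on a tie).

```python
import random

def best_of_three_step(adj, opinions):
    """One synchronous round: every vertex adopts the majority of three random neighbors' opinions."""
    new_opinions = {}
    for v, neighbors in adj.items():
        samples = [opinions[random.choice(neighbors)] for _ in range(3)]  # sampled with replacement
        new_opinions[v] = 1 if sum(samples) >= 2 else 0                   # opinions are 0/1
    return new_opinions

def run_until_consensus(adj, opinions, max_rounds=10_000):
    """Iterate Best-of-three until all vertices agree (or the round budget runs out)."""
    for t in range(max_rounds):
        if len(set(opinions.values())) == 1:
            return t, opinions
        opinions = best_of_three_step(adj, opinions)
    return max_rounds, opinions
```

Running this on two dense clusters joined by a few random edges (a hand-built stochastic-block-model instance) gives a quick empirical feel for the phase transition discussed in the paper.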
{ "cite_N": [ "@cite_0", "@cite_5" ], "mid": [ "2589531210", "2533034710" ], "abstract": [ "We study consensus processes on the complete graph of n nodes. Initially, each node supports one up to n different opinions. Nodes randomly and in parallel sample the opinions of constantly many nodes. Based on these samples, they use an update rule to change their own opinion. The goal is to reach consensus, a configuration where all nodes support the same opinion. We compare two well-known update rules: 2-Choices and 3-Majority. In the former, each node samples two nodes and adopts their opinion if they agree. In the latter, each node samples three nodes: If an opinion is supported by at least two samples the node adopts it, otherwise it randomly adopts one of the sampled opinions. Known results for these update rules focus on initial configurations with a limited number of colors (say n1 3), or typically assume a bias, where one opinion has a much larger support than any other. For such biased configurations, the time to reach consensus is roughly the same for 2-Choices and 3-Majority. Interestingly, we prove that this is no longer true for configurations with a large number of initial colors. In particular, we show that 3-Majority reaches consensus with high probability in O(n3 4 · log7 8 n) rounds, while 2-Choices can need Ω(n log n) rounds. We thus get the first unconditional sublinear bound for 3-Majority and the first result separating the consensus time of these processes. Along the way, we develop a framework that allows a fine-grained comparison between consensus processes from a specific class. We believe that this framework might help to classify the performance of more consensus processes.", "We study voting models on graphs. In the beginning, the vertices of a given graph have some initial opinion. Over time, the opinions on the vertices change by interactions between graph neighbours. Under suitable conditions the system evolves to a state in which all vertices have the same opinion. In this work, we consider a new model of voting, called the Linear Voting Model. This model can be seen as a generalization of several models of voting, including among others, pull voting and push voting. One advantage of our model is that, even though it is very general, it has a rich structure making the analysis tractable. In particular we are able to solve the basic question about voting, the probability that certain opinion wins the poll, and furthermore, given appropriate conditions, we are able to bound the expected time until some opinion wins." ] }
1907.12079
2964825438
Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword-matching search is generally neither effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents being included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results.
Various information visualization techniques have been applied to improve user interfaces for search. Some systems augment search result lists with additional small visualizations. For example, TileBars @cite_26 , INSYDER @cite_5 , and HotMap @cite_25 visualize query-document relationships as icons or glyphs alongside search results. Another approach is to visualize search results in a spatial layout where proximity represents similarity; InfoSky @cite_20 and IN-SPIRE @cite_19 are examples of such systems. FacetAtlas @cite_3 overlays additional heatmaps to visualize density. ProjSnippet @cite_34 visualizes text snippets in a 2-D layout. Many others cluster the search results and offer faceted navigation. FacetMap @cite_4 and ResultMap @cite_27 utilize treemap-style visualizations to represent facets. These systems may guide users well in exploring search results, but they are mostly based on static search queries. Our system goes beyond search-result exploration and offers interactive target (query) building.
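The "proximity encodes similarity" layouts used by systems like InfoSky and IN-SPIRE can be approximated with off-the-shelf tools; the hypothetical snippet below embeds search results in 2-D by running metric MDS on pairwise cosine distances between tf-idf vectors (scikit-learn assumed). Real systems use their own projection and clustering pipelines, so this is only a rough analogue.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from sklearn.manifold import MDS

def layout_results(result_docs, random_state=0):
    """Place search results in 2-D so that nearby points correspond to textually similar documents."""
    X = TfidfVectorizer(stop_words="english").fit_transform(result_docs)
    D = cosine_distances(X)                          # pairwise dissimilarities between results
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=random_state)
    return mds.fit_transform(D)                      # (n_results, 2) coordinates for plotting
```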
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_3", "@cite_19", "@cite_27", "@cite_5", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "2158231614", "2170061697", "2130154693", "2140501570", "2024351142", "2138539908", "", "2116336519", "2106841726" ], "abstract": [ "The field of information retrieval has traditionally focused on textbases consisting of titles and abstracts. As a consequence, many underlying assumptions must be altered for retrieval from full-length text collections. This paper argues for making use of text structure when retrieving from full text documents, and presents a visualization paradigm, called TileBars, that demonstrates the usefulness of explicit term distribution information in Boolean-type queries. TileBars simultaneously and compactly indicate relative document length, query term frequency, and query term distribution. The patterns in a column of TileBars can be quickly scanned and deciphered, aiding users in making judgments about the potential relevance of the retrieved documents.", "The dominant paradigm for searching and browsing large data stores is text-based: presenting a scrollable list of search results in response to textual search term input. While this works well for the Web, there is opportunity for improvement in the domain of personal information stores, which tend to have more heterogeneous data and richer metadata. In this paper, we introduce FacetMap, an interactive, query-driven visualization, generalizable to a wide range of metadata-rich data stores. FacetMap uses a visual metaphor for both input (selection of metadata facets as filters) and output. Results of a user study provide insight into tradeoffs between FacetMap's graphical approach and the traditional text-oriented approach", "Documents in rich text corpora usually contain multiple facets of information. For example, an article about a specific disease often consists of different facets such as symptom, treatment, cause, diagnosis, prognosis, and prevention. Thus, documents may have different relations based on different facets. Powerful search tools have been developed to help users locate lists of individual documents that are most related to specific keywords. However, there is a lack of effective analysis tools that reveal the multifaceted relations of documents within or cross the document clusters. In this paper, we present FacetAtlas, a multifaceted visualization technique for visually analyzing rich text corpora. FacetAtlas combines search technology with advanced visual analytical tools to convey both global and local patterns simultaneously. We describe several unique aspects of FacetAtlas, including (1) node cliques and multifaceted edges, (2) an optimized density map, and (3) automated opacity pattern enhancement for highlighting visual patterns, (4) interactive context switch between facets. In addition, we demonstrate the power of FacetAtlas through a case study that targets patient education in the health care domain. Our evaluation shows the benefits of this work, especially in support of complex multifaceted data analysis.", "Professional analysts deal with a high volume of information and must constantly work to separate out the valuable data. However, analysts have difficulty determining what data is useful without reading or skimming almost all returned documents from a search. This presents them with a difficult tradeoff. Searching information broadly returns hundreds or thousands of documents. 
We present lessons learned from an observational study of the application of the InSpire visually oriented text exploitation system in an operational analysis environment.", "Hierarchical representations are common in digital repositories, yet are not always fully leveraged in their online search interfaces. This work describes ResultMaps, which use hierarchical treemap representations with query string-driven digital library search engines. We describe two lab experiments, which find that ResultsMap users yield significantly better results over a control condition on some subjective measures, and we find evidence that ResultMaps have ancillary benefits via increased understanding of some aspects of repository content. The ResultMap system and experiments contribute an understanding of the benefits-direct and indirect-of the ResultMap approach to repository search visualization.", "This paper presents INSYDER, a content-based visual-information-seeking system for the Web. The Web can be seen as one huge digital library offering a variety of very useful information for business analysts. INSYDER addresses these possibilities and offers powerful retrieval and visualisation functionalities. The main focus during the development was on the usability of the system. Therefore, a variety of well-established visualisation components were employed to support the user during the information-seeking process (e.g. visual query, result table, bar graph, segment view with tile bars, and scatterplot). Also, the retrieval aspects were developed with the goal of increasing the usability of the system (e.g. natural language search, content-based classification, relevance feedback). Extensive evaluations of the retrieval performance and the usability of the visualisation were conducted. The results of these evaluations offered many helpful insights into developing a new visual-information-seeking system called VisMeB.", "", "Users of traditional web search engines commonly find it difficult to evaluate the results of their web searches. We suggest the use of information visualization and interactive visual manipulation as methods for improving the ability of users to evaluate the results of a web search. In this paper, we present the results of a user study that compared the search results interface provided by Google to that of two systems we have developed: HotMap and Concept Highlighter. We found that users were able to perform their searches faster with HotMap, were able to find more relevant documents with Concept Highlighter, and generally ranked these interfaces higher than Google with respect to subjective measures. When given a choice between these interfaces, participants ranked HotMap the highest, followed by Google and Concept Highlighter. These results indicate that even though the list-based representation of search results are common among search engines, visual and interactive interfaces to web search results can be more efficient, effective, and satisfying to the users.", "InfoSky is a system enabling users to explore large, hierarchically structured document collections. Similar to a real-world telescope, InfoSky employs a planar graphical representation with variable magnification. Documents of similar content are placed close to each other and are visualised as stars, forming clusters with distinct shapes. 
For greater performance, the hierarchical structure is exploited and force-directed placement is applied recursively at each level on much fewer objects, rather than on the whole corpus. Collections of documents at a particular level in the hierarchy are visualised with bounding polygons using a modified weighted Voronoi diagram. Their area is related to the number of documents contained. Textual labels are displayed dynamically during navigation, adjusting to the visualisation content. Navigation is animated and provides a seamless zooming transition between summary and detail view. Users can map metadata such as document size or age to attributes of the visualisation such as colour and luminance. Queries can be made and matching documents or collections are highlighted. Formative usability testing is ongoing; a small baseline experiment comparing the telescope browser to a tree browser is discussed." ] }
1907.12079
2964825438
Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword-matching search is generally neither effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents being included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results.
Although topic summarization has been studied for a long time, discovering topic summaries of specific aspects (or targets) is a relatively new research problem. TTM @cite_12 is the first work to propose the term 'targeted topic modeling'. This work proposes a probabilistic model that is a variation of latent Dirichlet allocation (LDA) @cite_10 . Given a static keyword list defining a particular aspect, the model identifies topic keywords related to this aspect. @cite_23 identifies a list of target words from review data and disentangles aspect words and opinion words from the list. APSUM @cite_24 assigns aspects to each word in a generative process. Since the aforementioned models generate topic keywords based on a static keyword list, a dynamic model is desired. An automatic method to generate keywords dynamically has been proposed @cite_41 . This method focuses on the online environment of Twitter and automatically generates keywords based on a time-evolving word graph.
{ "cite_N": [ "@cite_41", "@cite_24", "@cite_23", "@cite_10", "@cite_12" ], "mid": [ "", "2788837479", "2788281060", "1880262756", "2352369035" ], "abstract": [ "", "Online reviews have become an inevitable part of a consumer’s decision making process, where the likelihood of purchase not only depends on the product’s overall rating, but also on the description of it’s aspects. Therefore, websites such as Amazon, Walmart, and Netflix constantly encourage users to write good quality reviews and categorically summa- rize different facets of the product. However, despite such efforts, it takes a significant effort to skim through thousands of reviews and look for answers that addresses the query of consumers. For example, a gamer might be interested in buying a monitor with fast refresh rates, support for Gsync and Freesync technologies etc., while a photographer might be interested in aspects such as color reproduction and Delta-e scores. Therefore, in this paper, we propose a generative aspect summarization model called APSUM that is capable of providing fine-grained summaries of on- line reviews. To overcome the inherent problem of aspect sparsity, we jointly constraint both the document-topic and the word-topic distribution by introducing a semi-supervised variation of the spike-and-slab prior. Using rigorous set of experiments, we show that the proposed model is capable of outperforming other state-of-the-art aspect-topic models over a variety of datasets and deliver intuitive fine-grained summaries that could simplify the purchase decisions of customers.", "Given a target name, which can be a product aspect or entity, identifying its aspect words and opinion words in a given corpus is a fine-grained task in target-based sentiment analysis (TSA). This task is challenging, especially when we have no labeled data and we want to perform it for any given domain. To address it, we propose a general two-stage approach. Stage one extracts groups the target-related words (call t-words) for a given target. This is relatively easy as we can apply an existing semantics-based learning technique. Stage two separates the aspect and opinion words from the grouped t-words, which is challenging because we often do not have enough word-level aspect and opinion labels. In this work, we formulate this problem in a PU learning setting and incorporate the idea of lifelong learning to solve it. Experimental results show the effectiveness of our approach.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "One of the overarching tasks of document analysis is to find what topics people talk about. One of the main techniques for this purpose is topic modeling. So far many models have been proposed. 
However, the existing models typically perform full analysis on the whole data to find all topics. This is certainly useful, but in practice we found that the user almost always also wants to perform more detailed analyses on some specific aspects, which we refer to as targets (or targeted aspects). Current full-analysis models are not suitable for such analyses as their generated topics are often too coarse and may not even be on target. For example, given a set of tweets about e-cigarette, one may want to find out what topics under discussion are specifically related to children. Likewise, given a collection of online reviews about a camera, a consumer or camera manufacturer may be interested in finding out all topics about the camera's screen, the targeted aspect. As we will see in our experiments, current full topic models are ineffective for such targeted analyses. This paper studies this problem and proposes a novel targeted topic model (TTM) to enable focused analyses on any specific aspect of interest. Our experimental results demonstrate the effectiveness of the TTM." ] }
1907.12079
2964825438
Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword-matching search is generally neither effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents being included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results.
Interactive topic models allow users to steer the topics to improve the topic modeling results. Various topic steering interactions such as adding, editing, deleting, splitting, and merging topics have been introduced @cite_32 @cite_18 @cite_16 @cite_42 @cite_11 @cite_28 @cite_37 @cite_33 @cite_15 . These interactions can be applied to refine relevant topics and remove irrelevant topics to identify targeted topics when most of the data items are relevant and only a small portion is irrelevant. However, in our large-scale search space reduction setting, a more tailored approach is needed. In this paper, we propose interactive targeted topic modeling to steer the topics to discover the target-relevant topics and documents.
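As a rough illustration of NMF-based targeted topic modeling of the kind TopicSifter builds on, the sketch below fits plain NMF topics to a tf-idf matrix and then ranks topics and documents by their overlap with user-supplied target keywords (scikit-learn assumed). The function name and the keyword-overlap scoring are hypothetical simplifications; the actual system folds relevance feedback back into the factorization rather than applying a one-shot filter.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def targeted_topics(docs, seed_keywords, n_topics=10, top_k=8):
    """Fit plain NMF topics, then rank topics and documents by overlap with target keywords."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)                       # documents x terms (tf-idf)
    model = NMF(n_components=n_topics, init="nndsvd", random_state=0)
    W = model.fit_transform(X)                               # documents x topics
    H = model.components_                                    # topics x terms
    vocab = np.array(vectorizer.get_feature_names_out())

    seed_set = set(seed_keywords)
    seed_idx = [i for i, term in enumerate(vocab) if term in seed_set]
    topic_relevance = H[:, seed_idx].sum(axis=1) if seed_idx else np.zeros(n_topics)
    best_topic = int(topic_relevance.argmax())               # topic most aligned with the target

    top_terms = vocab[np.argsort(H[best_topic])[::-1][:top_k]]
    doc_ranking = np.argsort(W[:, best_topic])[::-1]         # documents most associated with that topic
    return top_terms, doc_ranking
```

In an interactive loop, documents the user marks as irrelevant would be down-weighted or removed and the seed keywords updated before refitting, which is where the steering interactions discussed above come into play.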
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_33", "@cite_28", "@cite_42", "@cite_32", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "1991676464", "2794125445", "2157821464", "2560971156", "2074930186", "1993503217", "2793816856", "2087382273", "2512177572" ], "abstract": [ "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widely-used topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways. Keywords can be adjusted so that they characterize each cluster better. In addition, our system can filter out noisy data and re-cluster the data accordingly. Cluster hierarchy can be constructed using a tree structure and for this purpose, the system supports cluster-level interactions such as sub-clustering, removing unimportant clusters, merging the clusters that have similar meanings, and moving certain clusters to any other node in the tree structure. Furthermore, the system provides document-level interactions such as moving mis-clustered documents to another cluster and removing useless documents. Finally, we present how interactive clustering is performed via iVisClustering by using real-world document data sets. © 2012 Wiley Periodicals, Inc.", "Human-in-the-loop topic modeling allows users to guide the creation of topic models and to improve model quality without having to be experts in topic modeling algorithms. Prior work in this area has focused either on algorithmic implementation without understanding how users actually wish to improve the model or on user needs but without the context of a fully interactive system. To address this disconnect, we implemented a set of model refinements requested by users in prior work and conducted a study with twelve non-expert participants to examine how end users are affected by issues that arise with a fully interactive, user-centered system. As these issues mirror those identified in interactive machine learning more broadly, such as unpredictability, latency, and trust, we also examined interactive machine learning challenges with non-expert end users through the lens of human-in-the-loop topic modeling. We found that although users experience unpredictability, their reactions vary from positive to negative, and, surprisingly, we did not find any cases of distrust, but instead noted instances where users perhaps trusted the system too much or had too little confidence in themselves.", "In the last decade, there has been an exponential growth of asynchronous online conversations thanks to the rise of social media. 
Analyzing and gaining insights from such conversations can be quite challenging for a user, especially when the discussion becomes very long. A promising solution to this problem is topic modeling, since it may help the user to quickly understand what was discussed in the long conversation and explore the comments of interest. However, the results of topic modeling can be noisy and may not match the user's current information needs. To address this problem, we propose a novel topic modeling system for asynchronous conversations that revises the model on the fly based on user's feedback. We then integrate this system with interactive visualization techniques to support the user in exploring long conversations, as well as revising the topic model when the current results are not adequate to fulfill her information needs. An evaluation with real users illustrates the potential benefits of our approach for exploring conversations, when compared to both a traditional interface as well as an interactive visual interface that does not support human-in-the-loop topic model.", "Abstract Visualizing high-dimensional labeled data on a two-dimensional plane can quickly result in visual clutter and information overload. To address this problem, the data usually needs to be structured, so that only parts of it are displayed at a time. We present a hierarchy-based approach that projects labeled data on different levels of detail on a two-dimensional plane, whilst keeping the user׳s cognitive load between the level changes as low as possible. The approach consists of three steps: First, the data is hierarchically clustered; second, the user can determine levels of detail; third, the levels of detail are visualized one at a time on a two-dimensional plane. Animations make transitions between the levels of detail traceable, while the exploration on each level is supported by several interaction techniques, including halos, a darts view, and a magic lens. We demonstrate the applicability and usefulness of the approach with use cases from the patent domain and a question-and-answer website. In addition, we conducted a qualitative evaluation to assess the usefulness and comprehensibility of our approach.", "Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. 
The results show that users are able to successfully analyze evolving topics in text data.", "Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.", "In the last decade, there has been an exponential growth of asynchronous online conversations (e.g. blogs), thanks to the rise of social media. Analyzing and gaining insights from such discussions ...", "Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis VAST paper data set and product review data sets.", "Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. 
Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets." ] }
1907.12047
2956646654
Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance.
Our work relates to several areas of research in student modeling. Several approaches within the educational data mining community have used computational methods for sequencing students' learning items. Pardos and Heffernan @cite_39 infer an order over questions by predicting students' skill levels over action pairs using Bayesian knowledge tracing. They show the efficacy of this approach on a test set comprising random sequences of three questions as well as on simulated data. This approach explicitly considers each possible ordering and therefore does not scale to the large number of sequences that arise in the student ranking problem we consider in this paper.
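A much simplified, frequency-based sketch of checking item-order effects is shown below; it is not the Bayesian permutation analysis of the cited work, it ignores baseline item difficulty, and the `logs` format (a hypothetical list of per-student attempt sequences) is an assumption:

```python
from collections import defaultdict
from itertools import combinations

def order_effects(logs):
    """logs: list of per-student sequences [(question_id, correct), ...] in attempt order."""
    # stats[(a, b)] = [correct answers on b when a was seen earlier,
    #                  attempts on b when a was seen earlier]
    stats = defaultdict(lambda: [0, 0])
    for seq in logs:
        seen = []
        for q, correct in seq:
            for prev in seen:
                if prev != q:
                    stats[(prev, q)][0] += int(correct)
                    stats[(prev, q)][1] += 1
            seen.append(q)
    questions = {q for seq in logs for q, _ in seq}
    effects = {}
    for a, b in combinations(questions, 2):
        ab, ba = stats[(a, b)], stats[(b, a)]
        if ab[1] and ba[1]:
            # Success rate on the *later* item under each ordering; a positive value
            # crudely suggests that presenting a before b works better than the reverse.
            effects[(a, b)] = ab[0] / ab[1] - ba[0] / ba[1]
    return effects
```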
{ "cite_N": [ "@cite_39" ], "mid": [ "2129373702" ], "abstract": [ "Researchers who make tutoring systems would like to know which sequences of educational content lead to the most effective learning by their students. The majority of data collected in many ITS systems consist of answers to a group of questions of a given skill often presented in a random sequence. Following work that identifies which items produce the most learning we propose a Bayesian method using similar permutation analysis techniques to determine if item learning is context sensitive and if so which orderings of questions produce the most learning. We confine our analysis to random sequences with three questions. The method identifies question ordering rules such as, question A should go before B, which are statistically reliably beneficial to learning. Real tutor data from five random sequence problem sets were analyzed. Statistically reliable orderings of questions were found in two of the five real data problem sets. A simulation consisting of 140 experiments was run to validate the method's accuracy and test its reliability. The method succeeded in finding 43 of the underlying item order effects with a 6 false positive rate using a p value threshold of <= 0.05. Using this method, ITS researchers can gain valuable knowledge about their problem sets and feasibly let the ITS automatically identify item order effects and optimize student learning by restricting assigned sequences to those prescribed as most beneficial to learning." ] }
1907.12047
2956646654
Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance.
Multiple researchers have used Bayesian knowledge tracing as a way to infer students' skill acquisition (i.e., mastery level) over time given their performance levels on different question sequences @cite_25 . These researchers reason about students' prior knowledge of skills and also account for slips and guessing on test problems. The models are trained on large data sets from multiple students using machine learning algorithms that account for latent variables @cite_18 @cite_17 . We solve a different problem --- using other students' performance to personalize ranking over test-questions. In addition, these methods measure students' performance dichotomously (i.e., success or failure) whereas we reason about additional features such as students' grade and number of attempts to solve the question. We intend to infer students' skill levels to improve the ranking prediction in future work.
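For reference, the standard Bayesian knowledge tracing posterior update over the four parameters mentioned above (prior mastery, learn, slip, and guess) can be written in a few lines; the parameter values used here are illustrative only:

```python
# Minimal Bayesian knowledge tracing update (textbook form; parameter values are made up).
def bkt_update(p_know, correct, p_learn=0.2, p_slip=0.1, p_guess=0.2):
    """Posterior probability of mastery after observing one response."""
    if correct:
        num = p_know * (1 - p_slip)
        den = p_know * (1 - p_slip) + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = p_know * p_slip + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Account for the chance of learning the skill on this practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3                         # hypothetical prior mastery
for obs in [1, 1, 0, 1]:        # hypothetical response sequence (1 = correct)
    p = bkt_update(p, obs)
```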
{ "cite_N": [ "@cite_18", "@cite_25", "@cite_17" ], "mid": [ "1597703949", "2015040676", "2404502960" ], "abstract": [ "Modeling students' knowledge is a fundamental part of intelligent tutoring systems. One of the most popular methods for estimating students' knowledge is Corbett and Anderson's [6] Bayesian Knowledge Tracing model. The model uses four parameters per skill, fit using student performance data, to relate performance to learning. Beck [1] showed that existing methods for determining these parameters are prone to the Identifiability Problem:the same performance data can be fit equally well by different parameters, with different implications on system behavior. Beck offered a solution based on Dirichlet Priors [1], but, we show this solution is vulnerable to a different problem, Model Degeneracy, where parameter values violate the model's conceptual meaning (such as a student being more likely to get a correct answer if he she does not know a skill than if he she does).We offer a new method for instantiating Bayesian Knowledge Tracing, using machine learning to make contextual estimations of the probability that a student has guessed or slipped. This method is no more prone to problems with Identifiability than Beck's solution, has less Model Degeneracy than competing approaches, and fits student performance data better than prior methods. Thus, it allows for more accurate and reliable student modeling in ITSs that use knowledge tracing.", "This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called theideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process calledknowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has ‘mastered’ each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels.", "Knowledge Tracing (BKT) is a common way of determining student knowledge of skills in adaptive educational systems and cognitive tutors. The basic BKT is a Hidden Markov Model (HMM) that models student knowledge based on five parameters: prior, learn rate, forget, guess, and slip. Expectation Maximization (EM) is often used to learn these parameters from training data. However, EM is a time-consuming process, and is prone to converging to erroneous, implausible local optima depending on the initial values of the BKT parameters. In this paper we address these two problems by using spectral learning to learn a Predictive State Representation (PSR) that represents the BKT HMM. We then use a heuristic to extract the BKT parameters from the learned PSR using basic matrix operations. 
The spectral learning method is based on an approximate factorization of the estimated covariance of windows from students' sequences of correct and incorrect responses; it is fast, local-optimum-free, and statistically consistent. In the past few years, spectral techniques have been used on real-world problems involving latent variables in dynamical systems, computer vision, and natural language processing. Our results suggest that the parameters learned by the spectral algorithm can replace the parameters learned by EM; the results of our study show that the spectral algorithm can improve knowledge tracing parameter- fitting time significantly while maintaining the same prediction accuracy, or help to improve accuracy while still keeping parameter-fitting time equivalent to EM." ] }
1907.12047
2956646654
Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance.
Approaches based on recommendation systems are increasingly being used in e-learning to predict students' scores and to personalize educational content. We mention a few examples below and refer the reader to the surveys by @cite_30 and @cite_37 for more details. Collaborative filtering (CF) has previously been used in the educational domain for predicting students' performance. Toscher and Jahrer @cite_26 use an ensemble of CF algorithms to predict performance for items in the KDD 2010 educational challenge. Berger et al. @cite_3 use a model-based approach for predicting accuracy levels of students' performance and skill levels on real and simulated data sets. They also formalize a relationship between CF and Item Response Theory methods and demonstrate this relationship empirically. @cite_28 use matrix factorization for task sequencing in a large commercial Intelligent Tutoring System, showing improved adaptivity compared to a baseline sequencer. Finally, Loll and Pinkwart @cite_1 use CF as a diagnostic tool for knowledge test questions as well as for more exploratory ill-defined tasks. None of these approaches rank questions according to their personal difficulty level for specific students.
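The sketch below illustrates the general neighborhood-based collaborative filtering pattern, weighting the difficulty signals of similar students to produce a per-student difficulty ranking; it is a simplification and not the voting-based aggregation used by EduRank, and the `grades` structure (a mapping from students to their question grades) is a hypothetical input:

```python
import math

def cosine_on_common(a, b):
    """Cosine similarity of two students' grade dicts over their common questions."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[q] * b[q] for q in common)
    den = (math.sqrt(sum(a[q] ** 2 for q in common))
           * math.sqrt(sum(b[q] ** 2 for q in common)))
    return num / den if den else 0.0

def difficulty_ranking(target, grades, candidates):
    """grades: {student: {question: grade in [0, 1]}}; returns candidates, hardest first."""
    sims = {s: cosine_on_common(grades[target], g)
            for s, g in grades.items() if s != target}
    scores = {}
    for q in candidates:
        # Each similar student "votes" with 1 - grade as a crude difficulty signal.
        votes = [(sims[s], 1.0 - grades[s][q]) for s in sims if q in grades[s]]
        total = sum(w for w, _ in votes)
        if total > 0:
            scores[q] = sum(w * d for w, d in votes) / total
    return sorted(scores, key=scores.get, reverse=True)
```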
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_28", "@cite_1", "@cite_3" ], "mid": [ "2199058847", "1452699976", "2303127372", "2211922800", "2115355580", "1569739730" ], "abstract": [ "This chapter presents an analysis of recommender systems in Technology-Enhanced Learning along their 15 years existence (2000–2014). All recommender systems considered for the review aim to support educational stakeholders by personalising the learning process. In this meta-review 82 recommender systems from 35 different countries have been investigated and categorised according to a given classification framework. The reviewed systems have been classified into seven clusters according to their characteristics and analysed for their contribution to the evolution of the RecSysTEL research field. Current challenges have been identified to lead the work of the forthcoming years.", "The increasing number of publications on recommender systems for Technology Enhanced Learning (TEL) evidence a growing interest in their development and deployment. In order to support learning, recommender systems for TEL need to consider specific requirements, which differ from the requirements for recommender systems in other domains like e-commerce. Consequently, these particular requirements motivate the incorporation of specific goals and methods in the evaluation process for TEL recommender systems. In this article, the diverse evaluation methods that have been applied to evaluate TEL recommender systems are investigated. A total of 235 articles are selected from major conferences, workshops, journals, and books where relevant work have been published between 2000 and 2014. These articles are quantitatively analysed and classified according to the following criteria: type of evaluation methodology, subject of evaluation, and effects measured by the evaluation. Results from the survey suggest that there is a growing awareness in the research community of the necessity for more elaborate evaluations. At the same time, there is still substantial potential for further improvements. This survey highlights trends and discusses strengths and shortcomings of the evaluation of TEL recommender systems thus far, thereby aiming to stimulate researchers to contemplate novel evaluation approaches.", "We present our overall third ranking solution for the KDD Cup 2010 on educational data mining. The goal of the competition was to predict a student’s ability to answer questions correctly, based on historic results. In our approach we use an ensemble of collaborative filtering techniques, as used in the field of recommender systems and adopt them to fit the needs of the competition. The ensemble of predictions is finally blended, using a neural network.", "Correct evaluation of Machine Learning based sequencers require large data availability, large scale experiments and consideration of different evaluation measures. Such constraints make the construction of ad-hoc Intelligent Tutoring Systems (ITS) unfeasible and impose early integration in already existing ITS, which possesses a large amount of tasks to be sequenced. However, such systems were not designed to be combined with Machine Learning methods and require several adjustments. As a consequence more than a half of the components based on recommender technology are never evaluated with an online experiment. 
In this paper we show how we adapted a Matrix Factorization based performance predictor and a score based policy for task sequencing to be integrated in a commercial ITS with over 2000 tasks on 20 topics. We evaluated the experiment under different perspectives in comparison with the ITS sequencer designed by experts over the years. As a result we achieve same post-test results and outperform the current sequencer in the perceived experience questionnaire with almost no curriculum authoring effort. We also showed that the sequencer possess a better user modeling, better adapting to the knowledge acquisition rate of the students.", "Collaborative information filtering techniques play a key role in many Web 2.0 applications. While they are currently mainly used for business purposes such as product recommendation, collaborative filtering also has potential for usage in eLearning applications. The quality of a student provided solution can be heuristically determined by peers who review the solution, thus effectively disburdening the workload of tutors. This paper presents a collaborative filtering approach which is specifically designed for eLearning applications. A controlled lab study with the system confirmed that the underlying algorithm is suitable as a diagnostic tool: The system-generated quality heuristic correlated highly with an expert-provided manual grading of the student solutions. This was true independent of whether the students provided fine-grained or coarsegrained evaluations of peer solutions, and independent of the task type that the students worked on. Further, the system required only few peer evaluations in order to achieve an acceptable prediction quality.", "We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a widely used subset of Item Response Theory (IRT) models which obtain as specific instances of the CF: the one-parameter logistic (Rasch) model, Birnbaum’s 2PL model, and Reckase’s multidimensional generalization M2PL. We find that IRT models perform well relative to generalized alternatives, and thus this method offers a fast and stable alternate approach to IRT parameter estimation. Using both real and simulated data we examine cases where oneor two-dimensional IRT models prevail and are not improved by increasing the number of features. Model selection is based on prediction accuracy of the CF, though it is shown to be consistent with factor analysis. In multidimensional cases the item parameterizations can be used in conjunction with cluster analysis to identify groups of items which measure different ability dimensions." ] }
1907.12209
2966634313
Monocular depth prediction plays a crucial role in understanding 3D scene geometry. Although recent methods have achieved impressive progress in evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of the high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces one simple type of geometric constraints, namely, virtual normal directions determined by randomly sampled three points in the reconstructed 3D space, we can considerably improve the depth prediction accuracy. Significantly, the byproduct of this predicted depth being sufficiently accurate is that we are now able to recover good 3D structures of the scene such as the point cloud and surface normal directly from the depth, eliminating the necessity of training new sub-models as was previously done. Experiments on two benchmarks: NYU Depth-V2 and KITTI demonstrate the effectiveness of our method and state-of-the-art performance.
Depth prediction from images is a long-standing problem. Previous work can be divided into active methods and passive methods. The former use auxiliary optical information for prediction, such as coded patterns @cite_30 , while the latter rely solely on image content. Monocular depth prediction @cite_5 @cite_50 @cite_22 @cite_13 @cite_14 has been extensively studied recently. As limited geometric information can be directly extracted from a monocular image, the problem is essentially ill-posed. Recently, owing to the structural features learned by very deep convolutional neural networks such as ResNet @cite_12 , various DCNN-based methods learn to predict depth from deep CNN features. Fu et al. @cite_16 proposed an encoder-decoder network, which extracts multi-scale features from the encoder and is trained in an end-to-end manner without iterative refinement. They achieved state-of-the-art performance on several datasets. Jiao et al. @cite_17 proposed an attention-driven loss, which merges semantic priors to improve prediction precision on datasets with unbalanced depth distributions.
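As an example of recasting depth regression as ordinal classification, the following sketch builds log-spaced depth bins in the spirit of DORN's spacing-increasing discretization; the exact formulation in the cited work may differ, and the depth range and bin count used here are illustrative:

```python
import numpy as np

def sid_edges(alpha, beta, K):
    """Log-spaced bin edges: bins widen with depth, so distant depths are resolved coarsely."""
    i = np.arange(K + 1)
    return np.exp(np.log(alpha) + np.log(beta / alpha) * i / K)

def depth_to_ordinal(depth, edges):
    """Map continuous depth values to ordinal bin indices 0..K-1."""
    return np.clip(np.digitize(depth, edges) - 1, 0, len(edges) - 2)

edges = sid_edges(alpha=0.7, beta=80.0, K=68)          # illustrative outdoor-like range
labels = depth_to_ordinal(np.array([1.2, 10.0, 55.0]), edges)
```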
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_22", "@cite_50", "@cite_5", "@cite_16", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2617593387", "", "2171740948", "1905829557", "2962741876", "2963488291", "", "2194775991", "2889002172" ], "abstract": [ "Abstract Structured-light projection methods are facing two remaining challenges. One is a high-speed optical metrology on objects with chromatic surfaces and the other one is avoiding phase errors caused by height steps or spatially isolated surfaces. To overcome them, this paper provides an effective profilometry with a single-shot image by employing the HSI color model to compose the colorful pattern. Three sinusoidal fringes with different phase steps are encoded in RGB channels respectively. The hue component of a deformed pattern is applied to reconstruct the 3D shape of an object. The saturation and the intensity are utilized to correct the hue demodulation. Besides, an effective color calibration procedure is developed to compensate the hue error. Experimental results verify the feasibility of the developed method.", "", "Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.", "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.", "Depth estimation from single monocular images is a key component in scene understanding. Most existing algorithms formulate depth estimation as a regression problem due to the continuous property of depths. However, the depth value of input data can hardly be regressed exactly to the ground-truth value. In this paper, we propose to formulate depth estimation as a pixelwise classification task. Specifically, we first discretize the continuous ground-truth depths into several bins and label the bins according to their depth ranges. Then, we solve the depth estimation problem as classification by training a fully convolutional deep residual network. Compared with estimating the exact depth of a single point, it is easier to estimate its depth range. More importantly, by performing depth classification instead of regression, we can easily obtain the confidence of a depth prediction in the form of probability distribution. 
With this confidence, we can apply an information gain loss to make use of the predictions that are close to ground-truth during training, as well as fully-connected conditional random fields for post-processing to further improve the performance. We test our proposed method on both indoor and outdoor benchmark RGB-Depth datasets and achieve state-of-the-art performance.", "Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. Besides, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirable low-resolution feature maps. To obtain high-resolution depth maps, skip-connections or multilayer deconvolution networks are required, which complicates network training and consumes much more computations. To eliminate or at least largely reduce these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network using an ordinary regression loss, our method achieves much higher accuracy and faster convergence in synch. Furthermore, we adopt a multi-scale network structure which avoids unnecessary spatial pooling and captures multi-scale information in parallel. The proposed deep ordinal regression network (DORN) achieves state-of-the-art results on three challenging benchmarks, i.e., KITTI [16], Make3D [49], and NYU Depth v2 [41], and outperforms existing methods by a large margin.", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Monocular depth estimation benefits greatly from learning based techniques. By studying the training data, we observe that the per-pixel depth values in existing datasets typically exhibit a long-tailed distribution. 
However, most previous approaches treat all the regions in the training data equally regardless of the imbalanced depth distribution, which restricts the model performance particularly on distant depth regions. In this paper, we investigate the long tail property and delve deeper into the distant depth regions (i.e. the tail part) to propose an attention-driven loss for the network supervision. In addition, to better leverage the semantic information for monocular depth estimation, we propose a synergy network to automatically learn the information sharing strategies between the two tasks. With the proposed attention-driven loss and synergy network, the depth estimation and semantic labeling tasks can be mutually improved. Experiments on the challenging indoor dataset show that the proposed approach achieves state-of-the-art performance on both monocular depth estimation and semantic labeling tasks." ] }
1907.12209
2966634313
Monocular depth prediction plays a crucial role in understanding 3D scene geometry. Although recent methods have achieved impressive progress in evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of the high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces one simple type of geometric constraints, namely, virtual normal directions determined by randomly sampled three points in the reconstructed 3D space, we can considerably improve the depth prediction accuracy. Significantly, the byproduct of this predicted depth being sufficiently accurate is that we are now able to recover good 3D structures of the scene such as the point cloud and surface normal directly from the depth, eliminating the necessity of training new sub-models as was previously done. Experiments on two benchmarks: NYU Depth-V2 and KITTI demonstrate the effectiveness of our method and state-of-the-art performance.
Most previous methods adopted only pixel-wise depth supervision to train the network. By contrast, Liu et al. @cite_19 combined a DCNN with a continuous conditional random field (CRF) to exploit the consistency of neighbouring pixels. The CRF establishes pair-wise constraints over local regions. Furthermore, several higher-order constraints have been investigated. Chen et al. @cite_33 applied generative adversarial training so that the network automatically learns a context-aware, patch-level loss. Note that most of these methods work directly with the depth map, rather than in the 3D space.
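A minimal sketch of a virtual-normal-style 3D constraint, following the description in the abstract (back-project depth with the camera intrinsics, sample random point triplets, and compare the resulting triangle normals), is given below; the pinhole intrinsics and the absence of degenerate-triplet filtering are simplifying assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def backproject(depth, fx, fy, cx, cy):
    """Back-project a (H, W) depth map to a (H*W, 3) point cloud with a pinhole model."""
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h, dtype=depth.dtype),
                          torch.arange(w, dtype=depth.dtype), indexing="ij")
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return torch.stack([x, y, depth], dim=-1).reshape(-1, 3)

def virtual_normal_loss(pred_depth, gt_depth, fx, fy, cx, cy, n_triplets=100):
    p = backproject(pred_depth, fx, fy, cx, cy)
    g = backproject(gt_depth, fx, fy, cx, cy)
    idx = torch.randint(0, p.shape[0], (n_triplets, 3))   # same pixel triplets for both clouds

    def normals(pts):
        a, b, c = pts[idx[:, 0]], pts[idx[:, 1]], pts[idx[:, 2]]
        n = torch.cross(b - a, c - a, dim=1)               # triangle normal per triplet
        return F.normalize(n, dim=1)

    # L1 difference between predicted and ground-truth virtual normals.
    return (normals(p) - normals(g)).abs().mean()
```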
{ "cite_N": [ "@cite_19", "@cite_33" ], "mid": [ "1803059841", "2888531279" ], "abstract": [ "In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.", "Monocular depth estimation is an extensively studied computer vision problem with a vast variety of applications. Deep learning-based methods have demonstrated promise for both supervised and unsupervised depth estimation from monocular images. Most existing approaches treat depth estimation as a regression problem with a local pixel-wise loss function. In this work, we innovate beyond existing approaches by using adversarial training to learn a context-aware, non-local loss function. Such an approach penalizes the joint configuration of predicted depth values at the patch-level instead of the pixel-level, which allows networks to incorporate more global information. In this framework, the generator learns a mapping between RGB images and its corresponding depth map, while the discriminator learns to distinguish depth map and RGB pairs from ground truth. This conditional GAN depth estimation framework is stabilized using spectral normalization to prevent mode collapse when learning from diverse datasets. We test this approach using a diverse set of generators that include U-Net and joint CNN-CRF. We benchmark this approach on the NYUv2, Make3D and KITTI datasets, and observe that adversarial training reduces relative error by several fold, achieving state-of-the-art performance." ] }
1907.12237
2964535354
This paper addresses the challenge of localization of anatomical landmarks in knee X-ray images at different stages of osteoarthritis (OA). Landmark localization can be viewed as regression problem, where the landmark position is directly predicted by using the region of interest or even full-size images leading to large memory footprint, especially in case of high resolution medical images. In this work, we propose an efficient deep neural networks framework with an hourglass architecture utilizing a soft-argmax layer to directly predict normalized coordinates of the landmark points. We provide an extensive evaluation of different regularization techniques and various loss functions to understand their influence on the localization performance. Furthermore, we introduce the concept of transfer learning from low-budget annotations, and experimentally demonstrate that such approach is improving the accuracy of landmark localization. Compared to the prior methods, we validate our model on two datasets that are independent from the train data and assess the performance of the method for different stages of OA severity. The proposed approach demonstrates better generalization performance compared to the current state-of-the-art.
There are several methods focusing solely on the ROI localization. Tiulpin @cite_16 proposed a novel anatomical proposal method to localize the knee joint area. Antony @cite_19 used fully convolutional networks for the same problem. Recently, Chen @cite_14 proposed to use object detection methods to measure the knee OA severity.
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_16" ], "mid": [ "2604759322", "2951269226", "2584226844" ], "abstract": [ "This paper introduces a new approach to automatically quantify the severity of knee OA using X-ray images. Automatically quantifying knee OA severity involves two steps: first, automatically localizing the knee joints; next, classifying the localized knee joint images. We introduce a new approach to automatically detect the knee joints using a fully convolutional neural network (FCN). We train convolutional neural networks (CNN) from scratch to automatically quantify the knee OA severity optimizing a weighted ratio of two loss functions: categorical cross-entropy and mean-squared loss. This joint training further improves the overall quantification of knee OA severity, with the added benefit of naturally producing simultaneous multi-class classification and regression outputs. Two public datasets are used to evaluate our approach, the Osteoarthritis Initiative (OAI) and the Multicenter Osteoarthritis Study (MOST), with extremely promising results that outperform existing approaches.", "Abstract Knee osteoarthritis (OA) is one major cause of activity limitation and physical disability in older adults. Early detection and intervention can help slow down the OA degeneration. Physicians’ grading based on visual inspection is subjective, varied across interpreters, and highly relied on their experience. In this paper, we successively apply two deep convolutional neural networks (CNN) to automatically measure the knee OA severity, as assessed by the Kellgren-Lawrence (KL) grading system. Firstly, considering the size of knee joints distributed in X-ray images with small variability, we detect knee joints using a customized one-stage YOLOv2 network. Secondly, we fine-tune the most popular CNN models, including variants of ResNet, VGG, and DenseNet as well as InceptionV3, to classify the detected knee joint images with a novel adjustable ordinal loss. To be specific, motivated by the ordinal nature of the knee KL grading task, we assign higher penalty to misclassification with larger distance between the predicted KL grade and the real KL grade. The baseline X-ray images from the Osteoarthritis Initiative (OAI) dataset are used for evaluation. On the knee joint detection, we achieve mean Jaccard index of 0.858 and recall of 92.2 under the Jaccard index threshold of 0.75. On the knee KL grading task, the fine-tuned VGG-19 model with the proposed ordinal loss obtains the best classification accuracy of 69.7 and mean absolute error (MAE) of 0.344. Both knee joint detection and knee KL grading achieve state-of-the-art performance. The code, dataset, and models are released at https: github.com PingjunChen KneeAnalysis .", "Osteoarthritis (OA) is a common musculoskelet al condition typically diagnosed from radiographic assessment after clinical examination. However, a visual evaluation made by a practitioner suffers from subjectivity and is highly dependent on the experience. Computer-aided diagnostics (CAD) could improve the objectivity of knee radiographic examination. The first essential step of knee OA CAD is to automatically localize the joint area. However, according to the literature this task itself remains challenging. The aim of this study was to develop novel and computationally efficient method to tackle the issue. Here, three different datasets of knee radiographs were used (n = 473 93 77) to validate the overall performance of the method. 
Our pipeline consists of two parts: anatomically-based joint area proposal and their evaluation using Histogram of Oriented Gradients and the pre-trained Support Vector Machine classifier scores. The obtained results for the used datasets show the mean intersection over the union equals to: 0.84, 0.79 and 0.78. Using a high-end computer, the method allows to automatically annotate conventional knee radiographs within 14–16 ms and high resolution ones within 170 ms. Our results demonstrate that the developed method is suitable for large-scale analyses." ] }
1907.12237
2964535354
This paper addresses the challenge of localization of anatomical landmarks in knee X-ray images at different stages of osteoarthritis (OA). Landmark localization can be viewed as regression problem, where the landmark position is directly predicted by using the region of interest or even full-size images leading to large memory footprint, especially in case of high resolution medical images. In this work, we propose an efficient deep neural networks framework with an hourglass architecture utilizing a soft-argmax layer to directly predict normalized coordinates of the landmark points. We provide an extensive evaluation of different regularization techniques and various loss functions to understand their influence on the localization performance. Furthermore, we introduce the concept of transfer learning from low-budget annotations, and experimentally demonstrate that such approach is improving the accuracy of landmark localization. Compared to the prior methods, we validate our model on two datasets that are independent from the train data and assess the performance of the method for different stages of OA severity. The proposed approach demonstrates better generalization performance compared to the current state-of-the-art.
The proposed approach is related to regression-based methods for keypoint localization @cite_7 . We utilize an hourglass network, an encoder-decoder model initially introduced for human pose estimation @cite_34 , and address both the ROI and landmark localization tasks. Several other studies in the medical imaging domain have leveraged a similar approach by applying U-Net @cite_38 to the landmark localization problem @cite_37 @cite_0 . However, encoder-decoder networks are computationally heavy during the training phase since they regress a tensor of high-resolution heatmaps, which is challenging for medical images that are typically large. Notably, decreasing the image resolution can negatively impact the accuracy of landmark localization. In addition, most existing approaches use a refinement step, which further increases the computational burden. Nevertheless, hourglass CNNs are widely used in human pose estimation @cite_34 due to the possibility of lowering the resolution and the absence of precise ground truth.
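A soft-argmax layer of the kind mentioned in the abstract can be implemented as a spatial softmax followed by an expectation over a coordinate grid, which keeps the coordinate prediction differentiable; the tensor shapes and temperature below are illustrative, not the paper's exact settings:

```python
import torch

def soft_argmax(heatmaps, beta=10.0):
    """heatmaps: (B, K, H, W) -> normalized landmark coordinates (B, K, 2) in [0, 1]."""
    b, k, h, w = heatmaps.shape
    probs = torch.softmax(beta * heatmaps.reshape(b, k, -1), dim=-1).reshape(b, k, h, w)
    ys = torch.linspace(0, 1, h, device=heatmaps.device)
    xs = torch.linspace(0, 1, w, device=heatmaps.device)
    exp_y = (probs.sum(dim=3) * ys).sum(dim=2)   # expectation over rows
    exp_x = (probs.sum(dim=2) * xs).sum(dim=2)   # expectation over columns
    return torch.stack([exp_x, exp_y], dim=-1)
```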
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_7", "@cite_0", "@cite_34" ], "mid": [ "1901129140", "", "2799930024", "2925288829", "2307770531" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "", "The locations of the fiducial facial landmark points around facial components and facial contour capture the rigid and non-rigid facial deformations due to head movements and facial expressions. They are hence important for various facial analysis tasks. Many facial landmark detection algorithms have been developed to automatically detect those key points over the years, and in this paper, we perform an extensive review of them. We classify the facial landmark detection algorithms into three major categories: holistic methods, Constrained Local Model (CLM) methods, and the regression-based methods. They differ in the ways to utilize the facial appearance and shape information. The holistic methods explicitly build models to represent the global facial appearance and shape information. The CLMs explicitly leverage the global shape model but build the local appearance models. The regression based methods implicitly capture facial shape and appearance information. For algorithms within each category, we discuss their underlying theories as well as their differences. We also compare their performances on both controlled and in the wild benchmark datasets, under varying facial expressions, head poses, and occlusion. Based on the evaluations, we point out their respective strengths and weaknesses. There is also a separate section to review the latest deep learning based algorithms. The survey also includes a listing of the benchmark databases and existing software. Finally, we identify future research directions, including combining methods in different categories to leverage their respective strengths to solve landmark detection \"in-the-wild\".", "Abstract In many medical image analysis applications, only a limited amount of training data is available due to the costs of image acquisition and the large manual annotation effort required from experts. Training recent state-of-the-art machine learning methods like convolutional neural networks (CNNs) from small datasets is a challenging task. In this work on anatomical landmark localization, we propose a CNN architecture that learns to split the localization task into two simpler sub-problems, reducing the overall need for large training datasets. 
Our fully convolutional SpatialConfiguration-Net (SCN) learns this simplification due to multiplying the heatmap predictions of its two components and by training the network in an end-to-end manner. Thus, the SCN dedicates one component to locally accurate but ambiguous candidate predictions, while the other component improves robustness to ambiguities by incorporating the spatial configuration of landmarks. In our extensive experimental evaluation, we show that the proposed SCN outperforms related methods in terms of landmark localization error on a variety of size-limited 2D and 3D landmark localization datasets, i.e., hand radiographs, lateral cephalograms, hand MRIs, and spine CTs.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connect local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Although convolutional neural networks (CNNs) have achieved great success in analyzing 2D images, they cannot be directly applied to point clouds because of their unorganized nature. Without a pixel-based neighborhood defined, vanilla CNNs cannot extract local information and gradually expand receptive field sizes in a meaningful manner. Thus, segmentation tasks were first performed in a way that simulates 2D scenarios – by fusing partial views represented as RGB-D images @cite_4 @cite_16 @cite_25 @cite_12 . Other works transform point clouds into cost-inefficient voxel representations on which CNNs can be applied directly @cite_5 @cite_8 @cite_11 .
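For illustration, a basic occupancy-grid voxelization of the kind consumed by such volumetric 3D CNNs can be written as follows; the grid resolution and normalization are illustrative choices, not those of the cited methods:

```python
import numpy as np

def voxelize(points, grid=32):
    """points: (N, 3) array -> (grid, grid, grid) binary occupancy volume."""
    mins = points.min(axis=0)
    span = points.max(axis=0) - mins + 1e-9          # avoid division by zero on flat axes
    idx = np.floor((points - mins) / span * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0       # mark occupied cells
    return vol
```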
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_5", "@cite_16", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2024844736", "2609719703", "2211722331", "", "2963600949", "", "2594519801" ], "abstract": [ "One of the most crucial requirements for building a multi-view system is the estimation of relative poses of all cameras. An approach tailored for a RGB-D cameras based multi-view system is missing. We propose BAICP+ which combines Bundle Adjustment (BA) and Iterative Closest Point (ICP) algorithms to take into account both 2D visual and 3D shape information in one minimization formulation to estimate relative pose parameters of each camera. BAICP+ is generic enough to take different types of visual features into account and can be easily adapted to varying quality of 2D and 3D data. We perform experiments on real and simulated data. Results show that with the right weighting factor BAICP+ has an optimal performance when compared to BA and ICP used independently or sequentially.", "In this paper, we tackle the labeling problem for 3D point clouds. We introduce a 3D point cloud labeling scheme based on 3D Convolutional Neural Network. Our approach minimizes the prior knowledge of the labeling problem and does not require a segmentation step or hand-crafted features as most previous approaches did. Particularly, we present solutions for large data handling during the training and testing process. Experiments performed on the urban point cloud dataset containing 7 categories of objects show the robustness of our approach.", "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.", "", "We propose a method for reconstructing 3D shapes from 2D sketches in the form of line drawings. Our method takes as input a single sketch, or multiple sketches, and outputs a dense point cloud representing a 3D reconstruction of the input sketch(es). The point cloud is then converted into a polygon mesh. At the heart of our method lies a deep, encoder-decoder network. The encoder converts the sketch into a compact representation encoding shape information. The decoder converts this representation into depth and normal maps capturing the underlying surface from several output viewpoints. The multi-view maps are then consolidated into a 3D point cloud by solving an optimization problem that fuses depth and normals across all viewpoints. Based on our experiments, compared to other methods, such as volumetric networks, our architecture offers several advantages, including more faithful reconstruction, higher output surface resolution, better preservation of topology and shape structure.", "", "A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. 
Unfortunately, in the context of RGB-D scene understanding, very little data is available – current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show the advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Although these methods did benefit from mature 2D image-processing network structures, inefficient 3D data representations prevented them from performing well on scene segmentation, where it is necessary to deal with large, dense 3D scenes as a whole. Therefore, recent research gradually turned to networks that operate directly on point clouds for semantic segmentation of complex indoor and outdoor scenes @cite_6 @cite_7 @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_6" ], "mid": [ "2769473888", "2797997528", "2895472109" ], "abstract": [ "We propose a novel deep learning-based framework to tackle the challenge of semantic segmentation of large-scale point clouds of millions of points. We argue that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements. SPGs offer a compact yet rich representation of contextual relationships between object parts, which is then exploited by a graph convolutional network. Our framework sets a new state of the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for both Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the S3DIS dataset).", "We present an approach to semantic scene analysis using deep convolutional networks. Our approach is based on tangent convolutions - a new construction for convolutional networks on 3D data. In contrast to volumetric approaches, our method operates directly on surface geometry. Crucially, the construction is applicable to unstructured point clouds and other noisy real-world data. We show that tangent convolutions can be evaluated efficiently on large-scale point clouds with millions of points. Using tangent convolutions, we design a deep fully-convolutional network for semantic segmentation of 3D point clouds, and apply it to challenging real-world datasets of indoor and outdoor 3D environments. Experimental results show that the presented approach outperforms other recent deep network constructions in detailed analysis of large 3D scenes.", "Semantic segmentation of 3D unstructured point clouds remains an open research problem. Recent works predict semantic labels of 3D points by virtue of neural networks but take limited context knowledge into consideration. In this paper, a novel end-to-end approach for unstructured point cloud semantic segmentation, named 3P-RNN, is proposed to exploit the inherent contextual features. First the efficient pointwise pyramid pooling module is investigated to capture local structures at various densities by taking multi-scale neighborhood into account. Then the two-direction hierarchical recurrent neural networks (RNNs) are utilized to explore long-range spatial dependencies. Each recurrent layer takes as input the local features derived from unrolled cells and sweeps the 3D space along two directions successively to integrate structure knowledge. On challenging indoor and outdoor 3D datasets, the proposed framework demonstrates robust performance superior to state-of-the-arts." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show the advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
As introduced, PointNet used multi-layer perceptrons (which process each point independently) to accommodate the unordered nature of point clouds @cite_8 . Furthermore, similar approaches such as @math convolutional kernels @cite_10 , radius querying @cite_1 , or nearest-neighbor searching @cite_0 were also adopted. Because local dependencies were not effectively modeled, overfitting constantly occurred when these networks were used for large-scale scene segmentation. In addition, work like R-Conv @cite_9 tried to avoid time-consuming neighbor searching by applying a global recurrent transformation prior to convolutional analysis. However, scalability problems still occurred, as the global RNN cannot operate directly on a point cloud representing an entire dense scene, which often contains several million points.
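As a rough illustration of the per-point processing criticized above (a simplified sketch following the shared-MLP idea, not the PointNet reference implementation; the array shapes and names are assumptions), every point is pushed through the same small MLP in isolation, so no information is exchanged between neighboring points:

    import numpy as np

    def shared_mlp(points, w1, w2):
        """Apply the same two-layer MLP to every point independently.

        points: (N, 3); w1: (3, 64); w2: (64, 128). Because each row is
        processed in isolation, local dependencies between neighboring
        points are not modeled, which is the weakness discussed above.
        """
        h = np.maximum(points @ w1, 0.0)   # ReLU on per-point features
        return np.maximum(h @ w2, 0.0)     # (N, 128) per-point features

    rng = np.random.default_rng(0)
    feats = shared_mlp(rng.random((1024, 3)),
                       rng.random((3, 64)), rng.random((64, 128)))
    print(feats.shape)  # (1024, 128)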
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_10" ], "mid": [ "2609719703", "2890834803", "2624503621", "2606987267", "2963719584" ], "abstract": [ "In this paper, we tackle the labeling problem for 3D point clouds. We introduce a 3D point cloud labeling scheme based on 3D Convolutional Neural Network. Our approach minimizes the prior knowledge of the labeling problem and does not require a segmentation step or hand-crafted features as most previous approaches did. Particularly, we present solutions for large data handling during the training and testing process. Experiments performed on the urban point cloud dataset containing 7 categories of objects show the robustness of our approach.", "Pointcloud is a very precise digital format for recording objects in space. Pointclouds have received increasing attention lately, due to the higher amount of information it provides compared to images. In this paper, we propose a new deep learning architecture called R-CovNet, designed for 3D object recognition. Unlike previous architectures that usually sample or convert pointcloud into three-dimensional grids before processing, R-CovNet does not require any preprocessing. Our main goal is to provide a permutation invariant architecture specially designed for pointclouds data of any size. Experiments with well-known benchmarks show that R-CovNet can achieve an accuracy of 92.7 , thus outperforming all the volumetric methods.", "Few prior works study deep learning on point sets. PointNet by is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.", "We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and share parameters of these transformations according to the subdivisions of the point clouds imposed onto them by Kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behaviour. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.", "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. 
The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1" ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show the advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Tangent Convolution @cite_7 proposed a way to efficiently model local dependencies and align convolutional filters at different scales. The work is based on local covariance analysis and down-sampled neighborhood reconstruction from raw data points. Although tangent convolution itself functioned well in extracting local features, the network architecture was limited by static, uniform intermediate feature aggregation and a complete lack of global integration.
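The local covariance analysis underlying tangent convolution can be sketched as follows (a conceptual illustration under our own assumptions, not the authors' code): the eigenvector of a neighborhood's covariance matrix with the smallest eigenvalue approximates the surface normal, and the two remaining eigenvectors span the tangent plane onto which neighbors are projected before filtering.

    import numpy as np

    def tangent_frame(neighbors):
        """Estimate a tangent plane from a (K, 3) neighborhood via PCA."""
        centered = neighbors - neighbors.mean(axis=0)
        cov = centered.T @ centered / len(neighbors)
        eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
        normal = eigvecs[:, 0]                     # smallest-variance direction
        tangent_u, tangent_v = eigvecs[:, 2], eigvecs[:, 1]
        return tangent_u, tangent_v, normal

    def project_to_tangent(neighbors, center, tangent_u, tangent_v):
        """Project neighbors onto the local tangent plane (2D coordinates)."""
        rel = neighbors - center
        return np.stack([rel @ tangent_u, rel @ tangent_v], axis=1)

    pts = np.random.rand(50, 3)
    u, v, n = tangent_frame(pts)
    print(project_to_tangent(pts, pts.mean(axis=0), u, v).shape)  # (50, 2)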
{ "cite_N": [ "@cite_7" ], "mid": [ "2797997528" ], "abstract": [ "We present an approach to semantic scene analysis using deep convolutional networks. Our approach is based on tangent convolutions - a new construction for convolutional networks on 3D data. In contrast to volumetric approaches, our method operates directly on surface geometry. Crucially, the construction is applicable to unstructured point clouds and other noisy real-world data. We show that tangent convolutions can be evaluated efficiently on large-scale point clouds with millions of points. Using tangent convolutions, we design a deep fully-convolutional network for semantic segmentation of 3D point clouds, and apply it to challenging real-world datasets of indoor and outdoor 3D environments. Experimental results show that the presented approach outperforms other recent deep network constructions in detailed analysis of large 3D scenes." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show the advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Several works turned to the global scale for permutation robustness. The simplest form, global maximum pooling, only fulfilled light-weight tasks like object classification or part segmentation @cite_21 . Moreover, RNNs constructed with advanced cells like Long Short-Term Memory @cite_28 or Gated Recurrent Units @cite_19 offered promising results on scene segmentation @cite_26 , even for architectures without significant consideration for local feature extraction @cite_15 @cite_14 . However, in those cases the global RNNs were built deep, bidirectional, or compact with hidden units, imposing a strict limitation on the direct input. As a result, the original point cloud was often down-sampled to an extreme extent, or the network could only operate on sections of the original point cloud @cite_15 .
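Global maximum pooling, the simplest permutation-robust aggregator mentioned above, reduces to a single per-channel maximum; the short sketch below (illustrative only) also checks that permuting the points leaves the result unchanged, which is exactly why all finer spatial detail is discarded:

    import numpy as np

    def global_max_pool(point_features):
        """Reduce (N, C) per-point features to a single (C,) descriptor.

        The maximum is taken independently per channel, so reordering the
        N points does not change the output. Everything below the global
        scale is lost, which is why this only supports light-weight tasks.
        """
        return point_features.max(axis=0)

    feats = np.random.rand(2048, 128)
    perm = np.random.permutation(2048)
    assert np.allclose(global_max_pool(feats), global_max_pool(feats[perm]))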
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_28", "@cite_21", "@cite_19", "@cite_15" ], "mid": [ "2769473888", "2963336905", "", "2560609797", "1924770834", "2963517242" ], "abstract": [ "We propose a novel deep learning-based framework to tackle the challenge of semantic segmentation of large-scale point clouds of millions of points. We argue that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements. SPGs offer a compact yet rich representation of contextual relationships between object parts, which is then exploited by a graph convolutional network. Our framework sets a new state of the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for both Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the S3DIS dataset).", "Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods.", "", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). 
Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.", "Point clouds are an efficient data format for 3D data. However, existing 3D segmentation methods for point clouds either do not model local dependencies [21] or require added computations [14, 23]. This work presents a novel 3D segmentation framework, RSNet1, to efficiently model local structures in point clouds. The key component of the RSNet is a lightweight local dependency module. It is a combination of a novel slice pooling layer, Recurrent Neural Network (RNN) layers, and a slice unpooling layer. The slice pooling layer is designed to project features of unordered points onto an ordered sequence of feature vectors so that traditional end-to-end learning algorithms (RNNs) can be applied. The performance of RSNet is validated by comprehensive experiments on the S3DIS[1], ScanNet[3], and ShapeNet [34] datasets. In its simplest form, RSNets surpass all previous state-of-the-art methods on these benchmarks. And comparisons against previous state-of-the-art methods [21, 23] demonstrate the efficiency of RSNets." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show the advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Various works in this area aimed to adapt existing supervised-learning networks into auto-encoders. For example, FoldingNet @cite_23 learned global features of a 3D object by constructing a deformable 2D grid surface; PointWise @cite_20 considered the theoretical smoothness of object surfaces; and MortonNet @cite_17 learned compact local features by generating fractal space-filling curves and predicting their endpoints. Although features provided by these auto-encoders are reported to be beneficial, we do not adopt them in our network, to allow a fair evaluation of the aggregation method we propose.
{ "cite_N": [ "@cite_20", "@cite_23", "@cite_17" ], "mid": [ "2909765569", "2796426482", "" ], "abstract": [ "We present a novel approach to learning a point-wise, meaningful embedding for point-clouds in an unsupervised manner, through the use of neural-networks. The domain of point-cloud processing via neural-networks is rapidly evolving, with novel architectures and applications frequently emerging. Within this field of research, the availability and plethora of unlabeled point-clouds as well as their possible applications make finding ways of characterizing this type of data appealing. Though significant advancement was achieved in the realm of unsupervised learning, its adaptation to the point-cloud representation is not trivial. Previous research focuses on the embedding of entire point-clouds representing an object in a meaningful manner. We present a deep learning framework to learn point-wise description from a set of shapes without supervision. Our approach leverages self-supervision to define a relevant loss function to learn rich per-point features. We train a neural-network with objectives based on context derived directly from the raw data, with no added annotation. We use local structures of point-clouds to incorporate geometric information into each point's latent representation. In addition to using local geometric information, we encourage adjacent points to have similar representations and vice-versa, creating a smoother, more descriptive representation. We demonstrate the ability of our method to capture meaningful point-wise features through three applications. By clustering the learned embedding space, we perform unsupervised part-segmentation on point clouds. By calculating euclidean distance in the latent space we derive semantic point-analogies. Finally, by retrieving nearest-neighbors in our learned latent space we present meaningful point-correspondence within and among point-clouds.", "Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7 parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http: www.merl.com research license#FoldingNet", "" ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show the advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Different from the common goal of finding a rich, concise feature embedding, SO-Net @cite_10 learned a self-organizing map (SOM) in an unsupervised manner for feature extraction and aggregation. Despite its novelty, few performance improvements were observed even when compared to PointNet or OctNet @cite_24 . Possible reasons include the way the SOM was used and a lack of deep local and global analysis.
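For reference, a self-organizing map of the kind used here can be reduced to the following toy update rule (our own simplification; a real SOM measures neighborhood influence over the node grid topology rather than in data space, and the node count, learning rate, and sigma are assumed values): each input point pulls its best-matching node, and to a lesser degree nearby nodes, toward itself, so the nodes gradually form a coarse skeleton over the cloud.

    import numpy as np

    def som_step(nodes, point, lr=0.1, sigma=0.5):
        """One online update of an (M, 3) array of SOM node positions."""
        dists = np.linalg.norm(nodes - point, axis=1)
        best = np.argmin(dists)
        # Soft neighborhood around the best-matching node (simplified:
        # measured in 3D space instead of grid topology).
        influence = np.exp(-np.linalg.norm(nodes - nodes[best], axis=1) ** 2
                           / (2 * sigma ** 2))
        return nodes + lr * influence[:, None] * (point - nodes)

    nodes = np.random.rand(64, 3)
    for p in np.random.rand(1000, 3):
        nodes = som_step(nodes, p)
    print(nodes.shape)  # (64, 3) skeleton nodes spread over the cloud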
{ "cite_N": [ "@cite_24", "@cite_10" ], "mid": [ "2556802233", "2963719584" ], "abstract": [ "We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.", "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1" ] }
1907.11941
2966630200
There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunneled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access.
This paper focuses on decrypting network traffic encrypted with the ChaCha20-Poly1305 cipher. Prior studies have investigated potential vulnerabilities in both cipher design and cipher implementation. Researchers have found no vulnerabilities in the ChaCha20 design. For example, differential attacks using techniques such as identifying significant key bits only succeeded against reduced cipher rounds and required significant volumes of plaintext-ciphertext pairs @cite_8 @cite_34 . Combined linear and differential analysis improves performance, but is similarly restricted @cite_21 .
{ "cite_N": [ "@cite_34", "@cite_21", "@cite_8" ], "mid": [ "2336016133", "2574027529", "1577801461" ], "abstract": [ "Recently, ChaCha20 (the stream cipher ChaCha with 20 rounds) is in the process of being a standardized and thus it attracts serious interest in cryptanalysis. The most significant effort to analyse Salsa and ChaCha was explained by Aumasson et?al. long back (FSE 2008) and further, only minor improvements could be achieved. In this paper, first we revisit the work of Aumasson et?al. to provide a clearer insight of the existing attack (2248 complexity for ChaCha7, i.e.,?7 rounds) and show certain improvements (complexity around 2243) by exploiting additional Probabilistic Neutral Bits. More importantly, we describe a novel idea that explores proper choice of IVs corresponding to the keys, for which the complexity can be improved further (2239). The choice of IVs corresponding to the keys is the prime observation of this work. We systematically show how a single difference propagates after one round and how the differences can be reduced with proper choices of IVs. For Salsa too (Salsa20 8, i.e.,?8 rounds), we get improvement in complexity, reducing it to 2 245.5 from 2 247.2 reported by Aumasson et?al.", "", "The stream cipher Salsa20 was introduced by Bernstein in 2005 as a candidate in the eSTREAM project, accompanied by the reduced versions Salsa20 8 and Salsa20 12. ChaCha is a variant of Salsa20 aiming at bringing better diffusion for similar performance. Variants of Salsa20 with up to 7 rounds (instead of 20) have been broken by differential cryptanalysis, while ChaCha has not been analyzed yet. We introduce a novel method for differential cryptanalysis of Salsa20 and ChaCha, inspired by correlation attacks and related to the notion of neutral bits. This is the first application of neutral bits in stream cipher cryptanalysis. It allows us to break the 256-bit version of Salsa20 8, to bring faster attacks on the 7-round variant, and to break 6- and 7-round ChaCha. In a second part, we analyze the compression function Rumba, built as the XOR of four Salsa20 instances and returning a 512-bit output. We find collision and preimage attacks for two simplified variants, then we discuss differential attacks on the original version, and exploit a high-probability differential to reduce complexity of collision search from 2256to 279for 3-round Rumba. To prove the correctness of our approach we provide examples of collisions and near-collisions on simplified versions." ] }
1907.11941
2966630200
There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunneled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access.
ChaCha20 implementations may be vulnerable to side-channel attacks. While the cipher design may prevent timing attacks @cite_19 , correlating power or electromagnetic radiation when specific cryptographic activities are performed may leak key stream information @cite_7 @cite_10 . Inducing instruction skips, for example with a laser or an electromagnetic pulse, could potentially produce the key stream, but timing the activity would be challenging @cite_28 . Furthermore, these approaches may be impractical in real-world scenarios.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_10", "@cite_7" ], "mid": [ "", "2809200209", "2768792654", "2612613213" ], "abstract": [ "", "Salsa20 is a family of 256-bit stream ciphers designed in 2005 and submitted to eSTREAM, the ECRYPT Stream Cipher Project. Salsa20 has progressed to the third round of eSTREAM without any changes. The 20-round stream cipher Salsa20 20 is consistently faster than AES and is recommended by the designer for typical cryptographic applications. The reduced-round ciphers Salsa20 12 and Salsa20 8 are among the fastest 256-bit stream ciphers available and are recommended for applications where speed is more important than confidence. The fastest known attacks use ˜?2153 simple operations against Salsa20 7, ˜?2249 simple operations against Salsa20 8, and ˜?2255 simple operations against Salsa20 9, Salsa20 10, etc. In this paper, the Salsa20 designer presents Salsa20 and discusses the decisions made in the Salsa20 design.", "ChaCha is a family of stream ciphers that are very efficient on constrainted platforms. In this paper, we present electromagnetic side-channel analyses for two different software implementations of ChaCha20 on a 32-bit architecture: one compiled and another one directly written in assembly. On the device under test, practical experiments show that they have different levels of resistance to side-channel attacks. For the most leakage-resilient implementation, an analysis of the whole quarter round is required. To overcome this complication, we introduce an optimized attack based on a divide-and-conquer strategy named bricklayer attack.", "The stream cipher ChaCha20 and the MAC function Poly1305 have been published as IETF RFC 7539. Since then, the industry is starting to use it more often. For example, it has been implemented by Google in their Chrome browser for TLS and also support has been added to OpenSSL, as well as OpenSSH. It is often claimed, that the algorithms are designed to be resistant to side-channel attacks. However, this is only true, if the only observable side-channel is the timing behavior. In this paper, we show that ChaCha20 is susceptible to power and EM side-channel analysis, which also translates to an attack on Poly1305, if used together with ChaCha20 for key generation. As a first countermeasure, we analyze the effectiveness of randomly shuffling the operations of the ChaCha round function." ] }
1907.11941
2966630200
There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunneled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access.
Cryptographic artefacts have been found in device memory. For instance, RSA keys may be discovered in virtual machine images @cite_32 @cite_1 . Studies have also discovered DES and AES cipher keys in cold-boot attacks @cite_17 , Skipjack and Twofish key blocks in virtual memory @cite_22 , and AES session keys in virtual memory @cite_3 . Although these approaches use entropy measures to determine possible keys, they do not decrypt ciphertext encrypted with ciphers such as AES in Counter mode and ChaCha20, which require nonces or initialization vectors. This study builds on the TLSkex @cite_3 and MemDecrypt @cite_38 studies, which used privileged monitors to extract identified virtual machine process memory in order to identify TLS 1.2 AES keys, and SSH AES keys and initialization vectors, respectively. Instead, this study uses a different algorithm to find ChaCha20 cipher keys and nonces in device memory, enabling SSH and TLS sessions to be decrypted in a non-invasive manner. The approach may also enable decryption of Adiantum-encrypted data @cite_12 , Adiantum being the Google disk-encryption algorithm based on XChaCha20, an extension of ChaCha20 and Salsa20 with a longer nonce @cite_33 .
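The entropy-based key search mentioned above can be illustrated with a short sketch (our own simplification with assumed window size, stride, and threshold, not the MemDecrypt or TLSkex algorithm itself): slide a window the size of a candidate 256-bit key over a memory extract and report offsets whose Shannon entropy approaches that of random data.

    import math
    import os
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte of the given buffer."""
        counts = Counter(data)
        total = len(data)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def candidate_key_offsets(dump: bytes, key_len=32, threshold=4.6, step=4):
        """Yield offsets of high-entropy windows that may hold a 256-bit key.

        For a 32-byte window the maximum entropy is log2(32) = 5 bits/byte,
        so a threshold near 4.6 flags windows that look essentially random.
        """
        for off in range(0, len(dump) - key_len + 1, step):
            if shannon_entropy(dump[off:off + key_len]) >= threshold:
                yield off

    # Example: scan a synthetic snapshot (random block followed by zeros).
    dump = os.urandom(1024) + bytes(1024)
    print(list(candidate_key_offsets(dump))[:5])  # offsets inside the random block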
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_33", "@cite_1", "@cite_32", "@cite_3", "@cite_12", "@cite_17" ], "mid": [ "2935536571", "2136331433", "", "2076132976", "2154464007", "2324374191", "2912809249", "2175377689" ], "abstract": [ "Abstract Decrypting and inspecting encrypted malicious communications may assist crime detection and prevention. Access to client or server memory enables the discovery of artefacts required for decrypting secure communications. This paper develops the MemDecrypt framework to investigate the discovery of encrypted artefacts in memory and applies the methodology to decrypting the secure communications of virtual machines. For Secure Shell, used for secure remote server management, file transfer, and tunnelling inter alia, MemDecrypt experiments rapidly yield AES-encrypted details for a live secure file transfer including remote user credentials, transmitted file name and file contents. Thus, MemDecrypt discovers cryptographic artefacts and quickly decrypts live SSH malicious communications including the detection and interception of data exfiltration of confidential data.", "The increasing popularity of cryptography poses a great challenge in the field of digital forensics. Digital evidence protected by strong encryption may be impossible to decrypt without the correct key. We propose novel methods for cryptographic key identification and present a new proof of concept tool named Interrogate that searches through volatile memory and recovers cryptographic keys used by the ciphers AES, Serpent and Twofish. By using the tool in a virtual digital crime scene, we simulate and examine the different states of systems where well known and popular cryptosystems are installed. Our experiments show that the chances of uncovering cryptographic keys are high when the digital crime scene are in certain well-defined states. Finally, we argue that the consequence of this and other recent results regarding memory acquisition require that the current practices of digital forensics should be guided towards a more forensically sound way of handling live analysis in a digital crime scene.", "", "Cloud providers must detect malicious traffic in and out of their network, virtual or otherwise. The use of Intrusion Detection Systems (IDS) has been hampered by the encryption of network communication. The result is that current signatures cannot match potentially malicious requests. A method to acquire the encryption keys is Virtual Machine Introspection (VMI). VMI is a technique to view the internal, and yet raw, representation of a Virtual Machine (VM). Current methods to find keys are expensive and use sliding windows or entropy. This inevitably requires reading the memory space of the entire process, or worse the OS, in a live environment where performance is paramount. This paper describes a structured walk of memory to find keys, particularly RSA, using as fewer reads from the VM as possible. In doing this we create a scalable mechanism to populate an IDS with keys to analyse traffic.", "Cloud Computing is a recent paradigm that is creating high expectations about benefits such as the pay-per-use model and elasticity of resources. However, with this optimism come also concerns about security. In a public cloud, the user's data storage and processing is no longer done inside its premises, but in data centers owned and administrated by the cloud provider. This may be a concern for organizations that deal with critical data, such as medical records. 
We show that a malicious insider can steal confidential data of the cloud user, so the user is mostly left with trusting the cloud provider. The paper achieves this goal by showing a set of attacks that demonstrate how a malicious insider can easily obtain passwords, cryptographic keys, files and other confidential data. Additionally, the paper shows that recent research results that might be useful to protect data in the cloud, are still not enough to deal with the problem. The paper is a call to arms for research in the topic.", "Abstract Nowadays, many applications by default use encryption of network traffic to achieve a higher level of privacy and confidentiality. One of the most frequently applied cryptographic protocols is Transport Layer Security (TLS). However, also adversaries make use of TLS encryption in order to hide attacks or command & control communication. For detecting and analyzing such threats, making the contents of encrypted communication available to security tools becomes essential. The ideal solution for this problem should offer efficient and stealthy decryption without having a negative impact on over-all security. This paper presents TLSkex (TLS Key EXtractor), an approach to extract the master key of a TLS connection at runtime from the virtual machine's main memory using virtual machine introspection techniques. Afterwards, the master key is used to decrypt the TLS session. In contrast to other solutions, TLSkex neither manipulates the network connection nor the communicating application. Thus, our approach is applicable for malware analysis and intrusion detection in scenarios where applications cannot be modified. Moreover, TLSkex is also able to decrypt TLS sessions that use perfect forward secrecy key exchange algorithms. In this paper, we define a generic approach for TLS key extraction based on virtual machine introspection, present our TLSkex prototype implementation of this approach, and evaluate the prototype.", "We present HBSH, a simple construction for tweakable length-preserving encryption which supports the fastest options for hashing and stream encryption for processors without AES or other crypto instructions, with a provable quadratic advantage bound. Our composition Adiantum uses NH, Poly1305, XChaCha12, and a single AES invocation. On an ARM Cortex-A7 processor, Adiantum decrypts 4096-byte messages at 10.6 cycles per byte, over five times faster than AES-256-XTS, with a constant-time implementation. We also define HPolyC which is simpler and has excellent key agility at 13.6 cycles per byte.", "Contrary to widespread assumption, dynamic RAM (DRAM), the main memory in most modern computers, retains its contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard. Although DRAM becomes less reliable when it is not refreshed, it is not immediately erased, and its contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access to a machine. It poses a particular threat to laptop users who rely on disk encryption: we demonstrate that it could be used to compromise several popular disk encryption products without the need for any special devices or materials. 
We experimentally characterize the extent and predictability of memory retention and report that remanence times can be increased dramatically with simple cooling techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for mitigating these risks, we know of no simple remedy that would eliminate them." ] }
1907.11914
2965936015
We extend the state-of-the-art Cascade R-CNN with a simple feature sharing mechanism. Our approach focuses on the performance increases on high IoU but decreases on low IoU thresholds--a key problem this detector suffers from. Feature sharing is extremely helpful: our results show that, with this mechanism embedded into all stages, we can easily narrow the gap between the last stage and preceding stages on low IoU thresholds without resorting to the commonly used testing ensemble but the network itself. We also observe obvious improvements on all IoU thresholds benefiting from feature sharing, and the resulting cascade structure can easily match or exceed its counterparts, only with negligible extra parameters introduced. To push the envelope, we demonstrate 43.2 AP on COCO object detection without any bells and whistles including testing ensemble, surpassing previous Cascade R-CNN by a large margin. Our framework is easy to implement and we hope it can serve as a general and strong baseline for future research.
Multi-stage object detectors have become very popular in recent years. Following the main idea of 'divide and conquer', these detectors optimize a simpler problem first and then refine the more difficult one progressively. In the field of object detection, cascades can be introduced into two components, namely the proposal generation process, usually called the 'RPN', and the classification and localization process, usually called the 'R-CNN' @cite_12 . For the former, @cite_17 @cite_22 @cite_25 propose to use a multi-stage procedure to generate accurate proposals and then refine these proposals with a single Fast R-CNN @cite_18 . For the latter, Cascade R-CNN @cite_8 is the most famous object detector among them, with increasing foreground/background thresholds selected to refine the RoIs progressively. HTC @cite_7 follows this idea and proposes to refine the features in an interleaved manner, resulting in state-of-the-art performance on the instance segmentation task. Other works like @cite_2 @cite_19 also apply the R-CNN stage several times, but their performance is far behind Cascade R-CNN @cite_8 .
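To make the role of the increasing foreground/background thresholds concrete, the sketch below (our own illustration following the Cascade R-CNN idea, with the commonly quoted thresholds 0.5/0.6/0.7 assumed) shows how the same proposal can count as a positive in an early stage yet become a negative for a later, stricter stage:

    def iou(box_a, box_b):
        """Intersection over union of two [x1, y1, x2, y2] boxes."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    # Each stage uses a stricter threshold, so a proposal labeled positive
    # in stage 1 may be treated as a negative by stage 3.
    STAGE_THRESHOLDS = [0.5, 0.6, 0.7]

    def stage_labels(proposal, gt_box):
        overlap = iou(proposal, gt_box)
        return [overlap >= t for t in STAGE_THRESHOLDS]

    print(stage_labels([0, 0, 10, 10], [1, 1, 11, 11]))  # [True, True, False], IoU ~ 0.68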
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_8", "@cite_19", "@cite_2", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "", "2420613611", "2912662889", "2964241181", "1934410531", "1932624639", "2766430333", "2102605133", "2963418361" ], "abstract": [ "", "The problem of computing category agnostic bounding box proposals is utilized as a core component in many computer vision tasks and thus has lately attracted a lot of attention. In this work we propose a new approach to tackle this problem that is based on an active strategy for generating box proposals that starts from a set of seed boxes, which are uniformly distributed on the image, and then progressively moves its attention on the promising image areas where it is more likely to discover well localized bounding box proposals. We call our approach AttractioNet and a core component of it is a CNN-based category agnostic object location refinement module that is capable of yielding accurate and robust bounding box predictions regardless of the object category. We extensively evaluate our AttractioNet approach on several image datasets (i.e. COCO, PASCAL, ImageNet detection and NYU-Depth V2 datasets) reporting on all of them state-of-the-art results that surpass the previous work in the field by a significant margin and also providing strong empirical evidence that our approach is capable to generalize to unseen categories. Furthermore, we evaluate our AttractioNet proposals in the context of the object detection task using a VGG16-Net based detector and the achieved detection performance on COCO manages to significantly surpass all other VGG16-Net based detectors while even being competitive with a heavily tuned ResNet-101 based detector. Code as well as box proposals computed for several datasets are available at:: this https URL", "Cascade is a classic yet powerful architecture that has boosted performance on various tasks. However, how to introduce cascade to instance segmentation remains an open question. A simple combination of Cascade R-CNN and Mask R-CNN only brings limited gain. In exploring a more effective approach, we find that the key to a successful instance segmentation cascade is to fully leverage the reciprocal relationship between detection and segmentation. In this work, we propose a new framework, Hybrid Task Cascade (HTC), which differs in two important aspects: (1) instead of performing cascaded refinement on these two tasks separately, it interweaves them for a joint multi-stage processing; (2) it adopts a fully convolutional branch to provide spatial context, which can help distinguishing hard foreground from cluttered background. Overall, this framework can learn more discriminative features progressively while integrating complementary features together in each stage. Without bells and whistles, a single HTC obtains 38.4 and 1.5 improvement over a strong Cascade Mask R-CNN baseline on MSCOCO dataset. Moreover, our overall system achieves 48.6 mask AP on the test-challenge split, ranking 1st in the COCO 2018 Challenge Object Detection Task. Code is available at: this https URL.", "In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. 
Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https: github.com zhaoweicai cascade-rcnn.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2 and 73.9 correspondingly, surpassing any other published work by a significant margin.", "Deep region-based object detector consists of a region proposal step and a deep object recognition step. 
In this paper, we make significant improvements on both of the two steps. For region proposal we propose a novel lightweight cascade structure which can effectively improve RPN proposal quality. For object recognition we re-implement global context modeling with a few modications and obtain a performance boost (4.2 mAP gain on the ILSVRC 2016 validation set). Besides, we apply the idea of pre-training extensively and show its importance in both steps. Together with common training and testing tricks, we improve Faster R-CNN baseline by a large margin. In particular, we obtain 87.9 mAP on the PASCAL VOC 2012 test set, 65.3 on the ILSVRC 2016 test set and 36.8 on the COCO test-std set.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Regionproposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals, in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter-and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of the-art on object detection benchmarks like PASCAL VOC 07 12 and ILSVRC." ] }
1907.11914
2965936015
We extend the state-of-the-art Cascade R-CNN with a simple feature sharing mechanism. Our approach focuses on the performance increases on high IoU but decreases on low IoU thresholds--a key problem this detector suffers from. Feature sharing is extremely helpful: our results show that, with this mechanism embedded into all stages, we can easily narrow the gap between the last stage and preceding stages on low IoU thresholds without resorting to the commonly used testing ensemble but the network itself. We also observe obvious improvements on all IoU thresholds benefiting from feature sharing, and the resulting cascade structure can easily match or exceed its counterparts, only with negligible extra parameters introduced. To push the envelope, we demonstrate 43.2 AP on COCO object detection without any bells and whistles including testing ensemble, surpassing previous Cascade R-CNN by a large margin. Our framework is easy to implement and we hope it can serve as a general and strong baseline for future research.
Feature sharing has also been adopted in many approaches. In @cite_14 , sharing features in the RPN stage improves performance, and similar results can be found in @cite_6 @cite_7 across different tasks. Different from these methods, our approach not only targets overall improvements but also narrows the gap between stages through feature sharing, without resorting to the testing ensemble commonly used in cascaded approaches, relying on the network itself instead.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_6" ], "mid": [ "2565639579", "2912662889", "2743473392" ], "abstract": [ "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "Cascade is a classic yet powerful architecture that has boosted performance on various tasks. However, how to introduce cascade to instance segmentation remains an open question. A simple combination of Cascade R-CNN and Mask R-CNN only brings limited gain. In exploring a more effective approach, we find that the key to a successful instance segmentation cascade is to fully leverage the reciprocal relationship between detection and segmentation. In this work, we propose a new framework, Hybrid Task Cascade (HTC), which differs in two important aspects: (1) instead of performing cascaded refinement on these two tasks separately, it interweaves them for a joint multi-stage processing; (2) it adopts a fully convolutional branch to provide spatial context, which can help distinguishing hard foreground from cluttered background. Overall, this framework can learn more discriminative features progressively while integrating complementary features together in each stage. Without bells and whistles, a single HTC obtains 38.4 and 1.5 improvement over a strong Cascade Mask R-CNN baseline on MSCOCO dataset. Moreover, our overall system achieves 48.6 mask AP on the test-challenge split, ranking 1st in the COCO 2018 Challenge Object Detection Task. Code is available at: this https URL.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. 
To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL" ] }
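The record above builds on cascaded detection heads in the style of Cascade R-CNN, where each stage is trained against a stricter IoU threshold than the previous one, and argues for sharing features across those stages. The sketch below only illustrates that staged-labeling idea, not the paper's implementation: the IoU thresholds follow the usual Cascade R-CNN settings, and `shared_head` is a hypothetical placeholder for whatever per-proposal transform the stages would share.

```python
import numpy as np

STAGE_IOU_THRESHOLDS = [0.5, 0.6, 0.7]  # typical Cascade R-CNN stage settings

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def assign_labels(proposals, gt_boxes, thr):
    """Label a proposal positive if its best IoU with any ground-truth box reaches `thr`."""
    return np.array([1 if max(iou(p, g) for g in gt_boxes) >= thr else 0
                     for p in proposals])

def shared_head(features):
    # Hypothetical shared per-proposal feature transform reused by every stage
    # (identity placeholder; the paper's actual sharing mechanism is not shown here).
    return features

def cascade_labels(proposals, gt_boxes):
    """Show how training labels get stricter at each cascade stage."""
    return {thr: assign_labels(proposals, gt_boxes, thr) for thr in STAGE_IOU_THRESHOLDS}

if __name__ == "__main__":
    gt = [[10, 10, 50, 50]]
    props = [[12, 12, 48, 48], [20, 10, 60, 50], [25, 10, 65, 50]]  # IoU ~0.81, 0.60, 0.45
    for thr, labels in cascade_labels(props, gt).items():
        print(f"IoU threshold {thr}: positives = {labels.tolist()}")
```

Running the demo shows the same proposal set yielding fewer positives as the threshold rises, which is the training-distribution shift that later cascade stages have to cope with.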
1907.12051
2965243075
Sparse Network Coding (SNC) has been a promising network coding scheme as an improvement over Random Linear Network Coding (RLNC) in terms of computational complexity. However, in the literature, there have been no analytical expressions for the probability of decoding a fraction of the source messages after transmission of some coded packets. In this work, we look into the probability of decoding a fraction of the source messages, i.e., partial decoding, at the decoder of a system which uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois field sizes. Moreover, to achieve a better probability of partial decoding throughout the transmission, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in the probability of decoding a fraction of the source packets with respect to traditional SNC.
The probability of decoding a fraction of the source packets in the RLNC scheme has been a major research topic, in the context of both performance @cite_22 and security @cite_2 . The authors of @cite_2 showed an upper bound on the probability of decoding a fraction of the source packets, while in @cite_22 , the authors derived exact expressions for the probability of partial decoding for RLNC. Unfortunately, none of these works can be extended to the SNC scheme. In @cite_5 , these expressions were used to study the security of RLNC in a multi-relay network. The authors of @cite_10 also found an exact expression for the probability of partial decoding in systematic RLNC. However, their analysis is only valid for the binary Galois field and also cannot be extended to SNC.
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_22", "@cite_2" ], "mid": [ "2766982895", "1882115122", "2613932151", "68293075" ], "abstract": [ "Opportunistic relaying has the potential to achieve full diversity gain, while random linear network coding (RLNC) can reduce latency and energy consumption. In recent years, there has been a growing interest in the integration of both schemes into wireless networks in order to reap their benefits, while considering security concerns. This paper considers a multi-relay network, where relay nodes employ RLNC to encode confidential data and transmit coded packets to a destination in the presence of an eavesdropper. Four relay selection protocols are studied covering a range of network capabilities, such as the availability of the eavesdropper’s channel state information or the possibility to pair the selected relay with a node that intentionally generates interference. For each case, expressions for the probability that a coded packet will not be recovered by a receiver, which can be either the destination or the eavesdropper, are derived. Based on those expressions, a framework is developed that characterizes the probability of the eavesdropper intercepting a sufficient number of coded packets and partially or fully recovering the confidential data. Simulation results confirm the validity and accuracy of the theoretical framework and unveil the security-reliability trade-offs attained by each RLNC-enabled relay selection protocol.", "We consider binary systematic network codes and investigate their capability of decoding a source message either in full or in part. We carry out a probability analysis, derive closed-form expressions for the decoding probability and show that systematic network coding outperforms conventional network coding. We also develop an algorithm based on Gaussian elimination that allows progressive decoding of source packets. Simulation results show that the proposed decoding algorithm can achieve the theoretical optimal performance. Furthermore, we demonstrate that systematic network codes equipped with the proposed algorithm are good candidates for progressive packet recovery owing to their overall decoding delay characteristics.", "In the literature, there exist analytical expressions for the probability of a receiver decoding a transmitted source message that has been encoded using random linear network coding. In this letter, we look into the probability that the receiver will decode at least a fraction of the source message, and present an exact solution to this problem for both non-systematic and systematic network coding. Based on the derived expressions, we investigate the potential of these two implementations of network coding for information-theoretic secure communication and progressive recovery of data.", "In this work we consider the problem of secure data transmission on an acyclic multicast network. A new information theoretic model for security is proposed that defines the system as secure if an eavesdropper is unable to get any “meaningful” information about the source. The “wiretap” network model by Cai and Yeung in which no information of the source is made available to the eavesdropper is a special case in the new model. We consider the case when the number of independent messages available to the eavesdropper is less than the multicast capacity of the network. We show that under the new security requirements communication is possible at the multicast capacity. 
A linear transformation is provided for networks with a given linear code to make the system secure. The transformation needs to be done only at the source and the operations at the intermediate nodes remain unchanged. We also show that if random coding scheme is used the probability of the system being secure can be made arbitrarily close to one by coding over a large enough field." ] }
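The record above is concerned with the probability that only a fraction of the source packets can be decoded under sparse network coding. As a hedged illustration rather than the paper's analysis (which derives closed-form expressions via inclusion-exclusion over general Galois fields), the sketch below simply simulates the event over GF(2): it draws a sparse binary coding matrix, reduces it, and counts which source packets are already recoverable before the matrix reaches full rank. The `sparsity` value and packet counts are arbitrary demo choices.

```python
import numpy as np

def rref_gf2(matrix):
    """Reduced row echelon form over GF(2), via Gaussian elimination."""
    m = matrix.copy() % 2
    rows, cols = m.shape
    pivot_row = 0
    for col in range(cols):
        candidates = np.nonzero(m[pivot_row:, col])[0]
        if candidates.size == 0:
            continue
        swap = pivot_row + candidates[0]
        m[[pivot_row, swap]] = m[[swap, pivot_row]]
        for r in range(rows):
            if r != pivot_row and m[r, col] == 1:
                m[r] = (m[r] + m[pivot_row]) % 2  # eliminate column from other rows
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

def partially_decoded(coding_matrix):
    """Indices of source packets recoverable from the received coded packets.

    A source packet i is decodable iff the unit vector e_i lies in the row space
    of the coding matrix, i.e. the RREF contains a row whose only nonzero entry
    is in column i.
    """
    decoded = []
    for row in rref_gf2(coding_matrix):
        nz = np.nonzero(row)[0]
        if nz.size == 1:
            decoded.append(int(nz[0]))
    return sorted(decoded)

def sparse_coding_matrix(n_coded, n_source, sparsity, rng):
    """Each coded packet includes every source packet independently with probability `sparsity`."""
    m = (rng.random((n_coded, n_source)) < sparsity).astype(np.uint8)
    m[m.sum(axis=1) == 0, 0] = 1  # avoid all-zero coding vectors
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = sparse_coding_matrix(n_coded=12, n_source=16, sparsity=0.15, rng=rng)
    idx = partially_decoded(A)
    print(f"{len(idx)} of 16 source packets decodable before full rank: {idx}")
```

Averaging this count over many random draws would give a Monte Carlo estimate of the partial decoding probability that the paper characterizes analytically.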
1907.12051
2965243075
Sparse Network Coding (SNC) has been a promising network coding scheme as an improvement over Random Linear Network Coding (RLNC) in terms of computational complexity. However, in the literature, there have been no analytical expressions for the probability of decoding a fraction of the source messages after transmission of some coded packets. In this work, we look into the probability of decoding a fraction of the source messages, i.e., partial decoding, at the decoder of a system which uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois field sizes. Moreover, to achieve a better probability of partial decoding throughout the transmission, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in the probability of decoding a fraction of the source packets with respect to traditional SNC.
Rateless codes such as LT and Raptor codes can be considered a binary implementation of SNC @cite_18 @cite_0 , and partial decoding probability has been a major research topic in the rateless codes literature. To mention a few, the authors of @cite_14 designed an algorithm for an optimal recovery rate, i.e., partial decoding probability, in LT codes. However, the results of this work are asymptotically optimal and may only be employed for an infinite number of source packets. In @cite_17 , the authors provided a probability analysis for decoding a fraction of the source packets based on the structure of the received coded packets. However, their analysis cannot provide the partial decoding probability when the exact structure of the received coded packets is unknown. The authors of @cite_9 and @cite_28 proposed algorithms to increase the probability of partial decoding for rateless codes. However, these works only increase the probability of partial decoding in specific stages of the whole transmission and also cannot be extended to non-binary Galois fields. Moreover, these algorithms are based on the coded packets currently received by the decoder and require a huge computational overhead while coded packets are being transmitted.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_28", "@cite_9", "@cite_0", "@cite_17" ], "mid": [ "", "2129824524", "2172232382", "2100207307", "", "2133349749" ], "abstract": [ "", "Fountain codes are designed so that all input symbols can be recovered from a slightly larger number of coded symbols, with high probability using an iterative decoder. In this paper we investigate the number of input symbols that can be recovered by the same decoder, but when the number of coded symbols available is less than the total number of input symbols. Of course recovery of all inputs is not possible, and the fraction that can be recovered will depend on the output degree distribution of the code. In this paper we (a) outer bound the fraction of inputs that can be recovered for any output degree distribution of the code, and (b) design degree distributions which meet perform close to this bound. Our results are of interest for real-time systems using rateless codes, and for Raptor-type two-stage designs.", "Rateless erasure resilient codes, e.g., LT codes, can recover all message symbols perfectly with high probability from a slightly greater number of encoding symbols. However, it has been observed that only a few message symbols can be recovered from a less number of encoding symbols. In this paper, we investigate the performance of rateless codes when the number of received encoding symbols is less than that of the original message symbols. Through ordered generation of encoding symbols according to their degrees, we demonstrate that a significantly improved intermediate performance of rateless codes can be obtained.", "Existing rateless codes have low intermediate symbol recovery rates (ISRR). Therefore, we first design new rateless codes with close to optimal ISRR employing genetic algorithms. Next, we assume an estimate of the channel erasure rate is available and propose an algorithm to further improve the ISRR of the designed codes.", "", "This paper provides an efficient method for analyzing the error probability of the belief propagation (BP) decoder applied to LT Codes. Each output symbol is generated independently by sampling from a distribution and adding the input symbols corresponding to the support of the sampled vector." ] }
1907.12051
2965243075
Sparse Network Coding (SNC) has been a promising network coding scheme as an improvement over Random Linear Network Coding (RLNC) in terms of computational complexity. However, in the literature, there have been no analytical expressions for the probability of decoding a fraction of the source messages after transmission of some coded packets. In this work, we look into the probability of decoding a fraction of the source messages, i.e., partial decoding, at the decoder of a system which uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois field sizes. Moreover, to achieve a better probability of partial decoding throughout the transmission, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in the probability of decoding a fraction of the source packets with respect to traditional SNC.
Another interesting line of research is Instantly Decodable Network Coding (IDNC) @cite_12 . In this scheme, the packets are sent in a way that guarantees instant decoding of at least one of the source packets. However, the analyses of the partial decoding probability of this family of codes, such as @cite_16 and @cite_24 , are only valid for the binary Galois field and require the presence of feedback in the system. Moreover, the IDNC family of codes relies heavily on feedback, whereas in our work the system is considered to be feedback-free.
{ "cite_N": [ "@cite_24", "@cite_16", "@cite_12" ], "mid": [ "2949575255", "1980811884", "" ], "abstract": [ "Current approaches to the practical implementation of network coding are batch-based, and often do not use feedback, except possibly to signal completion of a file download. In this paper, the various benefits of using feedback in a network coded system are studied. It is shown that network coding can be performed in a completely online manner, without the need for batches or generations, and that such online operation does not affect the throughput. Although these ideas are presented in a single-hop packet erasure broadcast setting, they naturally extend to more general lossy networks which employ network coding in the presence of feedback. The impact of feedback on queue size at the sender and decoding delay at the receivers is studied. Strategies for adaptive coding based on feedback are presented, with the goal of minimizing the queue size and delay. The asymptotic behavior of these metrics is characterized, in the limit of the traffic load approaching capacity. Different notions of decoding delay are considered, including an order-sensitive notion which assumes that packets are useful only when delivered in order. Our work may be viewed as a natural extension of Automatic Repeat reQuest (ARQ) schemes to coded networks.", "This paper studies the complicated interplay of the completion time (as a measure of throughput) and the decoding delay performance in instantly decodable network coded (IDNC) systems over wireless broadcast erasure channels with memory. We propose two new algorithms that enable a tradeoff for an improved balance between completion time and decoding delay of broadcasting a block of packets. We first formulate the IDNC packet selection problem that improves the balance between completion time and decoding delay as a statistical shortest path (SSP) problem. However, since finding such packet selection policy using the SSP technique is computationally complex, we employ its geometric structure to find some guidelines and use them to propose two efficient heuristic packet selection algorithms for broadcast erasure channels with a wide range of memory conditions. It is shown that each one of the two proposed algorithms is superior for a specific range of memory conditions. Furthermore, we show that the proposed algorithms achieve an improved fairness in terms of the decoding delay across all receivers.", "" ] }
1907.12051
2965243075
Sparse Network Coding (SNC) has been a promising network coding scheme as an improvement over Random Linear Network Coding (RLNC) in terms of computational complexity. However, in the literature, there have been no analytical expressions for the probability of decoding a fraction of the source messages after transmission of some coded packets. In this work, we look into the probability of decoding a fraction of the source messages, i.e., partial decoding, at the decoder of a system which uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois field sizes. Moreover, to achieve a better probability of partial decoding throughout the transmission, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in the probability of decoding a fraction of the source packets with respect to traditional SNC.
The authors of @cite_13 and @cite_19 introduced and analyzed an improvement to the SNC scheme called perpetual coding. However, this coding scheme is not completely random and uses a structured type of coding to send packets, and the analysis of this coding scheme cannot be extended to the random SNC scheme, which is the scope of this paper.
{ "cite_N": [ "@cite_19", "@cite_13" ], "mid": [ "2564688069", "1983476430" ], "abstract": [ "Perpetual codes provide a sparse, but structured coding for fast encoding and decoding. In this work, we illustrate that perpetual codes introduce linear dependent packet transmissions in the presence of an erasure channel. We demonstrate that the number of linear dependent packet transmissions is highly dependent on a parameter called the width ( ( )), which represents the number of consecutive non-zero coding coefficient present in each coded packet after a pivot element. We provide a mathematical analysis based on the width of the coding vector for the number of transmitted packets and validate it with simulation results. The simulations show that for ( = 5 ), generation size (g = 256 ), and low erasure probability on the link, a destination can receive up to (70 ) overhead in average. Moreover, increasing the width, the overhead contracts, and for ( 60 ) it becomes negligible.", "Random Linear Network Coding (RLNC) provides a theoretically efficient method for coding. The drawbacks associated with it are the complexity of the decoding and the overhead resulting from the coding vector. This adds to the overall energy consumption and is problematic for computational limited and battery driven platforms. In this work we present an approach to RLNC where the code is sparse and non-uniform. The sparsity allow for fast encoding and decoding, and the non- uniform protection of symbols enables recoding where the produced symbols are indistinguishable from those encoded at the source. The results show that the approach presented here provides a better trade- off between coding throughput and code overhead. In particular it can provide a coding overhead identical to RLNC but at significantly reduced computational complexity. It also allow for easy adjustment of this trade-off, which make it suitable for a broad range of platforms and applications. Finally it is easy to perform recoding and coding vectors can be efficiently represented." ] }
1907.11907
2966776769
Lemmatization, finding the basic morphological form of a word in a corpus, is an important step in many natural language processing tasks when working with morphologically rich languages. We describe and evaluate Nefnir, a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules, derived from a large morphological database, to lemmatize tagged text. Evaluation shows that for correctly tagged text, Nefnir obtains an accuracy of 99.55%, and for text tagged with a PoS tagger, the accuracy obtained is 96.88%.
Machine learning methods emerged to make the rule-learning process more effective, and various algorithms have been developed. These methods rely on training data, which can be a corpus of words and their lemmas or a large morphological lexicon @cite_6 . By analyzing the training data, transformation rules are formed, which can subsequently be used to find lemmas in new texts, given the word forms.
{ "cite_N": [ "@cite_6" ], "mid": [ "2141457256" ], "abstract": [ "We propose a method to automatically train lemmatization rules that handle prefix, infix and suffix changes to generate the lemma from the full form of a word. We explain how the lemmatization rules are created and how the lemmatizer works. We trained this lemmatizer on Danish, Dutch, English, German, Greek, Icelandic, Norwegian, Polish, Slovene and Swedish full form-lemma pairs respectively. We obtained significant improvements of 24 percent for Polish, 2.3 percent for Dutch, 1.5 percent for English, 1.2 percent for German and 1.0 percent for Swedish compared to plain suffix lemmatization using a suffix-only lemmatizer. Icelandic deteriorated with 1.9 percent. We also made an observation regarding the number of produced lemmatization rules as a function of the number of training pairs." ] }
1907.12108
2965617855
In this paper, we present CAiRE, an end-to-end empathetic conversation agent. Our system adapts the TransferTransfo (Wolf et al., 2019) learning approach, which fine-tunes a large-scale pre-trained language model with multi-task objectives: response language modeling, response prediction, and dialogue emotion detection. We evaluate our model on the recently proposed empathetic-dialogues dataset (Rashkin et al., 2019); the experimental results show that CAiRE achieves state-of-the-art performance on dialogue emotion detection and empathetic response generation.
Previous work @cite_12 @cite_5 @cite_0 showed that leveraging a large amount of data to learn context-sensitive features from a language model can create state-of-the-art models for a wide range of tasks. Taking this further, subsequent work deployed higher-capacity models and improved the state-of-the-art results. In this paper, we build our empathetic chatbot on a pre-trained language model and achieve state-of-the-art results on dialogue emotion detection and empathetic response generation.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_12" ], "mid": [ "2896457183", "2950813464", "2787560479" ], "abstract": [ "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).", "With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.", "We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals." ] }
1907.11866
2966500445
Wirelessly powered backscatter communication (WPBC) has been identified as a promising technology for low-power communication systems, which can reap the benefits of energy beamforming to improve energy transfer efficiency. Existing studies on energy beamforming fail to simultaneously take energy supply and information transfer in WPBC into account. This paper takes the first step to fill this gap, by considering the trade-off between the energy harvesting rate and achievable rate using estimated backscatter channel state information (BS-CSI). To ensure reliable communication and user fairness, we formulate the energy beamforming design as a max-min optimization problem by maximizing the minimum achievable rate for all tags subject to the energy constraint. We derive the closed-form expression of the energy harvesting rate, as well as the lower bound of the ergodic achievable rate. Our numerical results indicate that our scheme can significantly outperform state-of-the-art energy beamforming schemes. Additionally, the proposed scheme achieves performance comparable to that obtained via beamforming with perfect CSI.
Significant progress has recently been made on energy beamforming in wireless powered communication @cite_5 @cite_1 @cite_2 @cite_9 . To harness its benefits, many efforts have been made to enable energy beamforming in WPBC. The authors of @cite_0 focus on optimizing the transmit beamforming to maximize the sum rate of a cooperative WPBC system, and the authors of @cite_8 investigate energy beamforming in a relay WPBC system. Both of these studies assume that F-CSI is known in order to achieve energy beamforming. However, the closed-loop propagation and power-limited tags make it hard to obtain F-CSI. Instead, @cite_10 performs energy beamforming via the estimated BS-CSI to improve WET efficiency. Departing from these studies, our work considers the beamforming design for both energy supply and data transfer to ensure communication performance.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_2", "@cite_5", "@cite_10" ], "mid": [ "2791237814", "2897693927", "2963423274", "2783633736", "2888460575", "2584574674", "2964056649" ], "abstract": [ "The integration of wireless power transfer (WPT) with the low-power backscatter communications provides a promising way to sustain battery-less wireless networks. In this paper, we consider a backscatter communication network wirelessly powered by a power beacon station (PBS). Each backscatter radio uses the harvested energy to power its data transmissions, in which some other radios can help as the wireless relays with an aim to improve throughput performance by cooperative transmission. Under this setting, we formulate a throughput maximization problem to jointly optimize WPT and the relay strategy of the backscatter radios. An iterative algorithm with reduced complexity and communication overhead is proposed to decompose the original problem into two sub-problems distributed at the PBS and the backscatter receiver. Moreover, we take uncertain channel information into consideration and formulate robust counter-parts of the throughput maximization problem when either the backscatter or relay channel is subject to estimation errors. The difficulty of the robust counter-part lies in the coupling of the PBS’ power allocation and relay strategy in matrix inequalities, which is addressed by alternating optimization with guaranteed convergence. Numerical results reveal that the cooperative relay strategy of the backscatter radios significantly improves the throughput performance.", "This paper studies the beneficial combination of wireless power transfer and millimeter-wave (mmWave) communications in wireless sensor networks. In particular, an mmWave wireless powered sensor network is considered, where the access point (AP) employs beamforming techniques to transfer energy to the sensors of a selected sector of the cell. The served sensors harvest and store energy from the received signal, and use it to power their uplink transmissions. We consider a random energy beamforming scheme but also propose several intelligent schemes that steer the beam to specific areas of the cell by considering the sensors’ locations. This setup is investigated from a large-scale point-of-view where spatial randomness is considered with the aid of Poisson point processes. The performance of the network is described in terms of the energy outage probability and the beam outage probability. We show that, depending on the scenario, each of the considered energy beamforming schemes can provide significant gains to the network’s performance. Finally, we study an event monitoring application of our theoretical framework, where the active sensors observe a random event in the network and the AP attempts to estimate it based on the received information; this scenario is evaluated in terms of the estimation’s mean squared error.", "In this paper, the joint beamforming design for max-min fair simultaneous wireless information and power transfer (SWIPT) is investigated in a green cloud radio access network (Cloud-RAN) with millimeter wave (mmWave) wireless fronthaul. To achieve a balanced user experience for separately located data receivers (DRs) and energy receivers (ERs) in the network, joint transmit beamforming vectors will be optimized to maximize the minimum data rate among all the DRs, while satisfying each ER with sufficient RF energy at the same time. 
Then, a two-step iterative algorithm is proposed to solve the original non- convex optimization problem with the fronthaul capacity constraint in an @math -norm form. Specifically, the @math - norm constraint can be approximated by the reweighted @math -norm, from which the optimal max-min data rate and the corresponding joint beamforming vector can be derived via semidefinite relaxation (SDR) and bi-section search. Finally, extensive numerical simulations are performed to verify the superiority of the proposed joint beamforming design to other separate beamforming strategies.", "Ambient backscatter communication (AmBC) enables a tag to modulate its information bits over ambient RF carriers by intentionally changing its reflection coefficient, thus has emerged as a promising technique to achieve green communications for future Internet-of-Things. In this paper, we model a cooperative AmBC system from a spectrum- sharing perspective, where a cooperative receiver (C-RX) decodes the information from both a multi-antenna primary transmitter (PT) and a single-antenna secondary transmitter (i.e., tag). We consider two scenarios: first, the tag-symbol period equals the PT-symbol period; second, the tag-symbol period is an integer multiple of the PT-symbol period. For each scenario, we analyze the data rate via successive- interference-cancellation (SIC) based decoding, and formulate a problem to maximize the sum rate by optimizing the beamforming vector at the PT. The problems are transformed into semi-definite programming (SDP), and solved by using the technique of semi-definite relaxation (SDR). Furthermore, a novel transmit beamforming structure is proposed to reduce the computational complexity of beamforming optimization. Numerical results show that the cooperative AmBC system can achieve a higher sum rate than a conventional point-to-point system without a backscatter tag.", "Massive MIMO is attractive for wireless information and energy transfer due to its ability to focus energy toward desired spatial locations. In this paper, the overall power transfer efficiency (PTE) and the energy efficiency (EE) of a wirelessly powered massive MIMO system are investigated, where a multi-antenna base-station (BS) uses wireless energy transfer to charge single-antenna energy harvesting users on the downlink. The users may exploit the harvested energy to transmit information to the BS on the uplink. The overall system performance is analyzed while accounting for the nonlinear nature of practical energy harvesters. First, for wireless energy transfer, the PTE is characterized using a scalable model for the BS circuit power consumption. The PTE-optimal values for the number of BS antennas and users are derived. Then, for wireless energy and information transfer, the EE performance is characterized. The EE-optimal BS transmit power is derived in terms of the key system parameters, such as the number of BS antennas and the number of users. As the number of antennas becomes large, increasing the transmit power improves the EE for moderate to large number of antennas. Simulation results suggest that it is energy efficient to operate the system in the massive MIMO regime.", "This paper analyzes the performance of information and energy beamforming in multiple-input multiple- output (MIMO) wireless communications systems, where a self-powered multi-antenna hybrid access point (AP) coordinates wireless information and power transfer (WIPT) with an energy-constrained multi-antenna user terminal (UT). 
The wirelessly powered UT scavenges energy from the hybrid AP radio-frequency (RF) signal in the downlink (DL) using the harvest-then-transmit protocol, then uses the harvested energy to send its information to the hybrid AP in the uplink (UL). To maximize the overall signal-to-noise ratio (SNR) as well as the harvested energy so as to mitigate the severe effects of fading and enable long-distance wireless power transfer, information and energy beamforming is investigated by steering the transmitted information and energy signals along the strongest eigenmode. To this end, exact and lower-bound expressions for the outage probability and ergodic capacity are presented in closed-form, through which the throughput of the delay-constrained and delay-tolerant transmission modes are analyzed, respectively. Numerical results sustained by Monte Carlo simulations show the exactness and tightness of the proposed analytical expressions. The impact of various parameters such as energy harvesting time, hybrid AP transmit power and the number of antennas on the system throughput is also considered.", "We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy- and hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to a single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. Numerical results show that with the optimized solutions, the harvested energy suffers a slight reduction of less than 10%, compared to that obtained using the perfect F-CSI." ] }
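The WPBC record formulates energy beamforming as a max-min problem over the tags' achievable rates, with the energy harvesting rate as a constraint. A schematic statement of that problem, in notation chosen here rather than the paper's own (w is the beamforming vector, P the power budget, R_k and E_k the rate and harvested-energy functions of tag k computed from the estimated BS-CSI, and E_min the minimum energy requirement), might look as follows:

```latex
% Schematic max-min fair energy beamforming design; symbols are illustrative,
% not the paper's notation.
\begin{align*}
\max_{\mathbf{w}} \;\; & \min_{k \in \{1,\dots,K\}} \; R_k(\mathbf{w})
    && \text{(worst-tag achievable rate, from estimated BS-CSI)} \\
\text{s.t.} \;\; & E_k(\mathbf{w}) \ge E_{\min}, \quad \forall k
    && \text{(energy harvesting rate constraint per tag)} \\
& \|\mathbf{w}\|^2 \le P
    && \text{(transmit power budget at the reader)}
\end{align*}
```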
1907.11817
2954117579
Source code similarity is increasingly used in application development to identify clones, isolate bugs, and find copyright violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and the flexibility of modern languages make it difficult and cost-ineffective to manually inspect large code repositories. Therefore, detection is only feasible with automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on fingerprinting of source code control flow graphs. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code, capturing semantic as well as syntactic similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation support the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
Clones can be broadly categorized into four types based on the nature of their similarity @cite_40 @cite_42 @cite_5 @cite_48 . Type-1: clone pairs that are identical to each other with no modification to the source code. Type-2: clone pairs that only differ in literals and variable types. Type-3: renamed clone pairs with some structural modifications such as additions, deletions, and rearrangement of statements. Type-4: clone pairs that have different syntax but perform the same functionality (i.e., semantically equivalent). The latter are typically the most challenging to find and identify, yet they are the most relevant in the context of ERP systems @cite_12 @cite_23 .
{ "cite_N": [ "@cite_48", "@cite_42", "@cite_40", "@cite_23", "@cite_5", "@cite_12" ], "mid": [ "", "2041190309", "2101832700", "2119686964", "2025962632", "2042092487" ], "abstract": [ "", "Abstract Context Reusing software by means of copy and paste is a frequent activity in software development. The duplicated code is known as a software clone and the activity is known as code cloning . Software clones may lead to bug propagation and serious maintenance problems. Objective This study reports an extensive systematic literature review of software clones in general and software clone detection in particular. Method We used the standard systematic literature review method based on a comprehensive set of 213 articles from a total of 2039 articles published in 11 leading journals and 37 premier conferences and workshops. Results Existing literature about software clones is classified broadly into different categories. The importance of semantic clone detection and model based clone detection led to different classifications. Empirical evaluation of clone detection tools techniques is presented. Clone management, its benefits and cross cutting nature is reported. Number of studies pertaining to nine different types of clones is reported. Thirteen intermediate representations and 24 match detection techniques are reported. Conclusion We call for an increased awareness of the potential benefits of software clone management, and identify the need to develop semantic and model clone detection techniques. Recommendations are given for future research.", "Over the last decade many techniques and tools for software clone detection have been proposed. In this paper, we provide a qualitative comparison and evaluation of the current state-of-the-art in clone detection techniques and tools, and organize the large amount of information into a coherent conceptual framework. We begin with background concepts, a generic clone detection process and an overall taxonomy of current techniques and tools. We then classify, compare and evaluate the techniques and tools in two different dimensions. First, we classify and compare approaches based on a number of facets, each of which has a set of (possibly overlapping) attributes. Second, we qualitatively evaluate the classified techniques and tools with respect to a taxonomy of editing scenarios designed to model the creation of Type-1, Type-2, Type-3 and Type-4 clones. Finally, we provide examples of how one might use the results of this study to choose the most appropriate clone detection tool or technique in the context of a particular set of goals and constraints. The primary contributions of this paper are: (1) a schema for classifying clone detection techniques and tools and a classification of current clone detectors based on this schema, and (2) a taxonomy of editing scenarios that produce different clone types and a qualitative evaluation of current clone detectors based on this taxonomy.", "A business application automates a collection of business processes. A business process describes how a set of logically related tasks are executed, ordered and managed by following business rules to achieve business objectives. An online bookstore business application contains several tasks such as buying a book, ordering a book, and sending out promotions. Business analysts specify business tasks and software developers implement these tasks. 
Throughout the lifetime of a business application, business analysts may clone (e.g., copy and slightly modify) business processes to handle special circumstances or promotions. Identifying these clones and removing them helps improve the efficiency of an organization. However most clone detection techniques are source code based not business process based. In this paper, we propose an approach that makes use of traditional source code detection techniques to detect clones in business applications. The effectiveness of our approach is demonstrated through a case study on 10 large open source business applications in the Apache Open for Business Project.", "Many clone detection tools and techniques have been introduced in the literature, and these tools have been used to manage clones and study their effects on software maintenance and evolution. However, the performance of these modern tools is not well known, especially recall. In this paper, we evaluate and compare the recall of eleven modern clone detection tools using four benchmark frameworks, including: (1) Bellon's Framework, (2) our modification to Bellon's Framework to improve the accuracy of its clone matching metrics, (3) 's extension of Bellon's Framework which adds type 3 gap awareness to the framework, and (4) our Mutation and Injection Framework. Bellon's Framework uses a curated corpus of manually validated clones detected by tools contemporary to 2002. In contrast, our Mutation and Injection Framework synthesizes a corpus of artificial clones using a cloning taxonomy produced in 2009. While still very popular in the clone community, there is some concern that Bellon's corpus may not be accurate for modern clone detection tools. We investigate the accuracy of the frameworks by (1) checking for anomalies in their results, (2) checking for agreement between the frameworks, and (3) checking for agreement with our expectations of these tools. Our expectations are researched and flexible. While expectations may contain inaccuracies, they are valuable for identifying possible inaccuracies in a benchmark. We find anomalies in the results of Bellon's Framework, and disagreement with both our expectations and the Mutation Framework. We conclude that Bellon's Framework may not be accurate for modern tools, and that an update of its corpus with clones detected by the modern tools is warranted. The results of the Mutation Framework agree with our expectations in most cases. We suggest that it is a good solution for evaluating modern tools.", "Microsoft Dynamics NAV is a widely used enterprise resource planning system for small and medium-sized enterprises that, by design, encourages rapid customization by copy-paste programming. We report the results of analyzing clone detection for NAV using two previously published methods and one new algorithmic method: character-based sliding window sampling using Rabin-Karp hashing (MOSS), line-based sequence matching using suffix trees (CodeDup), and abstract-syntax-tree based graph sharing analysis (XMLClone). The latter is piggybacked on XMLStore, which stores XML trees as directed acyclic graphs (dags) where all isomorphic subtrees are identified and coalesced into single nodes, which can be done in linear time using multiset discrimination. This dagification discovers all well-formed Type-1 and, with suitable input normalization, Type-2 clones. 
We find that the subsequent dag analysis to discover Type-3 clones performs well on NAV source code, both in terms of computational complexity and precision. This suggests that efficient dagification and independently configurable dag interpretation may be valuable ingredients for modular clone detection." ] }
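The four clone types listed in the record above are easiest to see side by side. The fragments below are invented Python examples used purely for illustration; clone studies define the types over arbitrary languages and code granularities.

```python
# Original fragment
def total_price(items, tax):
    subtotal = sum(i.price for i in items)
    return subtotal * (1 + tax)

# Type-1: identical copy (ignoring whitespace/comments)
def total_price_copy(items, tax):
    subtotal = sum(i.price for i in items)  # same statements, same identifiers
    return subtotal * (1 + tax)

# Type-2: same structure, renamed identifiers / changed literals
def order_cost(products, vat):
    amount = sum(p.price for p in products)
    return amount * (1 + vat)

# Type-3: renamed plus added/removed/reordered statements
def order_cost_with_discount(products, vat, discount=0.0):
    amount = sum(p.price for p in products)
    amount -= discount                      # added statement
    return amount * (1 + vat)

# Type-4: different syntax, same functionality (semantic clone)
def total_price_loop(items, tax):
    subtotal = 0.0
    for item in items:
        subtotal += item.price
    return subtotal + subtotal * tax
```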
1907.11817
2954117579
Source code similarity are increasingly used in application development to identify clones, isolate bugs, and find copy-rights violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and flexibility of modern languages makes it difficult and cost ineffective to manually inspect large code repositories. Therefore, detection is only feasible by automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on source code control flow graphs fingerprinting. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code capturing semantics as well as syntax similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation supports the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
Several approaches have been proposed in the literature to identify similar source code, ranging from textual to semantic similarity identification. Generally, they are classified based on the source representations they work with. In text-based detection, the raw source code, with minimal transformation, is used to perform a pairwise comparison to identify similar source code @cite_27 . Token-based detection, on the other hand, extracts a sequence of tokens using compiler-style source code transformation @cite_25 . The sequence is then used to match tokens and identify duplicates in the repository, and the corresponding original code is returned as clones. In tree-based detection, the code is transformed into Abstract Syntax Trees (ASTs) that are then used in subtree matching algorithms to identify similar subtrees @cite_16 . Similarly, clone detection is expressed as a graph matching problem over Program Dependence Graphs (PDGs) in @cite_36 . Metrics-based detection extracts a number of metrics from the source code fragments and then compares metrics, rather than code or trees, to identify similar code @cite_37 .
{ "cite_N": [ "@cite_37", "@cite_36", "@cite_27", "@cite_16", "@cite_25" ], "mid": [ "1698439592", "2096491586", "2128698639", "2157532207", "2109943392" ], "abstract": [ "This paper presents a technique to automatically identify duplicate and near duplicate functions in a large software system. The identification technique is based on metrics extracted from the source code using the tool Datrix sup TM . This clone identification technique uses 21 function metrics grouped into four points of comparison. Each point of comparison is used to compare functions and determine their cloning level. An ordinal scale of eight cloning levels is defined. The levels range from an exact copy to distinct functions. The metrics, the thresholds and the process used are fully described. The results of applying the clone detection technique to two telecommunication monitoring systems totaling one million lines of source code are provided as examples. The information provided by this study is useful in monitoring the maintainability of large software systems.", "We present an approach to identifying similar code in programs based on finding similar subgraphs in attributed directed graphs. This approach is used on program dependence graphs and therefore considers not only the syntactic structure of programs but also the data flow within (as an abstraction of the semantics). As a result, there is no tradeoff between precision and recall; our approach is very good in both. An evaluation of our prototype implementation shows that the approach is feasible and gives very good results despite the non polynomial complexity of the problem.", "Code duplication is one of the factors that severely complicates the maintenance and evolution of large software systems. Techniques for detecting duplicated code exist but rely mostly on parsers, technology that has proven to be brittle in the face of different languages and dialects. In this paper we show that is possible to circumvent this hindrance by applying a language independent and visual approach, i.e. a tool that requires no parsing, yet is able to detect a significant amount of code duplication. We validate our approach on a number of case studies, involving four different implementation languages and ranging from 256 K up to 13 Mb of source code size.", "Existing research suggests that a considerable fraction (5-10 ) of the source code of large scale computer programs is duplicate code (\"clones\"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detection of either near misses differing only in single lexems, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. Since our methods operate in terms of the program structure, clones could be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400 K source lines, and the results confirm detected levels of duplication found by previous work. The tool produces macro bodies needed for clone removal, and macro invocations to replace the clones. The tool uses a variation of the well known compiler method for detecting common sub expressions. 
This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations.", "This paper describes how a program called dup can be used to locate instances of duplication or near-duplication in a software system. Dup reports both textually identical sections of code and sections that are the same textually except for systematic substitution of one set of variable names and constants for another. Further processing locates longer sections of code that are the same except for other small modifications. Experimental results from running dup on millions of lines from two large software systems show dup to be both effective at locating duplication and fast. Applications could include identifying sections of code that should be replaced by procedures, elimination of duplication during reengineering of the system, redocumentation to include references to copies, and debugging." ] }
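The record above walks through text-, token-, tree-, graph-, and metrics-based detection; the fingerprinting approach of the abstract itself hashes control flow graphs. As a small, hedged illustration of the general hash-and-match idea, the sketch below applies it to ASTs instead (Python's `ast` module keeps the example self-contained): structurally identical subtrees receive identical hashes, which already catches Type-1/Type-2-style duplicates. A CFG-based detector would additionally have to handle cycles and normalization, which this sketch does not attempt.

```python
import ast
import hashlib
from collections import defaultdict

def subtree_hash(node):
    """Structural hash of an AST subtree: node type plus child hashes (identifiers ignored)."""
    parts = [type(node).__name__]
    parts.extend(subtree_hash(child) for child in ast.iter_child_nodes(node))
    return hashlib.sha1("|".join(parts).encode()).hexdigest()

def find_duplicate_subtrees(source, min_size=4):
    """Group AST subtrees with identical structural hashes (candidate Type-1/Type-2 clones)."""
    tree = ast.parse(source)
    buckets = defaultdict(list)
    for node in ast.walk(tree):
        size = sum(1 for _ in ast.walk(node))
        if size >= min_size:
            buckets[subtree_hash(node)].append(node)
    return {h: ns for h, ns in buckets.items() if len(ns) > 1}

code = """
def f(xs):
    total = 0
    for x in xs:
        total += x
    return total

def g(ys):
    acc = 0
    for y in ys:
        acc += y
    return acc
"""

for h, nodes in find_duplicate_subtrees(code).items():
    kinds = [type(n).__name__ for n in nodes]
    print(f"{len(nodes)} structurally identical subtrees: {kinds}")
```

Because identifiers are ignored, the two renamed functions hash to the same fingerprint, which is exactly the kind of match a fingerprint store would retrieve when searching for similar fragments.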
1907.11817
2954117579
Source code similarity are increasingly used in application development to identify clones, isolate bugs, and find copy-rights violations. Similar code fragments can be very problematic due to the fact that errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and flexibility of modern languages makes it difficult and cost ineffective to manually inspect large code repositories. Therefore, detection is only feasible by automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on source code control flow graphs fingerprinting. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code capturing semantics as well as syntax similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation supports the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
Generally, similar code identification techniques work at varying levels of granularity. Fine-grained detection leverages tokens, statements, and lines as the basis for detection and comparison @cite_32. On the other hand, coarse-grained detection uses functions, methods, classes, or program files as the basic units of detection @cite_40. Naturally, the finer the granularity of the tool, the longer it takes to find clone candidates. Equally, the coarser the granularity of the tool, the faster detection runs, albeit with fewer detected clones @cite_4. Detection tools therefore have to make design trade-offs between accuracy and performance on an almost constant basis, depending on the code base being examined.
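To make the granularity trade-off concrete, the following minimal Python sketch (illustrative only; the crude tokenizer, the n-gram size, and the example functions are assumptions, not taken from the cited tools) contrasts a coarse-grained whole-function comparison with a fine-grained token n-gram similarity:

```python
# Minimal sketch (not from the cited tools): contrasting fine-grained
# (token n-gram) and coarse-grained (whole-function) clone comparison.
import re

def tokens(code: str):
    # Crude lexer for illustration only: identifiers, numbers, single symbols.
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", code)

def ngrams(seq, n=3):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def fine_grained_similarity(code_a: str, code_b: str, n=3) -> float:
    """Jaccard similarity over token n-grams: slower, but finds partial clones."""
    a, b = ngrams(tokens(code_a), n), ngrams(tokens(code_b), n)
    return len(a & b) / len(a | b) if a | b else 1.0

def coarse_grained_match(func_a: str, func_b: str) -> bool:
    """Whole-unit comparison: fast, but only flags near-identical functions."""
    return tokens(func_a) == tokens(func_b)

f1 = "def add(a, b):\n    return a + b"
f2 = "def add(x, y):\n    return x + y"
print(coarse_grained_match(f1, f2))               # False: identifiers differ
print(round(fine_grained_similarity(f1, f2), 2))  # small but nonzero overlap survives renaming
```

The coarse comparison rejects the renamed copy outright, while the n-gram overlap still reports partial similarity, at the cost of more work per fragment pair.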
{ "cite_N": [ "@cite_40", "@cite_4", "@cite_32" ], "mid": [ "2101832700", "2547865220", "2138756793" ], "abstract": [ "Over the last decade many techniques and tools for software clone detection have been proposed. In this paper, we provide a qualitative comparison and evaluation of the current state-of-the-art in clone detection techniques and tools, and organize the large amount of information into a coherent conceptual framework. We begin with background concepts, a generic clone detection process and an overall taxonomy of current techniques and tools. We then classify, compare and evaluate the techniques and tools in two different dimensions. First, we classify and compare approaches based on a number of facets, each of which has a set of (possibly overlapping) attributes. Second, we qualitatively evaluate the classified techniques and tools with respect to a taxonomy of editing scenarios designed to model the creation of Type-1, Type-2, Type-3 and Type-4 clones. Finally, we provide examples of how one might use the results of this study to choose the most appropriate clone detection tool or technique in the context of a particular set of goals and constraints. The primary contributions of this paper are: (1) a schema for classifying clone detection techniques and tools and a classification of current clone detectors based on this schema, and (2) a taxonomy of editing scenarios that produce different clone types and a qualitative evaluation of current clone detectors based on this taxonomy.", "If two fragments of source code are identical to each other, they are called code clones. Code clones introduce difficulties in software maintenance and cause bug propagation. Coarse-grained clone detectors have higher precision than fine-grained, but fine-grained detectors have higher recall than coarse-grained. In this paper, we present a hybrid clone detection technique that first uses a coarse-grained technique to analyze clones effectively to improve precision. Subsequently, we use a fine-grained detector to obtain additional information about the clones and to improve recall. Our method detects Type-1 and Type-2 clones using hash values for blocks, and gapped code clones (Type-3) using block detection and subsequent comparison between them using Levenshtein distance and Cosine measures with varying thresholds.", "A code clone is a code portion in source files that is identical or similar to another. Since code clones are believed to reduce the maintainability of software, several code clone detection techniques and tools have been proposed. This paper proposes a new clone detection technique, which consists of the transformation of input source text and a token-by-token comparison. For its implementation with several useful optimization techniques, we have developed a tool, named CCFinder (Code Clone Finder), which extracts code clones in C, C++, Java, COBOL and other source files. In addition, metrics for the code clones have been developed. In order to evaluate the usefulness of CCFinder and metrics, we conducted several case studies where we applied the new tool to the source code of JDK, FreeBSD, NetBSD, Linux, and many other systems. As a result, CCFinder has effectively found clones and the metrics have been able to effectively identify the characteristics of the systems. In addition, we have compared the proposed technique with other clone detection techniques." ] }
1907.11817
2954117579
Source code similarity is increasingly used in application development to identify clones, isolate bugs, and find copyright violations. Similar code fragments can be very problematic because errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must also be applied multiple times. Furthermore, the diversity of coding styles and the flexibility of modern languages make it difficult and cost-ineffective to manually inspect large code repositories. Therefore, detection is only feasible with automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on fingerprinting of source code control flow graphs. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code, capturing semantic as well as syntactic similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation support the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
Another challenge in finding duplicate code is the performance of querying and retrieving possible matches from a large code base. Fingerprinting and hashing have been used to improve search efficiency @cite_44. Hashing maps variable-size source code to a fixed-size fingerprint that can later be used to query and search for clones in linear time @cite_14. However, a simple exact match does not work well for inexact clones. Others @cite_12 @cite_38 use hashing techniques to group similar source code fragments together, thus enhancing the accuracy and performance of clone detection techniques. However, this is less effective in detecting Type 4 clones, as hashes and fingerprints are based on the source code text and not its semantics. Machine learning approaches have been proposed @cite_31 to link lexical-level features with syntactic-level features using semantic encoding techniques @cite_28 to improve Type 4 clone detection. However, for them to be effective, human experts need to analyze source code repositories to define the features that are most relevant for clone detection.
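As an illustration of the fingerprint-based lookup idea (a minimal sketch under assumed normalization rules, not the implementation of any cited tool), the fragment below normalizes identifiers and literals, hashes the token stream, and indexes fragments so that Type-1/Type-2 candidates are retrieved in constant time; because the hash is computed from the code text rather than its behavior, it also shows why such fingerprints miss Type-4 clones:

```python
# Minimal sketch (illustrative only): fingerprint index for Type-1/Type-2
# clone candidates. Identifiers are renamed to "ID" and literals to "NUM" so
# that renamed copies hash to the same fingerprint; program semantics is NOT
# captured, which is why such indexes struggle with Type-4 clones.
import hashlib
import re
from collections import defaultdict

KEYWORDS = {"def", "return", "if", "else", "for", "while", "in"}

def normalize(code: str):
    out = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", code):
        if tok[0].isalpha() or tok[0] == "_":
            out.append(tok if tok in KEYWORDS else "ID")   # rename identifiers
        elif tok[0].isdigit():
            out.append("NUM")                               # abstract literals
        else:
            out.append(tok)                                  # keep operators/punctuation
    return out

def fingerprint(code: str) -> str:
    return hashlib.sha1(" ".join(normalize(code)).encode()).hexdigest()

index = defaultdict(list)                 # fingerprint -> fragment locations

def add_fragment(location: str, code: str):
    index[fingerprint(code)].append(location)

def clone_candidates(code: str):
    return index.get(fingerprint(code), [])

add_fragment("a.py:10", "def add(a, b):\n    return a + b")
print(clone_candidates("def plus(x, y):\n    return x + y"))  # ['a.py:10']
```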
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_28", "@cite_44", "@cite_31", "@cite_12" ], "mid": [ "2018986336", "1996676702", "2809551588", "", "2511803001", "2042092487" ], "abstract": [ "Clone detection techniques essentially cluster textually, syntactically and or semantically similar code fragments in or across software systems. For large datasets, similarity identification is costly both in terms of time and memory, and especially so when detecting near-miss clones where lines could be modified, added and or deleted in the copied fragments. The capability and effectiveness of a clone detection tool mostly depends on the code similarity measurement technique it uses. A variety of similarity measurement approaches have been used for clone detection, including fingerprint based approaches, which have had varying degrees of success notwithstanding some limitations. In this paper, we investigate the effectiveness of simhash, a state of the art fingerprint based data similarity measurement technique for detecting both exact and near-miss clones in large scale software systems. Our experimental data show that simhash is indeed effective in identifying various types of clones in a software system despite wide variations in experimental circumstances. The approach is also suitable as a core capability for building other tools, such as tools for: incremental clone detection, code searching, and clone management.", "There is much research on the use of tokenized source code to find code clones both within and between trees of source code. Some approaches have used suffix trees [1], [3]; others have used variations of longest common substring algorithms [4], [5]. This paper outlines an algorithm, embodied in a new tool called ctcompare, that takes a different tokenization approach. Each code base to be compared is first lexically analysed to produce a sequence of tokens. These are then broken into overlapping tuples of N consecutive tokens. The tuples are then hashed and the hash values of token tuples are used to identify type-1 and type-2 clone pairs. Hashed token sequences combined with a database have already been used in earlier ctcompare versions and elsewhere [2], but with a significant performance penalty due to database insertions. The benefits of this approach over the existing research include the simultaneous comparison of multiple large code bases and fast absolute performance.", "Sparse topic models (STMs) are widely used for learning a semantically rich latent sparse representation of short texts in large scale, mainly by imposing sparse priors or appropriate regularizers on topic models. However, it is difficult for these STMs to model the sparse structure and pattern of the corpora accurately, since their sparse priors always fail to achieve real sparseness, and their regularizers bypass the prior information of the relevance between sparse coefficients. In this paper, we propose a novel Bayesian hierarchical topic models called Bayesian Sparse Topical Coding with Poisson Distribution (BSTC-P) on the basis of Sparse Topical Coding with Sparse Groups (STCSG). Different from traditional STMs, it focuses on imposing hierarchical sparse prior to leverage the prior information of relevance between sparse coefficients. Furthermore, we propose a sparsity-enhanced BSTC, Bayesian Sparse Topical Coding with Normal Distribution (BSTC-N), via mathematic approximation. We adopt superior hierarchical sparse inducing prior, with the purpose of achieving the sparsest optimal solution. 
Experimental results on datasets of Newsgroups and Twitter show that both BSTC-P and BSTC-N have better performance on finding clear latent semantic representations. Therefore, they yield better performance than existing works on document classification tasks.", "", "Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93 of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.", "Microsoft Dynamics NAV is a widely used enterprise resource planning system for small and medium-sized enterprises that, by design, encourages rapid customization by copy-paste programming. We report the results of analyzing clone detection for NAV using two previously published methods and one new algorithmic method: character-based sliding window sampling using Rabin-Karp hashing (MOSS), line-based sequence matching using suffix trees (CodeDup), and abstract-syntax-tree based graph sharing analysis (XMLClone). The latter is piggybacked on XMLStore, which stores XML trees as directed acyclic graphs (dags) where all isomorphic subtrees are identified and coalesced into single nodes, which can be done in linear time using multiset discrimination. This dagification discovers all well-formed Type-1 and, with suitable input normalization, Type-2 clones. We find that the subsequent dag analysis to discover Type-3 clones performs well on NAV source code, both in terms of computational complexity and precision. This suggests that efficient dagification and independently configurable dag interpretation may be valuable ingredients for modular clone detection." ] }
1907.11817
2954117579
Source code similarity is increasingly used in application development to identify clones, isolate bugs, and find copyright violations. Similar code fragments can be very problematic because errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must also be applied multiple times. Furthermore, the diversity of coding styles and the flexibility of modern languages make it difficult and cost-ineffective to manually inspect large code repositories. Therefore, detection is only feasible with automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on fingerprinting of source code control flow graphs. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code, capturing semantic as well as syntactic similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation support the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
One way to capture program semantics is through Control Flow Graphs (CFGs). CFGs are an intermediate code representation that describes, in graph notation, all paths that might be followed through a piece of code during its execution @cite_34. In CFGs, vertices represent basic blocks and edges (i.e., arcs) represent execution flow. Since CFGs capture syntactic and semantic features of the code, they are better at resisting changes that manipulate the source code in very minor ways without affecting the functionality of the program. For this reason, control flow graphs have been used in static analysis @cite_30, fuzzing and test coverage tools @cite_8, execution profiling @cite_33 @cite_24, binary code analysis @cite_7, malware analysis @cite_26, and anomaly analysis @cite_41.
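A minimal sketch of CFG fingerprinting in this spirit is shown below (an illustration with an assumed graph encoding and a simple neighbourhood-hashing scheme, not the exact algorithm of the paper): the fingerprint depends on block labels and control-flow structure rather than on node numbering, so trivially relabeled graphs hash to the same value.

```python
# Minimal sketch (not the paper's exact method): fingerprint a control flow
# graph given as {block_id: (label, [successor_ids])}. A few rounds of
# neighbourhood hashing make the fingerprint depend on structure and block
# labels, not on the particular node ids chosen by the front end.
import hashlib

def _h(s: str) -> str:
    return hashlib.sha1(s.encode()).hexdigest()[:12]

def cfg_fingerprint(cfg: dict, rounds: int = 3) -> str:
    labels = {n: _h(label) for n, (label, _) in cfg.items()}
    for _ in range(rounds):
        labels = {
            n: _h(labels[n] + "|" + "".join(sorted(labels[s] for s in succs)))
            for n, (_, succs) in cfg.items()
        }
    return _h("".join(sorted(labels.values())))

# Two CFGs with identical shape and block labels but different node ids.
cfg_a = {0: ("entry", [1, 2]), 1: ("then", [3]), 2: ("else", [3]), 3: ("exit", [])}
cfg_b = {9: ("entry", [7, 5]), 7: ("then", [1]), 5: ("else", [1]), 1: ("exit", [])}
print(cfg_fingerprint(cfg_a) == cfg_fingerprint(cfg_b))  # True
```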
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_33", "@cite_7", "@cite_8", "@cite_41", "@cite_24", "@cite_34" ], "mid": [ "2127637733", "", "2101134669", "2001114671", "2128006558", "2508465325", "2115971347", "2117426803" ], "abstract": [ "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7]. The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use? In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships.", "", "A path profile determines how many times each acyclic path in a routine executes. This type of profiling subsumes the more common basic block and edge profiling, which only approximate path frequencies. Path profiles have many potential uses in program performance tuning, profile-directed compilation, and software test coverage. This paper describes a new algorithm for path profiling. This simple, fast algorithm selects and places profile instrumentation to minimize run-time overhead. Instrumented programs run with overhead comparable to the best previous profiling techniques. On the SPEC95 benchmarks, path profiling overhead averaged 31 , as compared to 16 for efficient edge profiling. Path profiling also identifies longer paths than a previous technique, which predicted paths from edge profiles (average of 88, versus 34 instructions). Moreover, profiling shows that the SPEC95 train input datasets covered most of the paths executed in the ref datasets.", "In this paper, we present an approach to comparing control flow graphs of binary programs by matching their basic blocks. We first set up an initial match and propagate it to reach a stable state. We consider the matched pairs to identify overall similarities. To evaluate the proposed method, we perform experiments on real-world Java applications, and compare their performance with previous structural matching method. In the experimental results, the proposed method shows more reliable results than previous method at distinguishing similar control flow graphs.", "We present an extension of traditional \"black box\" fuzz testing using a genetic algorithm based upon a dynamic Markov model fitness heuristic. 
This heuristic allows us to \"intelligently\" guide input selection based upon feedback concerning the \"success\" of past inputs that have been tried. Unlike many software testing tools, our implementation is strictly based upon binary code and does not require that source code be available. Our evaluation on a Windows server program shows that this approach is superior to random black box fuzzing for increasing code coverage and depth of penetration into program control flow logic. As a result, the technique may be beneficial to the development of future automated vulnerability analysis tools.", "We focus on the problem of detecting anomalous run-time behavior of distributed applications from their execution logs. Specifically we mine templates and template sequences from logs to form a control flow graph (cfg) spanning distributed components. This cfg represents the baseline healthy system state and is used to flag deviations from the expected behavior of runtime logs. The novelty in our work stems from the new techniques employed to: (1) overcome the instrumentation requirements or application specific assumptions made in prior log mining approaches, (2) improve the accuracy of mined templates and the cfg in the presence of long parameters and high amount of interleaving respectively, and (3) improve by orders of magnitude the scalability of the cfg mining process in terms of volume of log data that can be processed per day. We evaluate our approach using (a) synthetic log traces and (b) multiple real-world log datasets collected at different layers of application stack. Results demonstrate that our template mining, cfg mining, and anomaly detection algorithms have high accuracy. The distributed implementation of our pipeline is highly scalable and has more than 500 GB day of log data processing capability even on a 10 low-end VM based (Spark + Hadoop) cluster. We also demonstrate the efficacy of our end-to-end system using a case study with the Openstack VM provisioning system.", "Program profiles identify frequently executed portions of a program, which are the places at which optimizations offer programmers and compilers the greatest benefit. Compilers, however, infrequently exploit program profiles, because, profiling a program requires a programmer to instrument and run the program. An attractive alternative is for the complier to statically estimate program profiles. This paper presents several new techniques for static branch prediction and profiling. The first technique combines multiple predictions of a branch's outcome into a prediction of the probability that the branch is taken. Another technique uses these predictions to estimate the relative execution frequency (i.e., profile) of basic blocks and control-flow edges within a procedure. A third algorithm uses local frequency estimates to predict the global frequency of calls, procedure invocations, and basic block and control-flow edge executions. Experiments on the SPEC92 integer benchmarks and Unix applications show that the frequently executed blocks, edges, and functions identified by our techniques closely match those in a dynamic profile.", "A large number of call graph construction algorithms for object-oriented and functional languages have been proposed, each embodying different tradeoffs between analysis cost and call graph precision. In this article we present a unifying framework for understanding call graph construction algorithms and an empirical comparison of a representative set of algorithms. 
We first present a general parameterized algorithm that encompasses many well-known and novel call graph construction algorithms. We have implemented this general algorithm in the Vortex compiler infrastructure, a mature, multilanguage, optimizing compiler. The Vortex implementation provides a \"level playing field\" for meaningful cross-algorithm performance comparisons. The costs and benefits of a number of call graph construction algorithms are empirically assessed by applying their Vortex implementation to a suite of sizeable (5,000 to 50,000 lines of code) Cecil and Java programs. For many of these applications, interprocedural analysis enabled substantial speed-ups over an already highly optimized baseline. Furthermore, a significant fraction of these speed-ups can be obtained through the use of a scalable, near-linear time call graph construction algorithm." ] }
1907.11845
2964526184
Generative adversarial networks (GANs) have proven hugely successful in a variety of image processing applications. However, generative adversarial networks for handwriting are relatively rare, largely because of the difficulty of handling sequential handwriting data with a Convolutional Neural Network (CNN). In this paper, we propose a handwriting generative adversarial network framework (HWGANs) for synthesizing handwritten stroke data. The main features of the new framework include: (i) a discriminator consisting of integrated CNN-Long Short-Term Memory (LSTM) based feature extraction with Path Signature Features (PSF) as input and a Feedforward Neural Network (FNN) based binary classifier; (ii) a recurrent latent variable model as generator for synthesizing sequential handwritten data. The numerical experiments show the effectiveness of the new model. Moreover, compared with a standalone handwriting generator, the HWGANs synthesize more natural and realistic handwritten text.
The GANs proposed in 2019 @cite_1 aim at generating realistic images of handwritten text, which is naturally a fit for Optical Character Recognition (OCR). The authors use bidirectional LSTM recurrent layers to obtain an embedding of the word to be rendered, and then feed it to the generator network. They also modify the standard GAN by adding an auxiliary network for text recognition. However, since this approach cannot directly synthesize handwritten text as digital ink, its generated images, although realistic, still require an additional effective Ink Grab algorithm to convert them into digital strokes.
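The training signal described above can be sketched as follows (a toy PyTorch illustration with stand-in linear modules; the module shapes, charset size, and label lengths are assumptions and do not reproduce the cited architecture): the generator receives both an adversarial term and a CTC term from the auxiliary recognizer, which is what keeps the synthesized word images readable.

```python
# Toy sketch of combining an adversarial loss with an auxiliary CTC
# recognition loss to supervise a word-image generator.
import torch
import torch.nn as nn

B, T, C = 4, 16, 27                      # batch, recognizer time steps, charset (26 letters + blank)
G = nn.Sequential(nn.Linear(64, 32 * 128), nn.Tanh())   # [noise; word embedding] -> flat "image"
D = nn.Sequential(nn.Linear(32 * 128, 1))                # real/fake score
R = nn.Sequential(nn.Linear(32 * 128, T * C))            # auxiliary recognizer -> per-step logits

z = torch.randn(B, 64)                   # stand-in for noise concatenated with the word embedding
fake = G(z)

adv_loss = nn.functional.binary_cross_entropy_with_logits(D(fake), torch.ones(B, 1))
log_probs = R(fake).view(B, T, C).log_softmax(-1).transpose(0, 1)   # (T, B, C) layout for CTC
targets = torch.randint(1, C, (B, 5))                                # toy 5-character labels
ctc_loss = nn.CTCLoss(blank=0)(log_probs, targets,
                               input_lengths=torch.full((B,), T),
                               target_lengths=torch.full((B,), 5))
gen_loss = adv_loss + ctc_loss           # a balanced combination drives the generator update
print(float(gen_loss))
```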
{ "cite_N": [ "@cite_1" ], "mid": [ "2920553990" ], "abstract": [ "State-of-the-art offline handwriting text recognition systems tend to use neural networks and therefore require a large amount of annotated data to be trained. In order to partially satisfy this requirement, we propose a system based on Generative Adversarial Networks (GAN) to produce synthetic images of handwritten words. We use bidirectional LSTM recurrent layers to get an embedding of the word to be rendered, and we feed it to the generator network. We also modify the standard GAN by adding an auxiliary network for text recognition. The system is then trained with a balanced combination of an adversarial loss and a CTC loss. Together, these extensions to GAN enable to control the textual content of the generated word images. We obtain realistic images on both French and Arabic datasets, and we show that integrating these synthetic images into the existing training data of a text recognition system can slightly enhance its performance." ] }
1907.11845
2964526184
Generative adversarial networks (GANs) have proven hugely successful in a variety of image processing applications. However, generative adversarial networks for handwriting are relatively rare, largely because of the difficulty of handling sequential handwriting data with a Convolutional Neural Network (CNN). In this paper, we propose a handwriting generative adversarial network framework (HWGANs) for synthesizing handwritten stroke data. The main features of the new framework include: (i) a discriminator consisting of integrated CNN-Long Short-Term Memory (LSTM) based feature extraction with Path Signature Features (PSF) as input and a Feedforward Neural Network (FNN) based binary classifier; (ii) a recurrent latent variable model as generator for synthesizing sequential handwritten data. The numerical experiments show the effectiveness of the new model. Moreover, compared with a standalone handwriting generator, the HWGANs synthesize more natural and realistic handwritten text.
Alex Graves proposed an RNN-based generator model to mimic handwriting data @cite_12, which is referred to throughout the whole paper. At each timestamp, the model encodes the prefix of the sampled path to produce a set of parameters of a probability distribution over the next stroke point, and then samples the next stroke point from this distribution. There are two variants of the model, i.e., a handwriting predictor and a handwriting synthesizer, where the latter has the capability to synthesize handwritten text for a given input text.
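A minimal PyTorch sketch of this autoregressive sampling loop is given below; for brevity it predicts a single diagonal Gaussian over the next offset plus a pen-lift probability instead of the full mixture-density output of the cited model, and the network is untrained, so it only illustrates the mechanics of encode-prefix, predict-distribution, sample-next-point.

```python
# Minimal sketch of Graves-style autoregressive stroke sampling (simplified:
# one diagonal Gaussian + a Bernoulli pen-lift instead of a full mixture).
import torch
import torch.nn as nn

class StrokeSampler(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.lstm = nn.LSTMCell(3, hidden)       # input per step: (dx, dy, pen_up)
        self.head = nn.Linear(hidden, 5)         # mu_x, mu_y, log_sx, log_sy, pen_logit

    @torch.no_grad()
    def sample(self, steps=50):
        h = torch.zeros(1, self.lstm.hidden_size)
        c = torch.zeros(1, self.lstm.hidden_size)
        point = torch.zeros(1, 3)                # start at the origin, pen down
        stroke = []
        for _ in range(steps):
            h, c = self.lstm(point, (h, c))      # encode the prefix of the path
            mu_x, mu_y, log_sx, log_sy, pen_logit = self.head(h).squeeze(0)
            dx = torch.normal(mu_x, log_sx.exp())            # sample next offset
            dy = torch.normal(mu_y, log_sy.exp())
            pen = torch.bernoulli(torch.sigmoid(pen_logit))  # sample pen lift
            point = torch.stack([dx, dy, pen]).unsqueeze(0)  # feed the sample back in
            stroke.append(point.squeeze(0).tolist())
        return stroke

print(len(StrokeSampler().sample(steps=5)))      # 5 sampled stroke points
```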
{ "cite_N": [ "@cite_12" ], "mid": [ "1810943226" ], "abstract": [ "This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles." ] }
1907.11836
2966726302
Massive multiple-input multiple-output (MIMO) with frequency division duplex (FDD) mode is a promising approach to increasing system capacity and link robustness for fifth generation (5G) wireless cellular systems. The premise of these advantages is accurate downlink channel state information (CSI) fed back from the user equipment. However, conventional feedback methods have difficulty reducing the feedback overhead due to the significant number of base station (BS) antennas in massive MIMO systems. Recently, deep learning (DL)-based CSI feedback has overcome many of these difficulties, yet it still falls short of reducing the occupation of uplink bandwidth resources. In this paper, to solve this issue, we combine DL and superimposed coding (SC) for CSI feedback, in which the downlink CSI is spread and then superimposed on uplink user data sequences (UL-US) toward the BS. Then, a multi-task neural network (NN) architecture is proposed at the BS to recover the downlink CSI and UL-US by unfolding two iterations of minimum mean-squared error (MMSE) criterion-based interference reduction. In addition, for network training, a subnet-by-subnet approach is exploited to facilitate parameter tuning and expedite the convergence rate. Compared with the standalone SC-based CSI scheme, our multi-task NN, trained at a specific signal-to-noise ratio (SNR) and power proportional coefficient (PPC), consistently improves the estimation of downlink CSI with similar or better UL-US detection under varying SNR and PPC.
Without any occupation of uplink bandwidth resources, the works in @cite_31 and @cite_6 estimated downlink CSI from uplink CSI using a DL approach. In @cite_31, the core idea was that, since the same propagation environment is shared by the uplink and downlink channels, environment information extracted from the uplink channel response can be applied to the downlink channel. Similar to @cite_31, an NN-based scheme for extrapolating downlink CSI from observed uplink CSI was proposed in @cite_6, where the underlying physical relation between the downlink and uplink frequency bands is exploited to construct the learning architecture. It should be mentioned that the method in @cite_31 usually needs to retrain the NN when the environment information changes significantly. For example, for a well-trained device, the environment information extracted in one city (e.g., the shapes of buildings, streets, and mountains, and the materials objects are made of) would no longer be applicable in another. The method in @cite_6 suffers poor CSI recovery performance when there is a wide band interval between the downlink and uplink frequency bands.
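The extrapolation idea can be sketched as a straightforward regression problem, as below (a toy PyTorch illustration on synthetic correlated channels; the network size, subcarrier count, and data generator are assumptions and do not reproduce the cited architectures): a small network is trained to map observed uplink CSI to the corresponding downlink CSI.

```python
# Toy sketch of uplink-to-downlink CSI extrapolation as supervised regression.
# Real and imaginary parts are stacked; the "channel" pairs are synthetic.
import torch
import torch.nn as nn

n_sub = 32                                   # subcarriers per band (assumed)
net = nn.Sequential(nn.Linear(2 * n_sub, 256), nn.ReLU(),
                    nn.Linear(256, 2 * n_sub))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def synthetic_pair(batch=64):
    # Toy correlated channels standing in for measured UL/DL responses.
    shared = torch.randn(batch, 2 * n_sub)
    ul = shared + 0.1 * torch.randn_like(shared)
    dl = shared.roll(1, dims=1) + 0.1 * torch.randn_like(shared)
    return ul, dl

for step in range(200):
    ul, dl = synthetic_pair()
    loss = nn.functional.mse_loss(net(ul), dl)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final MSE on the toy channels: {loss.item():.4f}")
```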
{ "cite_N": [ "@cite_31", "@cite_6" ], "mid": [ "2904192264", "2908955555" ], "abstract": [ "Knowledge of the channel state information (CSI) at the transmitter side is one of the primary sources of information that can be used for efficient allocation of wireless resources. Obtaining Down-Link (DL) CSI in FDD systems from Up-Link (UL) CSI is not as straightforward as TDD systems, and so usually users feedback the DL-CSI to the transmitter. To remove the need for feedback (and thus having less signaling overhead), several methods have been studied to estimate DL-CSI from UL-CSI. In this paper, we propose a scheme to infer DL-CSI by observing UL-CSI in which we use two recent deep neural network structures: a) Convolutional Neural network and b) Generative Adversarial Networks. The proposed deep network structures are first learning a latent model of the environment from the training data. Then, the resulted latent model is used to predict the DL-CSI from the UL-CSI. We have simulated the proposed scheme and evaluated its performance in a few network settings.", "A major obstacle for widespread deployment of frequency division duplex (FDD)-based Massive multiple-input multiple-output (MIMO) communications is the large signaling overhead for reporting full downlink (DL) channel state information (CSI) back to the basestation (BS), in order to enable closed-loop precoding. We completely remove this overhead by a deep-learning based channel extrapolation (or \"prediction\") approach and demonstrate that a neural network (NN) at the BS can infer the DL CSI centered around a frequency @math by solely observing uplink (UL) CSI on a different, yet adjacent frequency band around @math ; no more pilot reporting overhead is needed than with a genuine time division duplex (TDD)-based system. The rationale is that scatterers and the large-scale propagation environment are sufficiently similar to allow a NN to learn about the physical connections and constraints between two neighboring frequency bands, and thus provide a well-operating system even when classic extrapolation methods, like the Wiener filter (used as a baseline for comparison throughout) fails. We study its performance for various state-of-the-art Massive MIMO channel models, and, even more so, evaluate the scheme using actual Massive MIMO channel measurements, rendering it to be practically feasible at negligible loss in spectral efficiency when compared to a genuine TDD-based system." ] }
1907.11836
2966726302
Massive multiple-input multiple-output (MIMO) with frequency division duplex (FDD) mode is a promising approach to increasing system capacity and link robustness for fifth generation (5G) wireless cellular systems. The premise of these advantages is accurate downlink channel state information (CSI) fed back from the user equipment. However, conventional feedback methods have difficulty reducing the feedback overhead due to the significant number of base station (BS) antennas in massive MIMO systems. Recently, deep learning (DL)-based CSI feedback has overcome many of these difficulties, yet it still falls short of reducing the occupation of uplink bandwidth resources. In this paper, to solve this issue, we combine DL and superimposed coding (SC) for CSI feedback, in which the downlink CSI is spread and then superimposed on uplink user data sequences (UL-US) toward the BS. Then, a multi-task neural network (NN) architecture is proposed at the BS to recover the downlink CSI and UL-US by unfolding two iterations of minimum mean-squared error (MMSE) criterion-based interference reduction. In addition, for network training, a subnet-by-subnet approach is exploited to facilitate parameter tuning and expedite the convergence rate. Compared with the standalone SC-based CSI scheme, our multi-task NN, trained at a specific signal-to-noise ratio (SNR) and power proportional coefficient (PPC), consistently improves the estimation of downlink CSI with similar or better UL-US detection under varying SNR and PPC.
As a whole, the DL-based and SC-based CSI feedback methods still face huge challenges, which can be summarized as follows. Concentrating on feedback reduction, the DL-based CSI feedback methods, e.g., the methods in @cite_34 -- @cite_10, inevitably occupy uplink bandwidth resources. Although the occupation of uplink bandwidth resources can be avoided, the methods that estimate downlink CSI from uplink CSI in @cite_31 and @cite_6 are usually limited in mobile environments or in environments with a wide frequency-band interval. The SC-based CSI feedback @cite_21 can also avoid the occupation of uplink bandwidth resources, but it faces the huge challenge of cancelling the interference between the downlink CSI and the UL-US, for which previous works lack good solutions.
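To make the SC-based feedback and its interference problem concrete, the following numpy sketch (illustrative only; the spreading length, power proportional coefficient, and the omission of the wireless channel and noise are simplifying assumptions) spreads the downlink CSI, superimposes it on uplink QPSK symbols, and despreads at the receiver, leaving exactly the CSI-data interference that the proposed multi-task network is designed to remove.

```python
# Toy sketch of superimposed coding (SC) for CSI feedback: the CSI is spread
# by pseudo-random codes, weighted by a power proportional coefficient rho,
# and added on top of the uplink user symbols; despreading at the receiver
# recovers a coarse CSI estimate corrupted by the data interference.
import numpy as np

rng = np.random.default_rng(0)
N, L = 8, 64                       # CSI length and spreading-code length (assumed)
rho = 0.2                          # fraction of transmit power given to the CSI

csi = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N * L) / np.sqrt(2)  # QPSK UL-US
codes = rng.choice([-1.0, 1.0], size=(N, L)) / np.sqrt(L)   # one spreading code per CSI symbol

spread_csi = (csi[:, None] * codes).reshape(-1)              # length N*L
tx = np.sqrt(rho) * spread_csi + np.sqrt(1 - rho) * data     # superimposed uplink signal

# Despreading at the BS (the wireless channel and noise are ignored here).
rx = tx.reshape(N, L)
csi_hat = (rx * codes).sum(axis=1) / np.sqrt(rho)
print(np.abs(csi - csi_hat).mean())   # residual error caused by the data interference
```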
{ "cite_N": [ "@cite_21", "@cite_6", "@cite_31", "@cite_34", "@cite_10" ], "mid": [ "2109711397", "2908955555", "2904192264", "2963145597", "2911910187" ], "abstract": [ "In closed-loop FDD MIMO system, downlink channel state information (DL-CSI) is usually feedback to base station in forms of codebook or CQI, both of which aim at lowering the feedback quantity at the cost of limited feedback precision and heavy processing complexity at mobile side. Meanwhile, the recently proposed direct channel feedback method incurs great system overhead due to its exclusive occupation of uplink bandwidth resources. We propose a low-cost feedback method for DL-CSI, which spreads unquantized and uncoded DL-CSI and superimposes it onto uplink user data sequences (UL-US). Exclusive occupation of system resources by DL-CSI can thus be avoided. Due to spreading, DL-CSI can be estimated accurately with little power allocation at the cost of some UL-US's SER performance", "A major obstacle for widespread deployment of frequency division duplex (FDD)-based Massive multiple-input multiple-output (MIMO) communications is the large signaling overhead for reporting full downlink (DL) channel state information (CSI) back to the basestation (BS), in order to enable closed-loop precoding. We completely remove this overhead by a deep-learning based channel extrapolation (or \"prediction\") approach and demonstrate that a neural network (NN) at the BS can infer the DL CSI centered around a frequency @math by solely observing uplink (UL) CSI on a different, yet adjacent frequency band around @math ; no more pilot reporting overhead is needed than with a genuine time division duplex (TDD)-based system. The rationale is that scatterers and the large-scale propagation environment are sufficiently similar to allow a NN to learn about the physical connections and constraints between two neighboring frequency bands, and thus provide a well-operating system even when classic extrapolation methods, like the Wiener filter (used as a baseline for comparison throughout) fails. We study its performance for various state-of-the-art Massive MIMO channel models, and, even more so, evaluate the scheme using actual Massive MIMO channel measurements, rendering it to be practically feasible at negligible loss in spectral efficiency when compared to a genuine TDD-based system.", "Knowledge of the channel state information (CSI) at the transmitter side is one of the primary sources of information that can be used for efficient allocation of wireless resources. Obtaining Down-Link (DL) CSI in FDD systems from Up-Link (UL) CSI is not as straightforward as TDD systems, and so usually users feedback the DL-CSI to the transmitter. To remove the need for feedback (and thus having less signaling overhead), several methods have been studied to estimate DL-CSI from UL-CSI. In this paper, we propose a scheme to infer DL-CSI by observing UL-CSI in which we use two recent deep neural network structures: a) Convolutional Neural network and b) Generative Adversarial Networks. The proposed deep network structures are first learning a latent model of the environment from the training data. Then, the resulted latent model is used to predict the DL-CSI from the UL-CSI. 
We have simulated the proposed scheme and evaluated its performance in a few network settings.", "In frequency division duplex mode, the downlink channel state information (CSI) should be sent to the base station through feedback links so that the potential gains of a massive multiple-input multiple-output can be exhibited. However, such a transmission is hindered by excessive feedback overhead. In this letter, we use deep learning technology to develop CsiNet, a novel CSI sensing and recovery mechanism that learns to effectively use channel structure from training samples. CsiNet learns a transformation from CSI to a near-optimal number of representations (or codewords) and an inverse transformation from codewords to CSI. We perform experiments to demonstrate that CsiNet can recover CSI with significantly improved reconstruction quality compared with existing compressive sensing (CS)-based methods. Even at excessively low compression regions where CS-based methods cannot work, CsiNet retains effective beamforming gain.", "In this letter, we study the channel state information (CSI) feedback based on the deep autoencoder (AE) considering the feedback errors and feedback delay in the frequency division duplex massive multiple-input multiple-output system. We construct the deep AE by modeling the CSI feedback process, which involves feedback transmission errors and delays. The deep AE is trained by setting the delayed version of the downlink channel as the desired output. The proposed scheme reduces the impact of the feedback errors and feedback delay. Simulation results demonstrate that the proposed scheme achieves better performance than other comparable schemes." ] }
1907.11770
2965569712
In this paper we compare learning-based methods and classical methods for navigation in virtual environments. We construct classical navigation agents and demonstrate that they outperform state-of-the-art learning-based agents on two standard benchmarks: MINOS and Stanford Large-Scale 3D Indoor Spaces. We perform detailed analysis to study the strengths and weaknesses of learned agents and classical agents, as well as how characteristics of the virtual environment impact navigation performance. Our results show that learned agents have inferior collision avoidance and memory management, but are superior in handling ambiguity and noise. These results can inform future design of navigation agents.
Error analysis has played an important role in computer vision research, such as object detection @cite_41 and VQA @cite_14. Although many learning-based methods have recently been proposed for navigation @cite_11 @cite_3 @cite_2 @cite_29 @cite_33, there has been little work focused on error analysis of state-of-the-art methods. The closest to ours are the concurrent works by Mishkin et al. @cite_34 and Savva et al. @cite_13, who benchmarked learned agents against classical ones in indoor simulators. Our work shares similarity in comparing learned and classical agents, but differs in that we propose new metrics to diagnose various aspects of navigation capability, including collision avoidance, memory management, and exploitation of available information.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_41", "@cite_29", "@cite_3", "@cite_2", "@cite_34", "@cite_13", "@cite_11" ], "mid": [ "2597425697", "", "1832500336", "", "", "", "2918642789", "2929928372", "2952578114" ], "abstract": [ "In visual question answering (VQA), an algorithm must answer text-based questions about images. While multiple datasets for VQA have been created since late 2014, they all have flaws in both their content and the way algorithms are evaluated on them. As a result, evaluation scores are inflated and predominantly determined by answering easier questions, making it difficult to compare different methods. In this paper, we analyze existing VQA algorithms using a new dataset called the Task Driven Image Understanding Challenge (TDIUC), which has over 1.6 million questions organized into 12 different categories. We also introduce questions that are meaningless for a given image to force a VQA system to reason about image content. We propose new evaluation schemes that compensate for over-represented question-types and make it easier to study the strengths and weaknesses of algorithms. We analyze the performance of both baseline and state-of-the-art VQA models, including multi-modal compact bilinear pooling (MCB), neural module networks, and recurrent answering units. Our experiments establish how attention helps certain categories more than others, determine which models work better than others, and explain how simple models (e.g. MLP) can surpass more complex models (MCB) by simply learning to answer large, easy question categories.", "", "This paper shows how to analyze the influences of object characteristics on detection performance and the frequency and impact of different types of false positives. In particular, we examine effects of occlusion, size, aspect ratio, visibility of parts, viewpoint, localization error, and confusion with semantically similar objects, other labeled objects, and background. We analyze two classes of detectors: the multiple kernel learning detector and different versions of the detector. Our study shows that sensitivity to size, localization error, and confusion with similar objects are the most impactful forms of error. Our analysis also reveals that many different kinds of improvement are necessary to achieve large gains, making more detailed analysis essential for the progress of recognition research. By making our software and annotations available, we make it effortless for future researchers to perform similar analysis.", "", "", "", "Navigation research is attracting renewed interest with the advent of learning-based methods. However, this new line of work is largely disconnected from well-established classic navigation approaches. In this paper, we take a step towards coordinating these two directions of research. We set up classic and learning-based navigation systems in common simulated environments and thoroughly evaluate them in indoor spaces of varying complexity, with access to different sensory modalities. Additionally, we measure human performance in the same environments. We find that a classic pipeline, when properly tuned, can perform very well in complex cluttered environments. On the other hand, learned systems can operate more robustly with a limited sensor suite. Overall, both approaches are still far from human-level performance.", "We present Habitat, a new platform for research in embodied artificial intelligence (AI). 
Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation, before transferring the learned skills to reality. Specifically, Habitat consists of the following: 1. Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling (with built-in support for SUNCG, Matterport3D, Gibson datasets). Habitat-Sim is fast -- when rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU, which is orders of magnitude faster than the closest simulator. 2. Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms -- defining embodied AI tasks (e.g. navigation, instruction following, question answering), configuring and training embodied agents (via imitation or reinforcement learning, or via classic SLAM), and benchmarking using standard metrics. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or merely' impractical. Specifically, in the context of point-goal navigation (1) we revisit the comparison between learning and SLAM approaches from two recent works and find evidence for the opposite conclusion -- that learning outperforms SLAM, if scaled to total experience far surpassing that of previous investigations, and (2) we conduct the first cross-dataset generalization experiments train, test x Matterport3D, Gibson for multiple sensors blind, RGB, RGBD, D and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.", "We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments." ] }
1907.11770
2965569712
In this paper we compare learning-based methods and classical methods for navigation in virtual environments. We construct classical navigation agents and demonstrate that they outperform state-of-the-art learning-based agents on two standard benchmarks: MINOS and Stanford Large-Scale 3D Indoor Spaces. We perform detailed analysis to study the strengths and weaknesses of learned agents and classical agents, as well as how characteristics of the virtual environment impact navigation performance. Our results show that learned agents have inferior collision avoidance and memory management, but are superior in handling ambiguity and noise. These results can inform future design of navigation agents.
Another line of research follows a more modular approach by developing learning-based navigation modules that can be integrated into a larger network. For example, localization can be formulated as a 3-DOF or 6-DOF camera pose estimation problem and performed by a deep network @cite_22 @cite_7 @cite_21. Learning-based approaches have also been studied in the context of SLAM @cite_36 @cite_18 @cite_25. Most relevant to our work, Tamar et al. propose the Value Iteration Network (VIN) @cite_43 as a differentiable planner, and Gupta et al. integrate VIN with a differentiable mapper to propose CMP @cite_28, an end-to-end mapper-planner which we analyze as the state-of-the-art method with specially designed components.
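The computation that VIN unrolls can be illustrated with a few lines of numpy (a hand-written value-iteration loop on a toy grid world with assumed rewards and obstacles; in VIN the analogous local backups are implemented as learned convolution filters and trained end-to-end):

```python
# Minimal sketch of the computation a Value Iteration Network unrolls:
# repeated Bellman backups on a 2-D grid, where each backup looks only at a
# local neighbourhood (the role played by learned convolution filters in VIN).
import numpy as np

H = W = 8
reward = -0.04 * np.ones((H, W))
reward[7, 7] = 1.0                         # goal cell
obstacle = np.zeros((H, W), dtype=bool)
obstacle[3, 1:6] = True
reward[obstacle] = -1.0

def shift(v, dr, dc):
    """Return an array where out[r, c] = v[r - dr, c - dc] (off-grid -> -inf)."""
    out = np.full_like(v, -np.inf)
    rs, re = max(dr, 0), H + min(dr, 0)
    cs, ce = max(dc, 0), W + min(dc, 0)
    out[rs:re, cs:ce] = v[rs - dr:re - dr, cs - dc:ce - dc]
    return out

value = np.zeros((H, W))
for _ in range(40):                        # K recurrent "VI iterations"
    neighbours = [shift(value, dr, dc) for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
    value = reward + 0.95 * np.maximum.reduce(neighbours)
    value[obstacle] = -1.0                 # pin obstacle cells
print(round(value[0, 0], 3))               # converged value of the start cell
```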
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_36", "@cite_28", "@cite_21", "@cite_43", "@cite_25" ], "mid": [ "", "2556455135", "2200124539", "2795447360", "2951660448", "2592936284", "2258731934", "" ], "abstract": [ "", "RANSAC is an important algorithm in robust optimization and a central building block for many computer vision applications. In recent years, traditionally hand-crafted pipelines have been replaced by deep learning pipelines, which can be trained in an end-to-end fashion. However, RANSAC has so far not been used as part of such deep learning pipelines, because its hypothesis selection procedure is non-differentiable. In this work, we present two different ways to overcome this limitation. The most promising approach is inspired by reinforcement learning, namely to replace the deterministic hypothesis selection by a probabilistic selection for which we can derive the expected loss w.r.t. to all learnable parameters. We call this approach DSAC, the differentiable counterpart of RANSAC. We apply DSAC to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches. We demonstrate that by directly minimizing the expected loss of the output camera poses, robustly estimated by RANSAC, we achieve an increase in accuracy. In the future, any deep learning pipeline can use DSAC as a robust optimization component.", "We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 3 degrees accuracy for large scale outdoor scenes and 0.5m and 5 degrees accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show that the PoseNet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples.", "The representation of geometry in real-time 3D perception systems continues to be a critical research issue. Dense maps capture complete surface shape and can be augmented with semantic labels, but their high dimensionality makes them computationally costly to store and process, and unsuitable for rigorous probabilistic inference. Sparse feature-based representations avoid these problems, but capture only partial scene information and are mainly useful for localisation only. We present a new compact but dense representation of scene geometry which is conditioned on the intensity data from a single image and generated from a code consisting of a small number of parameters. We are inspired by work both on learned depth from images, and auto-encoders. Our approach is suitable for use in a keyframe-based monocular dense SLAM system: While each keyframe with a code can produce a depth map, the code can be optimised efficiently jointly with pose variables and together with the codes of overlapping keyframes to attain global consistency. 
Conditioning the depth map on the image allows the code to only represent aspects of the local geometry which cannot directly be predicted from the image. We explain how to learn our code representation, and demonstrate its advantageous properties in monocular SLAM.", "We introduce a neural architecture for navigation in novel environments. Our proposed architecture learns to map from first-person views and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the task, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. We train and test CMP on navigation problems in simulation environments derived from scans of real world buildings. Our experiments demonstrate that CMP outperforms alternate learning-based architectures, as well as, classical mapping and path planning approaches in many cases. Furthermore, it naturally extends to semantically specified goals, such as 'going to a chair'. We also deploy CMP on physical robots in indoor environments, where it achieves reasonable performance, even though it is trained entirely in simulation.", "This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.", "We introduce the value iteration network (VIN): a fully differentiable neural network with a planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.", "" ] }
1907.11653
2965308893
The prediction of electrical power in combined cycle power plants is a key challenge in the electrical power and energy systems field. This power output can vary depending on environmental variables, such as temperature, pressure, and humidity. Thus, the business problem is how to predict the power output as a function of these environmental conditions in order to maximize profit. The research community has solved this problem by applying machine learning techniques and has managed to reduce the computational and time costs in comparison with the traditional thermodynamical analysis. Until now, this challenge has been tackled from a batch learning perspective, in which data is assumed to be at rest and models do not continuously integrate new information into already constructed models. We present an approach closer to the Big Data and Internet of Things paradigms, in which data arrives continuously and models learn incrementally, achieving significant enhancements in terms of data processing (time, memory, and computational costs) and obtaining competitive performance. This work compares and examines the hourly electrical power prediction of several streaming regressors, and discusses the best technique in terms of processing time and performance to be applied in this streaming scenario.
Regarding the SL topic, much research has focused on it due to its relevance, such as @cite_3 @cite_57 @cite_36 @cite_44 @cite_25 , and more recently @cite_28 @cite_51 @cite_30 @cite_5 . The application of regression techniques to SL has recently been addressed in @cite_16 , where the authors cover the most important online regression methods. The work @cite_37 deals with ensemble learning from data streams and focuses specifically on regression ensembles. The authors of @cite_4 propose several criteria for efficient sample selection for SL regression problems within an online active learning context. In general, regression tasks in SL have not received as much attention as classification tasks; this was spotlighted in @cite_21 , where researchers carried out a study and an empirical evaluation of a set of online algorithms for regression, including the baseline Hoeffding-based regression trees, online option trees, and an online least mean squares filter.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_4", "@cite_36", "@cite_28", "@cite_21", "@cite_3", "@cite_57", "@cite_44", "@cite_5", "@cite_16", "@cite_51", "@cite_25" ], "mid": [ "", "2585528949", "2574867284", "2588336250", "2762596667", "2073427650", "2758219826", "2528421823", "2602516395", "", "2785331100", "2885060605", "" ], "abstract": [ "", "A comprehensive survey of ensemble approaches for data stream analysis.Taxonomy of ensemble algorithms for various data stream mining tasks.Discussion of open research problems and lines of future research. In many applications of information systems learning algorithms have to act in dynamic environments where data are collected in the form of transient data streams. Compared to static data mining, processing streams imposes new computational requirements for algorithms to incrementally process incoming examples while using limited memory and time. Furthermore, due to the non-stationary characteristics of streaming data, prediction models are often also required to adapt to concept drifts. Out of several new proposed stream algorithms, ensembles play an important role, in particular for non-stationary environments. This paper surveys research on ensembles for data stream classification as well as regression tasks. Besides presenting a comprehensive spectrum of ensemble approaches for data streams, we also discuss advanced learning concepts such as imbalanced data streams, novelty detection, active and semi-supervised learning, complex data representations and structured outputs. The paper concludes with a discussion of open research problems and lines of future research.", "In this paper, we propose three criteria for efficient sample selection in case of data stream regression problems within an online active learning context. The selection becomes important whenever the target values, which guide the update of the regressors as well as the implicit model structures, are costly or time-consuming to measure and also in case when very fast models updates are required to cope with stream mining real-time demands. Reducing the selected samples as much as possible while keeping the predictive accuracy of the models on a high level is, thus, a central challenge. This should be ideally achieved in unsupervised and single-pass manner. Our selection criteria rely on three aspects: 1) the extrapolation degree combined with the model's nonlinearity degree , which is measured in terms of a new specific homogeneity criterion among adjacent local approximators; 2) the uncertainty in model outputs, which can be measured in terms of confidence intervals using so-called adaptive local error bars — we integrate a weighted localization of an incremental noise level estimator and propose formulas for online merging of local error bars; 3) the uncertainty in model parameters, which is estimated by the so-called A-optimality criterion, which relies on the Fisher information matrix. The selection criteria are developed in combination with evolving generalized Takagi–Sugeno (TS) fuzzy models (containing rules in arbitrarily rotated position), as it could be shown in previous publications that these outperform conventional evolving TS models (containing axis-parallel rules). The results based on three high-dimensional real-world streaming problems show that a model update based on only 10 –20 selected samples can still achieve similar accumulated model errors over time to the case when performing a full model update on all samples. 
This can be achieved with a negligible sensitivity on the size of the active learning latency buffer. Random sampling with the same percentages of samples selected, however, achieved much higher error rates. Hence, the intelligence in our sample selection concept leads to an economic balance between model accuracy and measurement as well computational costs for model updates.", "Data preprocessing and reduction have become essential techniques in current knowledge discovery scenarios, dominated by increasingly large datasets. These methods aim at reducing the complexity inherent to real-world datasets, so that they can be easily processed by current data mining solutions. Advantages of such approaches include, among others, a faster and more precise learning process, and more understandable structure of raw data. However, in the context of data preprocessing techniques for data streams have a long road ahead of them, despite online learning is growing in importance thanks to the development of Internet and technologies for massive data collection. Throughout this survey, we summarize, categorize and analyze those contributions on data preprocessing that cope with streaming data. This work also takes into account the existing relationships between the different families of methods (feature and instance selection, and discretization). To enrich our study, we conduct thorough experiments using the most relevant contributions and present an analysis of their predictive performance, reduction rates, computational time, and memory usage. Finally, we offer general advices about existing data stream preprocessing algorithms, as well as discuss emerging future challenges to be faced in the domain of data stream preprocessing.", "Abstract Nowadays fast-arriving information flows lay the basis of many data mining applications. Such data streams are usually affected by non-stationary events that eventually change their distribution (concept drift), causing that predictive models trained over these data become obsolete and do not adapt suitably to the new distribution. Specially in online learning scenarios, there is a pressing need for new algorithms that adapt to this change as fast as possible, while maintaining good performance scores. Recent studies have revealed that a good strategy is to construct highly diverse ensembles towards utilizing them shortly after the drift (independently from the type of drift) to obtain good performance scores. However, the existence of the so-called trade-off between stability (performance over stable data concepts) and plasticity (recovery and adaptation after drift events) implies that the construction of the ensemble model should account simultaneously for these two conflicting objectives. In this regard, this work presents a new approach to artificially generate an optimal diversity level when building prediction ensembles once shortly after a drift occurs. The approach uses a Kernel Density Estimation (KDE) method to generate synthetic data, which are subsequently labeled by means a multi-objective optimization method that allows training each model of the ensemble with a different subset of synthetic samples. 
Computational experiments reveal that the proposed approach can be hybridized with other traditional diversity generation approaches, yielding optimized levels of diversity that render an enhanced recovery from drifts.", "Abstract The emergence of ubiquitous sources of streaming data has given rise to the popularity of algorithms for online machine learning. In that context, Hoeffding trees represent the state-of-the-art algorithms for online classification. Their popularity stems in large part from their ability to process large quantities of data with a speed that goes beyond the processing power of any other streaming or batch learning algorithm. As a consequence, Hoeffding trees have often been used as base models of many ensemble learning algorithms for online classification. However, despite the existence of many algorithms for online classification, ensemble learning algorithms for online regression do not exist. In particular, the field of online any-time regression analysis seems to have experienced a serious lack of attention. In this paper, we address this issue through a study and an empirical evaluation of a set of online algorithms for regression, which includes the baseline Hoeffding-based regression trees, online option trees, and an online least mean squares filter. We also design, implement and evaluate two novel ensemble learning methods for online regression: online bagging with Hoeffding-based model trees, and an online RandomForest method in which we have used a randomized version of the online model tree learning algorithm as a basic building block. Within the study presented in this paper, we evaluate the proposed algorithms along several dimensions: predictive accuracy and quality of models, time and memory requirements, bias–variance and bias–variance–covariance decomposition of the error, and responsiveness to concept drift.", "Abstract Recently, incremental and on-line learning gained more attention especially in the context of big data and learning from data streams, conflicting with the traditional assumption of complete data availability. Even though a variety of different methods are available, it often remains unclear which of them is suitable for a specific task and how they perform in comparison to each other. We analyze the key properties of eight popular incremental methods representing different algorithm classes. Thereby, we evaluate them with regards to their on-line classification error as well as to their behavior in the limit. Further, we discuss the often neglected issue of hyperparameter optimization specifically for each method and test how robustly it can be done based on a small set of examples. Our extensive evaluation on data sets with different characteristics gives an overview of the performance with respect to accuracy, convergence speed as well as model complexity, facilitating the choice of the best method for a given application.", "Recent advances in computational intelligent systems have focused on addressing complex problems related to the dynamicity of the environments. In increasing number of real world applications, data are presented as streams that may evolve over time and this is known by concept drift. Handling concept drift is becoming an attractive topic of research that concerns multidisciplinary domains such that machine learning, data mining, ubiquitous knowledge discovery, statistic decision theory, etc... 
Therefore, a rich body of the literature has been devoted to the study of methods and techniques for handling drifting data. However, this literature is fairly dispersed and it does not define guidelines for choosing an appropriate approach for a given application. Hence, the main objective of this survey is to present an ease understanding of the concept drift issues and related works, in order to help researchers from different disciplines to consider concept drift handling in their applications. This survey covers different facets of existing approaches, evokes discussion and helps readers to underline the sharp criteria that allow them to properly design their own approach. For this purpose, a new categorization of the existing state-of-the-art is presented with criticisms, future tendencies and not-yet-addressed challenges.", "Ensemble-based methods are among the most widely used techniques for data stream classification. Their popularity is attributable to their good performance in comparison to strong single learners while being relatively easy to deploy in real-world applications. Ensemble algorithms are especially useful for data stream learning as they can be integrated with drift detection algorithms and incorporate dynamic updates, such as selective removal or addition of classifiers. This work proposes a taxonomy for data stream ensemble learning as derived from reviewing over 60 algorithms. Important aspects such as combination, diversity, and dynamic updates, are thoroughly discussed. Additional contributions include a listing of popular open-source tools and a discussion about current data stream research challenges and how they relate to ensemble learning (big data streams, concept evolution, feature drifts, temporal dependencies, and others).", "", "The area of online machine learning in big data streams covers algorithms that are (1) distributed and (2) work from data streams with only a limited possibility to store past data. The first requirement mostly concerns software architectures and efficient algorithms. The second one also imposes nontrivial theoretical restrictions on the modeling methods: In the data stream model, older data is no longer available to revise earlier suboptimal modeling decisions as the fresh data arrives. In this article, we provide an overview of distributed software architectures and libraries as well as machine learning models for online learning. We highlight the most important ideas for classification, regression, recommendation, and unsupervised modeling from streaming data, and we show how they are implemented in various distributed data stream processing systems. This article is a reference material and not a survey. We do not attempt to be comprehensive in describing all existing methods and solutions; rather, we give pointers to the most important resources in the field. All related sub-fields, online algorithms, online learning, and distributed data processing are hugely dominant in current research and development with conceptually new research results and software components emerging at the time of writing. In this article, we refer to several survey results, both for distributed data processing and for online machine learning. Compared to past surveys, our article is different because we discuss recommender systems in extended detail.", "Abstract Nowadays huge volumes of data are produced in the form of fast streams, which are further affected by non-stationary phenomena. 
The resulting lack of stationarity in the distribution of the produced data calls for efficient and scalable algorithms for online analysis capable of adapting to such changes (concept drift). The online learning field has lately turned its focus on this challenging scenario, by designing incremental learning algorithms that avoid becoming obsolete after a concept drift occurs. Despite the noted activity in the literature, a need for new efficient and scalable algorithms that adapt to the drift still prevails as a research topic deserving further effort. Surprisingly, Spiking Neural Networks, one of the major exponents of the third generation of artificial neural networks, have not been thoroughly studied as an online learning approach, even though they are naturally suited to easily and quickly adapting to changing environments. This work covers this research gap by adapting Spiking Neural Networks to meet the processing requirements that online learning scenarios impose. In particular the work focuses on limiting the size of the neuron repository and making the most of this limited size by resorting to data reduction techniques. Experiments with synthetic and real data sets are discussed, leading to the empirically validated assertion that, by virtue of a tailored exploitation of the neuron repository, Spiking Neural Networks adapt better to drifts, obtaining higher accuracy scores than naive versions of Spiking Neural Networks for online learning environments.", "" ] }
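The record above surveys streaming (online) regression; one of the baselines mentioned in @cite_21 is an online least mean squares filter. As a minimal illustration of the incremental-learning idea (one pass, one sample at a time, no stored history), the following Python sketch implements such an LMS regressor from scratch. The learning rate, feature dimension, and synthetic data are illustrative assumptions, not taken from any of the cited works.

# Minimal online least-mean-squares (LMS) regressor: one pass, one sample at a time.
import numpy as np

class OnlineLMSRegressor:
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)   # weights
        self.b = 0.0                    # intercept
        self.lr = lr                    # learning rate (step size)

    def predict_one(self, x):
        return float(np.dot(self.w, x) + self.b)

    def learn_one(self, x, y):
        # Gradient step on the squared error of this single sample.
        error = self.predict_one(x) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error
        return error

# Toy usage: predict a power-like output from three environmental features.
rng = np.random.default_rng(0)
model = OnlineLMSRegressor(n_features=3)
true_w = np.array([2.0, -1.0, 0.5])
for t in range(1000):
    x = rng.normal(size=3)
    y = float(true_w @ x) + 0.1 * rng.normal()
    model.learn_one(x, y)          # incremental update; no past data is stored
print(model.w)                     # should approach true_w

A real streaming experiment would typically also track prequential error and pair the regressor with a drift detector, which this sketch omits.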
1907.11752
2965643485
Decision making under uncertain conditions has been well studied when uncertainty can only be considered at the associative level of information. The classical Theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rationally making choices using associative information. We provide here a previous result from Pearl and show that it can be considered as a causal version of the von Neumann-Morgenstern Theorem; furthermore, we consider the case when the true causal mechanism that controls the environment is unknown to the decision maker and propose a causal version of the Savage Theorem. As applications, we argue how previous optimal action learning methods for causal environments fit within the Causal Savage Theorem we present, thus showing the utility of our result in the justification and design of learning algorithms; furthermore, we define a Causal Nash Equilibrium for a strategic game in a causal environment in terms of the preferences induced by our Causal Decision Making Theorem.
A previous attempt to formalize Decision Theory in the presence of Causal Information is given in @cite_53 , @cite_70 . According to such formulation, a decision maker must choose whatever action is more likely to (causally) produce desired outcomes while keeping any beliefs about causal relations fixed ( @cite_4 ). This is stated by the Stalnaker ( @cite_59 ) equation where @math is to be read as @math @math ( @cite_41 , @cite_11 ). Lewis' and Joyce's work captured the intuition that causal relations may be used to control the environment and to predict what is caused by the actions of a decision maker. In Section we refine the @math operator by an explicit way of calculating the probability of causing an outcome by doing a certain action in terms of Pearl's do-calculus.
{ "cite_N": [ "@cite_4", "@cite_70", "@cite_41", "@cite_53", "@cite_59", "@cite_11" ], "mid": [ "1584574224", "2020595559", "1511574020", "2038908222", "1590902735", "601509448" ], "abstract": [ "This introduction to decision theory offers comprehensive and accessible discussions of decision-making under ignorance and risk, the foundations of utility theory, the debate over subjective and o ...", "Preface Introduction: a chance to reconsider 1. Prudential rationality as expected utility maximization 2. Decision problems 3. Savage's theory 4. Evidential decision theory 5. Causal decision theory 6. A general theory of conditional beliefs 7. A representation theorem for causal decision theory 8. Where things stand Notes References.", "We begin with a rough theory of rational decision-making. In the first place, rational decision-making involves conditional propositions: when a person weighs a major decision, it is rational for him to ask, for each act he considers, what would happen if he performed that act. It is rational, then, for him to consider propositions of the form ‘If I were to do a, then c would happen’. Such a proposition we shall call a counterfactual, and we shall form counterfactuals with a connective ‘☐→' on this pattern: ‘If I were to do a, then c would happen’ is to be written ‘I do a ‘☐→' c happens’.", "Abstract Newcomb's problem and similar cases show the need to incorporate causal distinctions into the theory of rational decision; the usual noncausal decision theory, though simpler, does not always give the right answers. I give my own version of causal decision theory, compare it with versions offered by several other authors, and suggest that the versions have more in common than meets the eye.", "A conditional sentence expresses a proposition which is a function of two other propositions, yet not one which is a truth function of those propositions. I may know the truth values of “Willie Mays played in the American League” and “Willie Mays hit four hundred” without knowing whether or not Mays, would have hit four hundred if he had played in the American League. This fact has tended to puzzle, displease, or delight philosophers, and many have felt that it is a fact that calls for some comment or explanation. It has given rise to a number of philosophical problems; I shall discuss three of these.", "1. Introduction 2. A brief history of causality 3. Probability, logic and probabilistic temporal logic 4. Defining causality 5. Inferring causality 6. Token causality 7. Case studies 8. Conclusion Appendix A. A little bit of statistics Appendix B. Proofs." ] }
1907.11752
2965643485
Decision making under uncertain conditions has been well studied when uncertainty can only be considered at the associative level of information. The classical Theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rationally making choices using associative information. We provide here a previous result from Pearl and show that it can be considered as a causal version of the von Neumann-Morgenstern Theorem; furthermore, we consider the case when the true causal mechanism that controls the environment is unknown to the decision maker and propose a causal version of the Savage Theorem. As applications, we argue how previous optimal action learning methods for causal environments fit within the Causal Savage Theorem we present, thus showing the utility of our result in the justification and design of learning algorithms; furthermore, we define a Causal Nash Equilibrium for a strategic game in a causal environment in terms of the preferences induced by our Causal Decision Making Theorem.
@cite_43 provides a framework for defining the notions of cause and effect in terms of decision-theoretic concepts, such as states and outcomes, and gives a theoretical basis for the graphical description of causes and effects, such as causal influence diagrams ( @cite_54 ). Heckerman gave an elegant definition of causality, but did not address how to actually make choices using causal information.
{ "cite_N": [ "@cite_43", "@cite_54" ], "mid": [ "1480413091", "2169315100" ], "abstract": [ "We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.", "Summary We consider a variety of ways in which probabilistic and causal models can be represented in graphical form. By adding nodes to our graphs to represent parameters, decisions, etc., we obtain a generalisation of influence diagrams that supports meaningful causal modelling and inference, and only requires concepts and methods that are already standard in the purely probabilistic case. We relate our representations to others, particularly functional models, and present arguments and examples in favour of their superiority." ] }
1907.11703
2966816175
Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., a planning algorithm such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
Safe Reinforcement Learning tries to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes @cite_30 . Roughly, there are two ways of doing safe RL: some methods adapt the optimality criterion, while others adapt the exploration mechanism. Our work uses continuous action guidance of lookahead search with MCTS for better exploration.
{ "cite_N": [ "@cite_30" ], "mid": [ "1845972764" ], "abstract": [ "Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and or respect safety constraints during the learning and or deployment processes. We categorize and analyze two approaches of Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as suggesting future directions for Safe Reinforcement Learning." ] }
1907.11703
2966816175
Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., a planning algorithm such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
Approaches such as DAgger @cite_31 formulate imitation learning as a supervised problem where the aim is to match the demonstrator performance. However, the performance of agents using these methods is upper-bounded by the demonstrator. Recent works such as Expert Iteration @cite_2 and AlphaGo Zero @cite_4 extend imitation learning to the RL setting where the demonstrator is also continuously improved during training. There has been a growing body of work on imitation learning where demonstrators' data is used to speed up policy learning in RL @cite_10 .
{ "cite_N": [ "@cite_10", "@cite_31", "@cite_4", "@cite_2" ], "mid": [ "2788862220", "2962957031", "2766447205", "2618097077" ], "abstract": [ "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.", "Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches (Daume , 2009; Ross and Bagnell, 2010) provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.", "Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.", "Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. 
Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex, the previous state-of-the-art Hex player." ] }
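The paragraph above contrasts pure imitation (whose performance is capped by the demonstrator) with approaches that keep improving beyond it. One common way to combine the two, assuming a PyTorch policy network, is to add a demonstrator cross-entropy term to the usual policy-gradient loss; the sketch below shows only that loss combination. The network size, weighting coefficient, and dummy batches are illustrative assumptions, not the exact method of any cited paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 6))  # 8 features, 6 actions
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def combined_loss(states, taken_actions, advantages, demo_states, demo_actions, aux_weight=0.5):
    # Standard policy-gradient term on the agent's own experience.
    log_probs = F.log_softmax(policy(states), dim=-1)
    pg_loss = -(log_probs.gather(1, taken_actions.unsqueeze(1)).squeeze(1) * advantages).mean()
    # Auxiliary supervised term: imitate the (possibly non-expert) demonstrator's actions.
    demo_loss = F.cross_entropy(policy(demo_states), demo_actions)
    return pg_loss + aux_weight * demo_loss

# Dummy batch just to show the call signature.
loss = combined_loss(
    states=torch.randn(32, 8),
    taken_actions=torch.randint(0, 6, (32,)),
    advantages=torch.randn(32),
    demo_states=torch.randn(16, 8),
    demo_actions=torch.randint(0, 6, (16,)),
)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Because the auxiliary term is only one component of the loss, the learned policy is not bounded by the demonstrator in the way a pure supervised imitator would be.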
1907.11703
2966816175
Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., a planning algorithm such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
( hester2017deep ) used demonstrator data to pre-train within the DQN algorithm by combining the supervised learning loss with the Q-learning loss, and showed that their method achieves good results on Atari games using only a few minutes of game-play data. ( kim2013learning ) proposed a learning-from-demonstration approach where limited demonstrator data is used to impose constraints on the policy iteration phase. Another recent work @cite_1 used planner demonstrations to learn a value function, which was then further refined with RL and a short-horizon planner for robotic manipulation tasks.
{ "cite_N": [ "@cite_1" ], "mid": [ "2951680172" ], "abstract": [ "Manipulation in clutter requires solving complex sequential decision making problems in an environment rich with physical interactions. The transfer of motion planning solutions from simulation to the real world, in open-loop, suffers from the inherent uncertainty in modelling real world physics. We propose interleaving planning and execution in real-time, in a closed-loop setting, using a Receding Horizon Planner (RHP) for pushing manipulation in clutter. In this context, we address the problem of finding a suitable value function based heuristic for efficient planning, and for estimating the cost-to-go from the horizon to the goal. We estimate such a value function first by using plans generated by an existing sampling-based planner. Then, we further optimize the value function through reinforcement learning. We evaluate our approach and compare it to state-of-the-art planning techniques for manipulation in clutter. We conduct experiments in simulation with artificially injected uncertainty on the physics parameters, as well as in real world tasks of manipulation in clutter. We show that this approach enables the robot to react to the uncertain dynamics of the real world effectively." ] }
1907.11703
2966816175
Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., a planning algorithm such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
Previous work @cite_19 combined planning and RL in such a way that RL explores the action space filtered down by the planner, outperforming the use of either a planner or RL alone. Other work @cite_23 employed MCTS as a high-level planner, which is fed a set of low-level, offline-learned DRL policies and refines them for safer execution within a simulated autonomous driving domain. A recent work by ( vodopivec2017monte ) unified RL, planning, and search.
{ "cite_N": [ "@cite_19", "@cite_23" ], "mid": [ "2518731509", "2911324515" ], "abstract": [ "Automated planning and reinforcement learning are characterized by complementary views on decision making: the former relies on previous knowledge and computation, while the latter on interaction with the world, and experience. Planning allows robots to carry out different tasks in the same domain, without the need to acquire knowledge about each one of them, but relies strongly on the accuracy of the model. Reinforcement learning, on the other hand, does not require previous knowledge, and allows robots to robustly adapt to the environment, but often necessitates an infeasible amount of experience. We present Domain Approximation for Reinforcement LearnING (DARLING), a method that takes advantage of planning to constrain the behavior of the agent to reasonable choices, and of reinforcement learning to adapt to the environment, and increase the reliability of the decision making process. We demonstrate the effectiveness of the proposed method on a service robot, carrying out a variety of tasks in an office building. We find that when the robot makes decisions by planning alone on a given model it often fails, and when it makes decisions by reinforcement learning alone it often cannot complete its tasks in a reasonable amount of time. When employing DARLING, even when seeded with the same model that was used for planning alone, however, the robot can quickly learn a behavior to carry out all the tasks, improves over time, and adapts to the environment as it changes.", "Machine learning can provide efficient solutions to the complex problems encountered in autonomous driving, but ensuring their safety remains a challenge. A number of authors have attempted to address this issue, but there are few publicly-available tools to adequately explore the trade-offs between functionality, scalability, and safety. We thus present WiseMove, a software framework to investigate safe deep reinforcement learning in the context of motion planning for autonomous driving. WiseMove adopts a modular learning architecture that suits our current research questions and can be adapted to new technologies and new questions. We present the details of WiseMove, demonstrate its use on a common traffic scenario, and describe how we use it in our ongoing safe learning research." ] }
1907.11717
2966432599
The benefits of the ubiquitous caching in information centric networking (ICN) are profound; although such features make ICN promising for content distribution, they also introduce a challenge to protecting content against unauthorized access. The protection of a content against unauthorized access requires consumer authentication and involves the conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
Most existing access control schemes for secure contents are application-specific or lack security strength. For example, in @cite_0 , the authors presented a scheme for protected contents using network coding as encryption. However, the scheme requires a private connection between the publisher and consumer to obtain the decoding matrix and missing data blocks. In @cite_25 , the authors presented a security framework for copyrighted video streaming in ICN based on linear random coding. It has been shown that linear random coding alone improves the performance of ICN @cite_26 . However, in @cite_25 , each video was encrypted with a large number of symmetric encryption keys, such that each video frame was encrypted with a unique symmetric encryption key. Since only authorized users who possess the complete set of keys can decrypt the video content, distributing a large number of keys for each video introduces extra communication overhead.
{ "cite_N": [ "@cite_0", "@cite_26", "@cite_25" ], "mid": [ "2328980985", "2886568791", "2514042371" ], "abstract": [ "", "The current internet architecture is inefficient in fulfilling the demands of newly emerging internet applications. To address this issue, several over-the-top application-level solutions have been employed, making the overall architecture very complex. Information-centric-networking (ICN) architecture has emerged as a promising alternative solution. The ICN architecture decouples the content from the host at the network level and supports the temporary storage of content in an in-network cache. Fundamentally, the ICN can be considered a multisource, multicast content-delivery solution. Because of the benefits of network coding in multicasting scenarios and proven benefits in distributed storage networks, the network coding is apt for the ICN architecture. In this study, we propose a solvable linear network-coding scheme for the ICN architecture. We also propose a practical implementation of the network-coding scheme for the ICN, particularly for the content-centric network (CCN) architecture, which is termed the coded CCN. The performance results show that the network-coding scheme improves the performance of the CCN and significantly reduces the network traffic and average download delay.", "As a novel network architecture, Information-Centric Networking(ICN) has a good performance in security, mobility and scalability. Although in-network cache used in ICN can effectively solve the problem of network congestion. Meanwhile, it also brings a lot of challenges such as copyright protection. How to prevent unauthorized user access to the large-sized contents of the route has become the focus of the research. Some of the current solutions are based on the traditional encryption technology. But these solutions don't applay to ICN. Current approaches that rely on a common encryption key among authorized users cannot protect copyright well since if authorized user leaks the private key out, we cannot tell who has leaked the key out. In this paper, we use a novel scheme to solve the problem of copyright protection. In this scheme, we take the method of the network encoding. The first, we have splitted the large-sized content into N blocks. Then through linear network coding(LNC) encrypted the content, if the user can not obtain the decrypted matrix, the user will not be able to decrypt the content. Therefore, the scheme can achieve the protection of the content. Our analysis of this program shows that the scheme has a good performance in copyright protection." ] }
1907.11717
2966432599
The benefits of the ubiquitous caching in information centric networking (ICN) are profound; although such features make ICN promising for content distribution, they also introduce a challenge to protecting content against unauthorized access. The protection of a content against unauthorized access requires consumer authentication and involves the conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
In earlier work @cite_35 , the authors proposed a content access control scheme for the ICN-enabled wireless edge. The proposed scheme, named AccConF, is an extension of @cite_23 and employs a public-key algorithm and Shamir's secret sharing as building blocks. To obtain the unique interpolating polynomial of Shamir's scheme, AccConF adopts the Lagrangian interpolation technique. The calculation of the Lagrangian interpolation is computationally expensive; to reduce the client-side computational burden, the publisher piggybacks an enabling block with each content, which encapsulates partially solved Lagrangian coefficients.
{ "cite_N": [ "@cite_35", "@cite_23" ], "mid": [ "2590898937", "2136452616" ], "abstract": [ "The fast-growing Internet traffic is increasingly becoming content-based and driven by mobile users, with users more interested in data rather than its source. This has precipitated the need for an information-centric Internet architecture. Research in information-centric networks (ICNs) have resulted in novel architectures, e.g., CCN NDN, DONA, and PSIRP PURSUIT; all agree on named data based addressing and pervasive caching as integral design components. With network-wide content caching, enforcement of content access control policies become non-trivial. Each caching node in the network needs to enforce access control policies with the help of the content provider. This becomes inefficient and prone to unbounded latencies especially during provider outages. In this paper, we propose an efficient access control framework for ICN, which allows legitimate users to access and use the cached content directly, and does not require verification authentication by an online provider authentication server or the content serving router. This framework would help reduce the impact of system down-time from server outages and reduce delivery latency by leveraging caching while guaranteeing access only to legitimate users. Experimental simulation results demonstrate the suitability of this scheme for all users, but particularly for mobile users, especially in terms of the security and latency overheads.", "In this paper, we propose a novel secure content delivery framework, for an information-centric network, which will enable content providers (e.g., Netflix and Youtube) to securely disseminate their content to legitimate users via content distribution networks (CDNs) and Internet service providers (ISPs). Use of our framework will enable legitimate users to receive consume encrypted content cached at a nearby router (CDN or ISP), even when the providers are offline. Our framework would slash system-downtime due to server outages, such as that recently experienced by Netflix, Pinterest, and Instagram users in the US (October 22, 2012). It will also help the providers utilize in-network caches for shaping content transmission and reducing delivery latency. We discuss the handling of security, access control, and system dynamics challenges and demonstrate the practicality of our framework by implementing it on a CCNx testbed." ] }
1907.11717
2966432599
The benefits of the ubiquitous caching in information centric networking (ICN) are profound; although such features make ICN promising for content distribution, they also introduce a challenge to protecting content against unauthorized access. The protection of a content against unauthorized access requires consumer authentication and involves the conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
In the work by @cite_30 , access control is realized by a flexible secure content distribution architecture that combines proxy re-encryption and identity-based encryption mechanisms. The publisher generates a symmetric key and encrypts the content before dissemination. To access the content from an in-network cache or directly from the publisher, a consumer first sends a request to the publisher to acquire the symmetric encryption key. Upon receiving the key request, the publisher validates and verifies the authenticity of the consumer, and sends the symmetric key encapsulated in a response message encrypted with the consumer's identity. The proposed scheme eliminates the asymmetric encryption of content, but it is not clear how the consumer's private identity could be known to the content provider.
{ "cite_N": [ "@cite_30" ], "mid": [ "1995965669" ], "abstract": [ "Content-centric networking (CCN) project, a flavor of information-centric networking (ICN), decouples data from its source by shifting the emphasis from hosts and interfaces to information. As a result, content becomes directly accessible and routable within the network. In this data-centric paradigm, techniques for maintaining content confidentiality and privacy typically rely on cryptographic techniques similar to those used in modern digital rights management (DRM) applications, which often require multiple consumer-to-producer (end-to-end) messages to be transmitted to establish identities, acquire licenses, and access encrypted content. In this paper, we present a secure content distribution architecture for CCN that is based on proxy re-encryption. Our design provides strong end-to-end content security and reduces the number of protocol messages required for user authentication and key retrieval. Unlike widely-deployed solutions, our solution is also capable of utilizing the opportunistic in-network caches in CCN. We also experimentally compare two proxy re-encryption schemes that can be used to implement the architecture, and describe the proof of concept application we developed over CCNx." ] }
1907.11717
2966432599
The benefits of the ubiquitous caching in information centric networking (ICN) are profound; although such features make ICN promising for content distribution, they also introduce a challenge to protecting content against unauthorized access. The protection of a content against unauthorized access requires consumer authentication and involves the conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
In other work @cite_33 , the authors proposed a content access control scheme based on proxy re-encryption, in which the content is re-encrypted by an intermediate node. In the proposed scheme, the edge routers perform the content re-encryption. Upon receiving a content request, the publisher encrypts the data and a randomly generated key k1 using its public key. Upon receiving the content request, the edge router generates a random key k2, encrypted with the publisher's public key and signed by the edge router. The edge router sends the encrypted k2 to the publisher, appends the encrypted k2 to the content, and dispatches it towards the consumer. Meanwhile, the publisher verifies the authenticity of the consumer and generates the content decryption key K using k1, k2 and its public key. Upon receiving K, the consumer can decrypt the content.
{ "cite_N": [ "@cite_33" ], "mid": [ "1567993328" ], "abstract": [ "Shifting from host-oriented to data-oriented, information-centric networking (ICN) adopts several key design principles, e.g., in-network caching, to cope with the tremendous internet growth. In the ICN setting, data to be distributed can be cached by ICN routers anywhere and accessed arbitrarily by customers without data publishers' permission, which imposes new challenges when achieving data access control: (i) security: How can data publishers protect data confidentiality (either data cached by ICN routers or data accessed by authorized users) even when an authorized user's decryption key was revoked or compromised, and (ii) scalability: How can data publishers leverage ICN's promising features and enforce access control without complicated key management or extensive communication. This paper addresses these challenges by using the new proposed dual-phase encryption that uniquely combines the ideas from one-time decryption key, proxy re-encryption and all-or-nothing transformation, while still being able to leverage ICN's features. Our analysis and performance show that our solution is highly efficient and provable secure under the existing security model." ] }
1907.11717
2966432599
The benefits of the ubiquitous caching in information centric networking (ICN) are profound; although such features make ICN promising for content distribution, they also introduce a challenge to protecting content against unauthorized access. The protection of a content against unauthorized access requires consumer authentication and involves the conventional end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content by using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with the Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
In another study @cite_7 , the authors presented an access control scheme for encrypted content in ICN, which is based on the efficient unidirectional proxy re-encryption (EU-PRE) proposed in @cite_3 . The proposed scheme, named efficient unidirectional re-encryption (EU-RE), simplifies EU-PRE by eliminating the need for proxies in the re-encryption operation. However, the EU-RE scheme is still based on asymmetric cryptography, which is not suitable for several resource-constrained applications such as IoT and sensor networks. Moreover, the authors made the assumption that the content provider behaves correctly, i.e., it does not distribute any private content or decryption rights to unauthorized users. However, this assumption falsifies the protocol claims defined in @cite_20 , which means EU-RE is weak against several attacks. To verify the protocol claims, we implemented EU-RE in an automated security protocol analysis tool, Scyther @cite_10 , and presented the results in .
{ "cite_N": [ "@cite_10", "@cite_3", "@cite_20", "@cite_7" ], "mid": [ "", "2116361063", "2160964355", "2291877322" ], "abstract": [ "", "Proxy re-encryption (PRE) allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into one encrypting the same plaintext for Bob. The proxy only needs a re-encryption key given by Alice, and cannot learn anything about the plaintext encrypted. This adds flexibility in various applications, such as confidential email, digital right management and distributed storage. In this paper, we study unidirectional PRE, which the re-encryption key only enables delegation in one direction but not the opposite. In PKC 2009, Shao and Cao proposed a unidirectional PRE assuming the random oracle. However, we show that it is vulnerable to chosen-ciphertext attack (CCA). We then propose an efficient unidirectional PRE scheme (without resorting to pairings). We gain high efficiency and CCA-security using the “token-controlled encryption” technique, under the computational Diffie-Hellman assumption, in the random oracle model and a relaxed but reasonable definition.", "Authentication is one of the foremost goals of many security protocols. It is most often formalised as a form of agreement, which expresses that the communicating partners agree on the values of a number of variables. In this paper we formalise and study an intensional form of authentication which we call synchronisation. Synchronisation expresses that the messages are transmitted exactly as prescribed by the protocol description. Synchronisation is a strictly stronger property than agreement for the standard intruder model, because it can be used to detect preplay attacks. In order to prevent replay attacks on simple protocols, we also define injective synchronisation. Given a synchronising protocol, we show that a sufficient syntactic criterion exists that guarantees that the protocol is injective as well.", "The Information-centric Network (ICN) paradigm is an important initiative toward an Internet architecture more suitable for content distribution. The change it imposes by naming, routing, and forwarding content directly on the network layer empowers the architecture with several interesting characteristics, such as in-network caching. As contents are meaningful for different users, they can be opportunistically cached and easily accessed by them, which improves content delivery and user experience. However, the fact that users can retrieve content through caches without interacting with the content provider raises security concerns regarding unauthorized access and the enforcement of access control policies. In this context, we propose an access control solution for ICN by adapting and optimizing a proxy re-encryption scheme, reducing up to 33 the processing time. The proposed solution is perfectly aligned with ICN demands, simultaneously ensuring content protection against unauthorized access of contents retrieved from unrestricted in-network caches as well as access control policies enforcement for legitimate users." ] }
1907.11718
2964674144
We propose to solve the large-scale Markowitz mean-variance (MV) portfolio allocation problem using reinforcement learning (RL). By adopting the recently developed continuous-time exploratory control framework, we formulate the exploratory MV problem in high dimensions. We further show the optimality of a multivariate Gaussian feedback policy, with time-decaying variance, in trading off exploration and exploitation. Based on a provable policy improvement theorem, we devise a scalable and data-efficient RL algorithm and conduct large-scale empirical tests using data from the S&P 500 stocks. We found that our method consistently achieves over 10% annualized returns and that it outperforms econometric methods and the deep RL method by large margins, for both long and medium terms of investment with monthly and daily trading.
The difficulty of seeking the global optimum for Markov Decision Process (MDP) problems under the MV criterion has been noted previously in @cite_30 . In fact, the variance of the reward-to-go is a nonlinear function of expectations; as a result, Bellman's consistency (the dynamic programming principle) fails, and most of the well-known RL algorithms cannot be applied directly.
{ "cite_N": [ "@cite_30" ], "mid": [ "2038398071" ], "abstract": [ "We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudopolynomial exact and approximation algorithms." ] }
1907.11718
2964674144
We propose to solve the large-scale Markowitz mean-variance (MV) portfolio allocation problem using reinforcement learning (RL). By adopting the recently developed continuous-time exploratory control framework, we formulate the exploratory MV problem in high dimensions. We further show the optimality of a multivariate Gaussian feedback policy, with time-decaying variance, in trading off exploration and exploitation. Based on a provable policy improvement theorem, we devise a scalable and data-efficient RL algorithm and conduct large-scale empirical tests using data from the S&P 500 stocks. We found that our method consistently achieves over 10% annualized returns and that it outperforms econometric methods and the deep RL method by large margins, for both long and medium terms of investment with monthly and daily trading.
Existing works on variance estimation and control generally divide into value-based methods and policy-based methods. @cite_29 derived the Bellman equation for the variance of the reward-to-go under a fixed, given policy. @cite_15 further derived a TD(0) learning rule to estimate the variance, followed by @cite_7 , which applied this value-based method to an MV portfolio selection problem. It is worth noting that, due to the definition of the value function (i.e., the variance-penalized expected reward-to-go) in @cite_7 , Bellman's optimality principle does not hold. As a result, it is not guaranteed that a greedy policy based on the latest updated value function will eventually lead to the true globally optimal policy. The second approach, policy-based RL, was proposed in @cite_16 . The authors later extended the work to linear function approximators and devised actor-critic algorithms for MV optimization problems for which convergence to a local optimum is guaranteed with probability one ( @cite_17 ). Related works following this line of research include @cite_18 and @cite_4 , among others. Despite the various methods mentioned above, searching for the global optimum under the MV criterion remains an open and interesting question in RL.
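To illustrate the value-based route concretely, the sketch below runs TD(0) on both the value J and the second moment M of the reward-to-go under a fixed policy, using the second-moment Bellman equation M(s) = E[r^2 + 2*gamma*r*J(s') + gamma^2*M(s')], and then sets Var = M - J^2. The three-state Markov reward process and the learning-rate settings are made up for illustration; this is in the spirit of the cited value-based methods, not a verbatim reproduction of any of them.

```python
# Minimal sketch (in the spirit of the value-based approach, not a verbatim
# reproduction of the cited algorithms): TD(0) estimation of the value J and
# the second moment M of the reward-to-go under a fixed policy, from which
# Var = M - J^2.  The three-state Markov reward process is made up here.
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, alpha = 3, 0.9, 0.05
P = np.array([[0.7, 0.2, 0.1],          # transition matrix under the fixed policy
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])
R_mean = np.array([1.0, -0.5, 2.0])     # mean reward collected in each state

J = np.zeros(n_states)                  # estimate of E[G | s]
M = np.zeros(n_states)                  # estimate of E[G^2 | s]

s = 0
for _ in range(200_000):
    r = R_mean[s] + rng.normal(scale=0.5)        # noisy reward
    s_next = rng.choice(n_states, p=P[s])
    # TD(0) for the value function
    J[s] += alpha * (r + gamma * J[s_next] - J[s])
    # TD(0) for the second moment, using
    #   M(s) = E[ r^2 + 2*gamma*r*J(s') + gamma^2 * M(s') ]
    M[s] += alpha * (r ** 2 + 2 * gamma * r * J[s_next]
                     + gamma ** 2 * M[s_next] - M[s])
    s = s_next

variance = M - J ** 2
print("J   =", np.round(J, 2))
print("Var =", np.round(variance, 2))
```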
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_29", "@cite_15", "@cite_16", "@cite_17" ], "mid": [ "2141203641", "2963856199", "", "2313791856", "2566307933", "89883662", "1526449679" ], "abstract": [ "In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in rewards in addition to maximizing a standard criterion. Variance-related risk measures are among the most common risk-sensitive criteria in finance and operations research. However, optimizing many such criteria is known to be a hard problem. In this paper, we consider both discounted and average reward Markov decision processes. For each formulation, we first define a measure of variability for a policy, which in turn gives us a set of risk-sensitive criteria to optimize. For each of these criteria, we derive a formula for computing its gradient. We then devise actor-critic algorithms for estimating the gradient and updating the policy parameters in the ascent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in a traffic signal control application.", "In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in rewards in addition to maximizing a standard criterion. Variance related risk measures are among the most common risk-sensitive criteria in finance and operations research. However, optimizing many such criteria is known to be a hard problem. In this paper, we consider both discounted and average reward Markov decision processes. For each formulation, we first define a measure of variability for a policy, which in turn gives us a set of risk-sensitive criteria to optimize. For each of these criteria, we derive a formula for computing its gradient. We then devise actor-critic algorithms that operate on three timescales--a TD critic on the fastest timescale, a policy gradient (actor) on the intermediate timescale, and a dual ascent for Lagrange multipliers on the slowest timescale. In the discounted setting, we point out the difficulty in estimating the gradient of the variance of the return and incorporate simultaneous perturbation approaches to alleviate this. The average setting, on the other hand, allows for an actor update using compatible features to estimate the gradient of the variance. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in a traffic signal control application.", "", "Formulae are presented for the variance and higher moments of the present value of single-stage rewards in a finite Markov decision process. Similar formulae are exhibited for a semi-Markov decision process. There is a short discussion of the obstacles to using the variance formula in algorithms to maximize the mean minus a multiple of the standard deviation.", "", "In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.", "We present an actor-critic framework for MDPs where the objective is the variance-adjusted expected return. 
Our critic uses linear function approximation, and we extend the concept of compatible features to the variance-adjusted setting. We present an episodic actor-critic algorithm and show that it converges almost surely to a locally optimal point of the objective function." ] }
1907.11830
2966222024
360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications.
: Recent advances in 360 @math images resort to geometric information on the sphere. @cite_13 represent the ERP with a weighted graph and apply a graph convolutional network to generate graph-based representations. @cite_9 propose a spherical CNN equivariant to the SO(3) 3D rotation group for retrieval and classification tasks on spherical images. On top of that, @cite_30 suggest transforming the domain from the Euclidean S2 space to an SO(3) representation to reduce the distortion, and encoding rotation equivariance in the network. Meanwhile, some works attempt to handle the distortion in the ERP directly. @cite_15 transfer knowledge from a CNN pre-trained on perspective projections to a novel network operating on the ERP. Other approaches @cite_32 @cite_6 @cite_28 build on the idea of the deformable convolutional network @cite_18 and propose distortion-aware spherical convolutions, in which the convolutional filter is distorted in the same way as the objects on the ERP. Though SphConv is simple and effective, due to the implicit interpolation it cannot fully eliminate the distortion as the network grows deeper. To correct the residual distortion from SphConv, we introduce a reprojection mechanism in Rep R-CNN, which significantly increases the detection accuracy.
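The geometry behind the distortion-aware spherical convolutions mentioned above can be sketched in a few lines: a regular kernel is laid out on the tangent plane at each output location, and the inverse gnomonic projection determines where its taps fall on the ERP image. The snippet below only illustrates that geometry; the step size and the absence of interpolation are simplifying assumptions, not any particular cited implementation.

```python
# Sketch of the geometry behind distortion-aware ("spherical") convolution:
# a regular 3x3 kernel is placed on the tangent plane at a given location and
# the inverse gnomonic projection tells us where its taps land on the
# equirectangular (ERP) image.
import numpy as np

def kernel_sampling_grid(lat0, lon0, erp_h, erp_w, ksize=3, step=None):
    """Return a (ksize*ksize, 2) array of (row, col) sampling positions on the ERP."""
    if step is None:
        step = np.pi / erp_h                          # ~one ERP pixel at the equator
    offs = (np.arange(ksize) - ksize // 2) * step
    xs, ys = np.meshgrid(offs, offs)                  # tangent-plane coordinates
    rho = np.sqrt(xs ** 2 + ys ** 2)
    c = np.arctan(rho)                                # angular distance from the center
    with np.errstate(invalid="ignore", divide="ignore"):
        lat = np.arcsin(np.cos(c) * np.sin(lat0)
                        + np.where(rho > 0, ys * np.sin(c) * np.cos(lat0) / rho, 0.0))
        lon = lon0 + np.arctan2(xs * np.sin(c),
                                rho * np.cos(lat0) * np.cos(c)
                                - ys * np.sin(lat0) * np.sin(c))
    rows = (0.5 - lat / np.pi) * erp_h                # lat in [-pi/2, pi/2] -> image row
    cols = (lon / (2 * np.pi) + 0.5) % 1.0 * erp_w    # lon wraps around horizontally
    return np.stack([rows.ravel(), cols.ravel()], axis=1)

# Near the pole the same kernel covers a much wider range of columns than at
# the equator, which is exactly the distortion the adapted filters compensate for.
print(kernel_sampling_grid(0.0, 0.0, erp_h=512, erp_w=1024).round(1))
print(kernel_sampling_grid(np.deg2rad(80), 0.0, erp_h=512, erp_w=1024).round(1))
```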
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_28", "@cite_9", "@cite_32", "@cite_6", "@cite_15", "@cite_13" ], "mid": [ "", "2950477723", "2807732111", "2796422723", "2895696451", "2895250390", "2963609011", "2738767782" ], "abstract": [ "", "Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.", "", "We address the problem of 3D rotation equivariance in convolutional neural networks. 3D rotations have been a challenging nuisance in 3D classification tasks requiring higher capacity and extended data augmentation in order to tackle it. We model 3D data with multi-valued spherical functions and we propose a novel spherical convolutional network that implements exact convolutions on the sphere by realizing them in the spherical harmonic domain. Resulting filters have local symmetry and are localized by enforcing smooth spectra. We apply a novel pooling on the spectral domain and our operations are independent of the underlying spherical resolution throughout the network. We show that networks with much lower capacity and without requiring data augmentation can exhibit performance comparable to the state of the art in standard retrieval and classification benchmarks.", "Omnidirectional cameras offer great benefits over classical cameras wherever a wide field of view is essential, such as in virtual reality applications or in autonomous robots. Unfortunately, standard convolutional neural networks are not well suited for this scenario as the natural projection surface is a sphere which cannot be unwrapped to a plane without introducing significant distortions, particularly in the polar regions. In this work, we present SphereNet, a novel deep learning framework which encodes invariance against such distortions explicitly into convolutional neural networks. Towards this goal, SphereNet adapts the sampling locations of the convolutional filters, effectively reversing distortions, and wraps the filters around the sphere. By building on regular convolutions, SphereNet enables the transfer of existing perspective convolutional neural network models to the omnidirectional case. We demonstrate the effectiveness of our method on the tasks of image classification and object detection, exploiting two newly created semi-synthetic and real-world omnidirectional datasets.", "There is a high demand of 3D data for 360 (^ ) panoramic images and videos, pushed by the growing availability on the market of specialized hardware for both capturing (e.g., omni-directional cameras) as well as visualizing in 3D (e.g., head mounted displays) panoramic images and videos. At the same time, 3D sensors able to capture 3D panoramic data are expensive and or hardly available. 
To fill this gap, we propose a learning approach for panoramic depth map estimation from a single image. Thanks to a specifically developed distortion-aware deformable convolution filter, our method can be trained by means of conventional perspective images, then used to regress depth for panoramic images, thus bypassing the effort needed to create annotated panoramic training dataset. We also demonstrate our approach for emerging tasks such as panoramic monocular SLAM, panoramic semantic segmentation and panoramic style transfer.", "While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield “flat\" filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. We validate our approach compared to several alternative methods in terms of both raw CNN output accuracy as well as applying a state-of-the-art “flat\" object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.", "Omnidirectional cameras are widely used in such areas as robotics and virtual reality as they provide a wide field of view. Their images are often processed with classical methods, which might unfortunately lead to non-optimal solutions as these methods are designed for planar images that have different geometrical properties than omnidirectional ones. In this paper we study image classification task by taking into account the specific geometry of omnidirectional cameras with graph-based representations. In particular, we extend deep learning architectures to data on graphs; we propose a principled way of graph construction such that convolutional filters respond similarly for the same pattern on different positions of the image regardless of lens distortions. Our experiments show that the proposed method outperforms current techniques for the omnidirectional image classification problem." ] }
1907.11830
2966222024
360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications.
: Most high-performing modern object detectors are based on two-stage approaches. The Region-based CNN (R-CNN) approach @cite_17 attends to a set of candidate region proposals @cite_12 in the first stage, and then uses a convolutional network to regress the bounding boxes and classify the objects in the second stage. Fast R-CNN @cite_25 speeds up R-CNN by pooling the features of each proposal directly from shared feature maps using RoI pooling. Faster R-CNN @cite_2 further replaces the slow selective search with a fast region proposal network, achieving improvements in both speed and accuracy. Numerous extensions to this framework have been proposed @cite_1 @cite_7 @cite_19 @cite_26 . In contrast to two-stage approaches, single-stage pipelines such as SSD @cite_14 @cite_23 and YOLO @cite_3 @cite_8 @cite_11 skip the proposal stage and predict detections and classifications directly. Though these single-stage pipelines attract interest owing to their speed, they lack the proposal alignment step, which is important for 360 @math object detection. Hence, we adopt the two-stage method in this paper.
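As a concrete illustration of the first stage of such a detector, the sketch below tiles anchor boxes over a feature map and labels them by their IoU with a ground-truth box, in the way a region proposal network assigns objectness targets. The stride, anchor scales and IoU thresholds are illustrative assumptions rather than values from any cited work.

```python
# Minimal sketch of the proposal stage of a two-stage detector: tile anchor
# boxes over a feature map and assign objectness labels by IoU with ground
# truth, as a region proposal network does.
import numpy as np

def make_anchors(feat_h, feat_w, stride=16, scales=(64, 128, 256)):
    """Anchors as an (N, 4) array of (x1, y1, x2, y2) in image coordinates."""
    cy, cx = np.meshgrid(np.arange(feat_h), np.arange(feat_w), indexing="ij")
    centers = np.stack([cx, cy], axis=-1).reshape(-1, 2) * stride + stride / 2
    anchors = []
    for s in scales:                                  # one square anchor per scale
        half = s / 2
        anchors.append(np.concatenate([centers - half, centers + half], axis=1))
    return np.concatenate(anchors, axis=0)

def iou(boxes, gt):
    """IoU between (N, 4) anchors and a single (4,) ground-truth box."""
    x1 = np.maximum(boxes[:, 0], gt[0]); y1 = np.maximum(boxes[:, 1], gt[1])
    x2 = np.minimum(boxes[:, 2], gt[2]); y2 = np.minimum(boxes[:, 3], gt[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_b + area_g - inter)

anchors = make_anchors(feat_h=32, feat_w=64)          # e.g. a 512x1024 image, stride 16
gt_box = np.array([300.0, 150.0, 420.0, 260.0])
overlaps = iou(anchors, gt_box)
labels = np.full(len(anchors), -1)                    # -1 = ignored during training
labels[overlaps >= 0.7] = 1                           # positive proposals
labels[overlaps < 0.3] = 0                            # background
print((labels == 1).sum(), "positive anchors out of", len(anchors))
```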
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_11", "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_19", "@cite_23", "@cite_2", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "", "2579985080", "2796347433", "2194775991", "2570343428", "", "2963037989", "2565639579", "2193145675", "2613718673", "", "2088049833", "2102605133" ], "abstract": [ "", "The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.", "We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. 
First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that dont have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.", "", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. 
Code will be made publicly available.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. 
The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn." ] }
1907.11830
2966222024
360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications.
: Object detection in spherical images is an emerging task in computer vision, and several efforts @cite_32 @cite_15 @cite_31 have been made to push it forward. @cite_15 utilize network distillation. This approach applies a regular CNN to a specific tangent plane whose origin is aligned with the object center in order to generate region proposals. They construct a synthetic dataset by projecting objects from 2D images onto a sphere: for each image in the dataset, they select a single bounding box and project it onto the 180th meridian of the sphere at different polar angles. @cite_31 exploit a perspective-projection-based detector on a real-world dataset. However, they annotate the objects with rectangular regions on the ERP, which would in fact be distorted on the sphere. Meanwhile, @cite_32 attach rendered 3D car images to real-world omnidirectional images to create the synthetic FlyingCars dataset. To handle the distortion in the ERP, they employ spherical convolution and apply it to a vanilla SSD.
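The reprojection step that these works build towards can be illustrated with a short routine that renders a distortion-free perspective crop from an ERP panorama for a proposed viewing direction; such a crop can then be fed to an ordinary detector or refinement head. The field of view, output resolution and nearest-neighbour sampling below are illustrative assumptions, not the implementation of any cited method.

```python
# Sketch of rendering a distortion-free perspective crop from an ERP panorama
# for a proposed viewing direction: the kind of reprojection that lets a
# regular detector refine a region found on the ERP.
import numpy as np

def erp_to_perspective(erp, lat0, lon0, fov_deg=90.0, out_size=128):
    h, w = erp.shape[:2]
    f = (out_size / 2) / np.tan(np.deg2rad(fov_deg) / 2)     # pinhole focal length
    u, v = np.meshgrid(np.arange(out_size) - out_size / 2 + 0.5,
                       np.arange(out_size) - out_size / 2 + 0.5)
    # Orthonormal camera frame looking at (lat0, lon0): forward, east, north.
    forward = np.array([np.cos(lat0) * np.cos(lon0), np.cos(lat0) * np.sin(lon0), np.sin(lat0)])
    right = np.array([-np.sin(lon0), np.cos(lon0), 0.0])
    up = np.array([-np.sin(lat0) * np.cos(lon0), -np.sin(lat0) * np.sin(lon0), np.cos(lat0)])
    rays = f * forward + u[..., None] * right - v[..., None] * up
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    lat = np.arcsin(rays[..., 2])                            # back to spherical coordinates
    lon = np.arctan2(rays[..., 1], rays[..., 0])
    rows = np.clip(((0.5 - lat / np.pi) * h).astype(int), 0, h - 1)
    cols = ((lon / (2 * np.pi) + 0.5) % 1.0 * w).astype(int) % w
    return erp[rows, cols]                                   # nearest-neighbour lookup

erp = np.random.rand(256, 512, 3)                            # stand-in panorama
crop = erp_to_perspective(erp, lat0=np.deg2rad(40), lon0=np.deg2rad(-60))
print(crop.shape)                                            # (128, 128, 3)
```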
{ "cite_N": [ "@cite_15", "@cite_31", "@cite_32" ], "mid": [ "2963609011", "2804322746", "2895696451" ], "abstract": [ "While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield “flat\" filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. We validate our approach compared to several alternative methods in terms of both raw CNN output accuracy as well as applying a state-of-the-art “flat\" object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.", "We introduced a high-resolution equirectangular panorama (360-degree, virtual reality) dataset for object detection and propose a multi-projection variant of YOLO detector. The main challenge with equirectangular panorama image are i) the lack of annotated training data, ii) high-resolution imagery and iii) severe geometric distortions of objects near the panorama projection poles. In this work, we solve the challenges by i) using training examples available in the \"conventional datasets\" (ImageNet and COCO), ii) employing only low-resolution images that require only moderate GPU computing power and memory, and iii) our multi-projection YOLO handles projection distortions by making multiple stereographic sub-projections. In our experiments, YOLO outperforms the other state-of-art detector, Faster RCNN and our multi-projection YOLO achieves the best accuracy with low-resolution input.", "Omnidirectional cameras offer great benefits over classical cameras wherever a wide field of view is essential, such as in virtual reality applications or in autonomous robots. Unfortunately, standard convolutional neural networks are not well suited for this scenario as the natural projection surface is a sphere which cannot be unwrapped to a plane without introducing significant distortions, particularly in the polar regions. In this work, we present SphereNet, a novel deep learning framework which encodes invariance against such distortions explicitly into convolutional neural networks. Towards this goal, SphereNet adapts the sampling locations of the convolutional filters, effectively reversing distortions, and wraps the filters around the sphere. By building on regular convolutions, SphereNet enables the transfer of existing perspective convolutional neural network models to the omnidirectional case. We demonstrate the effectiveness of our method on the tasks of image classification and object detection, exploiting two newly created semi-synthetic and real-world omnidirectional datasets." ] }
1907.11357
2965380104
As a pixel-level prediction task, semantic segmentation needs large computational cost with enormous parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it is significant to make a tradeoff between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates sufficient receptive field and densely utilizes the contextual information. Experiments on Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model and postprocessing, it achieves 70.1 Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card.
Real-time semantic segmentation networks must strike a trade-off between high-quality prediction and high inference speed. ENet @cite_8 is the first network designed specifically for real-time operation; it trims a large number of convolution filters to reduce computation. ICNet @cite_21 proposes an image cascade network that incorporates multi-resolution branches. ERFNet @cite_20 uses residual connections and factorized convolutions to remain efficient while retaining remarkable accuracy. More recently, ESPNet @cite_29 introduces an efficient spatial pyramid (ESP) module, which brings great improvements in both speed and performance. BiSeNet @cite_2 proposes two paths to combine spatial information and context information. These networks successfully trade off speed against performance, but there is still considerable room for further improvement.
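As an example of the kind of building block these networks rely on, the following minimal PyTorch sketch implements a factorized residual block in the spirit of ERFNet's design, in which each 3x3 convolution is replaced by a 3x1 followed by a 1x3 convolution. The channel count, dilation rate and normalization placement are placeholders rather than the exact configuration of any cited network.

```python
# Minimal PyTorch sketch of a factorized residual block in the spirit of
# ERFNet's non-bottleneck-1D design: each 3x3 convolution is replaced by a
# 3x1 followed by a 1x3 convolution, keeping the receptive field while
# reducing the number of parameters.
import torch
import torch.nn as nn

class FactorizedResidualBlock(nn.Module):
    def __init__(self, channels, dilation=1):
        super().__init__()
        d = dilation
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (3, 1), padding=(d, 0), dilation=(d, 1)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, d), dilation=(1, d)),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.conv(x))       # residual connection

x = torch.randn(1, 64, 128, 256)                 # e.g. a downsampled feature map
print(FactorizedResidualBlock(64, dilation=2)(x).shape)   # torch.Size([1, 64, 128, 256])
```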
{ "cite_N": [ "@cite_8", "@cite_29", "@cite_21", "@cite_2", "@cite_20" ], "mid": [ "2419448466", "2790933182", "2611259176", "2886934227", "2762439315" ], "abstract": [ "The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18x faster, requires 75x less FLOPs, has 79x less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.", "We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8 less. We evaluated EPSNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively.", "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.", "Semantic segmentation requires both rich spatial information and sizeable receptive field. However, modern approaches usually compromise spatial resolution to achieve real-time inference speed, which leads to poor performance. In this paper, we address this dilemma with a novel Bilateral Segmentation Network (BiSeNet). We first design a Spatial Path with a small stride to preserve the spatial information and generate high-resolution features. Meanwhile, a Context Path with a fast downsampling strategy is employed to obtain sufficient receptive field. On top of the two paths, we introduce a new Feature Fusion Module to combine features efficiently. 
The proposed architecture makes a right balance between the speed and segmentation performance on Cityscapes, CamVid, and COCO-Stuff datasets. Specifically, for a 2048 ( ) 1024 input, we achieve 68.4 Mean IOU on the Cityscapes test dataset with speed of 105 FPS on one NVIDIA Titan XP card, which is significantly faster than the existing methods with comparable performance.", "Semantic segmentation is a challenging task that addresses most of the perception needs of intelligent vehicles (IVs) in an unified way. Deep neural networks excel at this task, as they can be trained end-to-end to accurately classify multiple object categories in an image at pixel level. However, a good tradeoff between high quality and computational resources is yet not present in the state-of-the-art semantic segmentation approaches, limiting their application in real vehicles. In this paper, we propose a deep architecture that is able to run in real time while providing accurate semantic segmentation. The core of our architecture is a novel layer that uses residual connections and factorized convolutions in order to remain efficient while retaining remarkable accuracy. Our approach is able to run at over 83 FPS in a single Titan X, and 7 FPS in a Jetson TX1 (embedded device). A comprehensive set of experiments on the publicly available Cityscapes data set demonstrates that our system achieves an accuracy that is similar to the state of the art, while being orders of magnitude faster to compute than other architectures that achieve top precision. The resulting tradeoff makes our model an ideal approach for scene understanding in IV applications. The code is publicly available at: https: github.com Eromera erfnet" ] }
1907.11357
2965380104
As a pixel-level prediction task, semantic segmentation needs large computational cost with enormous parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it is significant to make a tradeoff between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates sufficient receptive field and densely utilizes the contextual information. Experiments on Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model and postprocessing, it achieves 70.1 Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card.
Dilated convolution @cite_23 inserts gaps (zeros) between the taps of a standard convolution kernel, which enlarges the effective receptive field without increasing the number of parameters; hence it is widely used in semantic segmentation models. The DeepLab series @cite_40 @cite_35 @cite_3 introduces an atrous spatial pyramid pooling (ASPP) module, which employs multiple parallel filters with different dilation rates to collect multi-scale information. DenseASPP @cite_39 connects a set of dilated convolution layers in a dense manner to generate dense multi-scale feature representations. Most state-of-the-art networks in semantic segmentation exploit dilated convolution, which attests to its effectiveness in pixel-level prediction tasks.
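A minimal ASPP-style module can be sketched as a set of parallel 3x3 convolutions with different dilation rates whose outputs are concatenated and fused by a 1x1 projection. The dilation rates and channel sizes below are placeholders, and the image-level pooling branch used by DeepLabv3 is omitted for brevity.

```python
# Minimal PyTorch sketch of an ASPP-style module: parallel 3x3 convolutions
# with different dilation rates capture multi-scale context at the same
# resolution, and their outputs are concatenated and fused.
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Every branch keeps the spatial size because padding == dilation for k=3.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 256, 64, 128)           # backbone feature map
print(SimpleASPP(256, 128)(feat).shape)       # torch.Size([1, 128, 64, 128])
```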
{ "cite_N": [ "@cite_35", "@cite_3", "@cite_39", "@cite_40", "@cite_23" ], "mid": [ "2630837129", "2787091153", "2799213142", "2412782625", "1610060839" ], "abstract": [ "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at this https URL .", "Semantic image segmentation is a basic street scene understanding task in autonomous driving, where each pixel in a high resolution image is categorized into a set of semantic labels. Unlike other scenarios, objects in autonomous driving scene exhibit very large scale changes, which poses great challenges for high-level feature representation in a sense that multi-scale information must be correctly encoded. To remedy this problem, atrous convolution[14]was introduced to generate features with larger receptive fields without sacrificing spatial resolution. Built upon atrous convolution, Atrous Spatial Pyramid Pooling (ASPP)[2] was proposed to concatenate multiple atrous-convolved features using different dilation rates into a final feature representation. Although ASPP is able to generate multi-scale features, we argue the feature resolution in the scale-axis is not dense enough for the autonomous driving scenario. 
To this end, we propose Densely connected Atrous Spatial Pyramid Pooling (DenseASPP), which connects a set of atrous convolutional layers in a dense way, such that it generates multi-scale features that not only cover a larger scale range, but also cover that scale range densely, without significantly increasing the model size. We evaluate DenseASPP on the street scene benchmark Cityscapes[4] and achieve state-of-the-art performance.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "The purpose of this paper is to present a real-time algorithm for the analysis of time-varying signals with the help of the wavelet transform. We shall briefly describe this transformation in the following. For more details, we refer to the literature [1]." ] }
1907.11357
2965380104
As a pixel-level prediction task, semantic segmentation needs large computational cost with enormous parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it is significant to make a tradeoff between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates sufficient receptive field and densely utilizes the contextual information. Experiments on Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model and postprocessing, it achieves 70.1 Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card.
Convolution factorization divides a standard convolution into several cheaper steps to reduce computational cost and memory, and is extensively adopted in lightweight CNN models. The Inception family @cite_11 @cite_25 @cite_1 replaces convolutions with large kernels by several small-sized convolutions while maintaining the size of the receptive field. Xception @cite_36 and MobileNet @cite_0 use depth-wise separable convolutions to reduce the amount of computation with only a slight drop in performance. MobileNetV2 @cite_15 proposes an inverted residual block with linear bottlenecks to further improve performance. ShuffleNet @cite_17 applies point-wise group convolution with a channel shuffle operation to enable information exchange between different groups of channels.
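The two factorizations named above are short enough to show directly: a depth-wise separable convolution (a depth-wise 3x3 followed by a point-wise 1x1, as in MobileNet) and ShuffleNet's channel shuffle, which permutes channels so that subsequent group convolutions see information from every group. Channel sizes in this sketch are placeholders; the parameter counts illustrate the saving.

```python
# Minimal PyTorch sketch of two factorizations mentioned above: a depth-wise
# separable convolution (MobileNet-style) and the channel shuffle operation
# used by ShuffleNet to mix information across channel groups.
import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),  # depth-wise 3x3
        nn.Conv2d(in_ch, out_ch, 1, bias=False),                          # point-wise 1x1
    )

def channel_shuffle(x, groups):
    n, c, h, w = x.shape
    # (n, groups, c // groups, h, w) -> swap the two channel axes -> flatten back
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

standard = nn.Conv2d(64, 128, 3, padding=1, bias=False)
separable = depthwise_separable(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print("standard 3x3:", count(standard), "params")    # 128*64*3*3 = 73728
print("separable   :", count(separable), "params")   # 64*3*3 + 128*64 = 8768

x = torch.randn(2, 64, 32, 32)
assert separable(x).shape == standard(x).shape
assert channel_shuffle(x, groups=4).shape == x.shape
```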
{ "cite_N": [ "@cite_36", "@cite_1", "@cite_17", "@cite_0", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2531409750", "2274287116", "2963125010", "2612445135", "2963163009", "2183341477", "" ], "abstract": [ "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.", "Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge", "We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8 ) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. 
On an ARM-based mobile device, ShuffleNet achieves 13× actual speedup over AlexNet while maintaining comparable accuracy.", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.", "In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. MobileNetV2 is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.", "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters.
With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set.", "" ] }
1907.11484
2966798122
In recent years, object detection has shown impressive results using supervised deep learning, but it remains challenging in a cross-domain environment. The variations of illumination, style, scale, and appearance in different domains can seriously affect the performance of detection models. Previous works use adversarial training to align global features across the domain shift and to achieve image information transfer. However, such methods do not effectively match the distribution of local features, resulting in limited improvement in cross-domain object detection. To solve this problem, we propose a multi-level domain adaptive model to simultaneously align the distributions of local-level features and global-level features. We evaluate our method with multiple experiments, including adverse weather adaptation, synthetic data adaptation, and cross camera adaptation. In most object categories, the proposed method achieves superior performance against state-of-the-art techniques, which demonstrates the effectiveness and robustness of our method.
Domain adaptation is a technique that adapts a model trained in one domain to another. Many related works try to define and minimize the distance between the feature distributions of data from different domains @cite_6 @cite_3 @cite_0 @cite_23 @cite_14 @cite_11. For example, the deep domain confusion (DDC) model @cite_14 explores domain-invariant representations by minimizing the maximum mean discrepancy (MMD) of the feature distributions. Long et al. propose to adapt all task-specific layers and explore multiple-kernel variants of MMD @cite_3. Ganin and Lempitsky use adversarial learning to achieve domain adaptation, learning the distance with a discriminator @cite_6. Saito et al. propose to maximize the discrepancy between the outputs of two classifiers to align distributions @cite_10. Most of the works mentioned above are designed for classification or segmentation.
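To illustrate the adversarial-alignment idea attributed to Ganin and Lempitsky above, here is a minimal sketch (assuming PyTorch; names and the fixed lambda value are illustrative) of a gradient reversal layer: the forward pass is the identity, while the backward pass flips the gradient sign so the feature extractor is trained to confuse a domain classifier.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient by -lambda
    in the backward pass, so features are pushed to be domain-indistinguishable."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient w.r.t. lambd

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Typical use: domain_logits = domain_classifier(grad_reverse(features, lambd));
# a single domain loss then trains the classifier to separate domains while
# training the feature extractor to make them indistinguishable.
```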
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_6", "@cite_0", "@cite_23", "@cite_10", "@cite_11" ], "mid": [ "1565327149", "", "2963826681", "2768591600", "2593768305", "2962687275", "2798681837" ], "abstract": [ "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.", "", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "In human learning, it is common to use multiple sources of information jointly. However, most existing feature learning approaches learn from only a single task. In this paper, we propose a novel multi-task deep network to learn generalizable high-level visual representations. Since multitask learning requires annotations for multiple properties of the same training instance, we look to synthetic images to train our network. To overcome the domain difference between real and synthetic data, we employ an unsupervised feature space domain adaptation method based on adversarial learning. Given an input synthetic RGB image, our network simultaneously predicts its surface normal, depth, and instance contour, while also minimizing the feature space domain differences between real and synthetic data. Through extensive experiments, we demonstrate that our network learns more transferable representations compared to single-task baselines. 
Our learned representation produces state-of-the-art transfer learning results on PASCAL VOC 2007 classification and 2012 detection.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.", "In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions between different domains, which is difficult because of each domain's characteristics. To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The codes are available at https: github.com mil-tokyo MCD_DA", "In this paper, we propose a new unsupervised domain adaptation approach called Collaborative and Adversarial Network (CAN) through domain-collaborative and domain-adversarial training of neural networks. We add several domain classifiers on multiple CNN feature extraction blocks1, in which each domain classifier is connected to the hidden representations from one block and one loss function is defined based on the hidden presentation and the domain labels (e.g., source and target). 
We design a new loss function by integrating the losses from all blocks in order to learn domain informative representations from lower blocks through collaborative learning and learn domain uninformative representations from higher blocks through adversarial learning. We further extend our CAN method as Incremental CAN (iCAN), in which we iteratively select a set of pseudo-labelled target samples based on the image classifier and the last domain classifier from the previous training epoch and re-train our CAN model by using the enlarged training set. Comprehensive experiments on two benchmark datasets Office and ImageCLEF-DA clearly demonstrate the effectiveness of our newly proposed approaches CAN and iCAN for unsupervised domain adaptation." ] }
1907.11484
2966798122
In recent years, object detection has shown impressive results using supervised deep learning, but it remains challenging in a cross-domain environment. The variations of illumination, style, scale, and appearance in different domains can seriously affect the performance of detection models. Previous works use adversarial training to align global features across the domain shift and to achieve image information transfer. However, such methods do not effectively match the distribution of local features, resulting in limited improvement in cross-domain object detection. To solve this problem, we propose a multi-level domain adaptive model to simultaneously align the distributions of local-level features and global-level features. We evaluate our method with multiple experiments, including adverse weather adaptation, synthetic data adaptation, and cross camera adaptation. In most object categories, the proposed method achieves superior performance against state-of-the-art techniques, which demonstrates the effectiveness and robustness of our method.
Huang et al. propose that aligning the distributions of activations at intermediate layers can alleviate covariate shift @cite_20, which is partly similar to our idea. However, instead of using a least squares generative adversarial network (LSGAN) @cite_18 loss to align distributions for semantic segmentation, we use a multi-level image patch loss for object detection.
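For reference, a minimal sketch of the generic least-squares (LSGAN) objectives referred to above, which replace the sigmoid cross-entropy of a standard GAN discriminator with a quadratic penalty; this shows only the generic LSGAN formulation, not the multi-level image patch loss proposed here, and the target labels a, b, c are the usual illustrative choices.

```python
import torch

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Least-squares discriminator loss: push outputs on real (source-domain)
    samples toward b and outputs on fake (target-domain) samples toward a."""
    return 0.5 * ((d_real - b) ** 2).mean() + 0.5 * ((d_fake - a) ** 2).mean()

def lsgan_g_loss(d_fake, c=1.0):
    """Least-squares generator / feature-extractor loss: push fake outputs toward c."""
    return 0.5 * ((d_fake - c) ** 2).mean()
```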
{ "cite_N": [ "@cite_18", "@cite_20" ], "mid": [ "2593414223", "2895168809" ], "abstract": [ "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson X2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on LSUN and CIFAR-10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.", "We introduce a layer-wise unsupervised domain adaptation approach for semantic segmentation. Instead of merely matching the output distributions of the source and target domains, our approach aligns the distributions of activations of intermediate layers. This scheme exhibits two key advantages. First, matching across intermediate layers introduces more constraints for training the network in the target domain, making the optimization problem better conditioned. Second, the matched activations at each layer provide similar inputs to the next layer for both training and adaptation, and thus alleviate covariate shift. We use a Generative Adversarial Network (or GAN) to align activation distributions. Experimental results show that our approach achieves state-of-the-art results on a variety of popular domain adaptation tasks, including (1) from GTA to Cityscapes for semantic segmentation, (2) from SYNTHIA to Cityscapes for semantic segmentation, and (3) adaptations on USPS and MNIST for image classification (The website of this paper is https: rsents.github.io dam.html)." ] }
1907.11468
2965936489
Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature.
Neuro-symbolic approaches @cite_30 express the internal or output structure of the learner using logic. First-Order Logic (FOL) is often selected as the declarative framework for the knowledge because of its flexibility and expressive power. This class of methodologies is rooted in previous work from the Statistical Relational Learning community, which developed frameworks for performing logic inference in the presence of uncertainty. For example, Markov Logic Networks @cite_2 and Probabilistic Soft Logic @cite_19 integrate FOL and graphical models. A common solution to integrate logic reasoning with uncertainty and deep learning relies on deep networks to approximate the FOL predicates, while the overall architecture is optimized end-to-end by relaxing the FOL into a differentiable form, which translates into a set of constraints. This approach is followed, with minor variants, by Semantic Based Regularization @cite_28, the Lyrics framework @cite_17, Logic Tensor Networks @cite_14, the Semantic Loss @cite_0, and DeepProbLog @cite_4, which extends the ProbLog @cite_13 @cite_15 framework with predicates approximated by jointly learned functions.
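As a small illustration of the relaxation step described above, one possible fuzzy translation of a simple FOL rule is sketched below in LaTeX; the choice of the product t-norm and its residuum is an assumption made for the example, not the specific choice of any of the cited frameworks.

```latex
% Relaxation of the rule \forall x\,(A(x) \rightarrow B(x)) over a sample X,
% where f_A, f_B \in [0,1] are the learned neural approximations of the predicates.
% Using the product t-norm and its residuum (Goguen implication):
\Phi(X) \;=\; \prod_{x \in X} \bigl( f_A(x) \Rightarrow f_B(x) \bigr),
\qquad
u \Rightarrow v \;=\;
\begin{cases} 1 & \text{if } u \le v \\ v/u & \text{otherwise.} \end{cases}
% The learner is then asked to make \Phi(X) as close as possible to 1 (full truth).
```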
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_4", "@cite_28", "@cite_0", "@cite_19", "@cite_2", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "1545139845", "2963277062", "2804796373", "2201744460", "2772282934", "2963572185", "1977970897", "", "1824971879", "2921865277" ], "abstract": [ "1. Introduction and Overview.- 1.1 Why Integrate Neurons and Symbols?.- 1.2 Strategies of Neural-Symbolic Integration.- 1.3 Neural-Symbolic Learning Systems.- 1.4 A Simple Example.- 1.5 How to Read this Book.- 1.6 Summary.- 2. Background.- 2.1 General Preliminaries.- 2.2 Inductive Learning.- 2.3 Neural Networks.- 2.3.1 Architectures.- 2.3.2 Learning Strategy.- 2.3.3 Recurrent Networks.- 2.4 Logic Programming.- 2.4.1 What is Logic Programming?.- 2.4.2 Fixpoints and Definite Programs.- 2.5 Nonmonotonic Reasoning.- 2.5.1 Stable Models and Acceptable Programs.- 2.6 Belief Revision.- 2.6.1 Truth Maintenance Systems.- 2.6.2 Compromise Revision.- I. Knowledge Refinement in Neural Networks.- 3. Theory Refinement in Neural Networks.- 3.1 Inserting Background Knowledge.- 3.2 Massively Parallel Deduction.- 3.3 Performing Inductive Learning.- 3.4 Adding Classical Negation.- 3.5 Adding Met alevel Priorities.- 3.6 Summary and Further Reading.- 4. Experiments on Theory Refinement.- 4.1 DNA Sequence Analysis.- 4.2 Power Systems Fault Diagnosis.- 4.3.Discussion.- 4.4.Appendix.- II. Knowledge Extraction from Neural Networks.- 5. Knowledge Extraction from Trained Networks.- 5.1 The Extraction Problem.- 5.2 The Case of Regular Networks.- 5.2.1 Positive Networks.- 5.2.2 Regular Networks.- 5.3 The General Case Extraction.- 5.3.1 Regular Subnetworks.- 5.3.2 Knowledge Extraction from Subnetworks.- 5.3.3 Assembling the Final Rule Set.- 5.4 Knowledge Representation Issues.- 5.5 Summary and Further Reading.- 6. Experiments on Knowledge Extraction.- 6.1 Implementation.- 6.2 The Monk's Problems.- 6.3 DNA Sequence Analysis.- 6.4 Power Systems Fault Diagnosis.- 6.5 Discussion.- III. Knowledge Revision in Neural Networks.- 7. Handling Inconsistencies in Neural Networks.- 7.1 Theory Revision in Neural Networks.- 7.1.1The Equivalence with Truth Maintenance Systems.- 7.1.2Minimal Learning.- 7.2 Solving Inconsistencies in Neural Networks.- 7.2.1 Compromise Revision.- 7.2.2 Foundational Revision.- 7.2.3 Nonmonotonic Theory Revision.- 7.3 Summary of the Chapter.- 8. Experiments on Handling Inconsistencies.- 8.1 Requirements Specifications Evolution as Theory Refinement.- 8.1.1Analysing Specifications.- 8.1.2Revising Specifications.- 8.2 The Automobile Cruise Control System.- 8.2.1Knowledge Insertion.- 8.2.2Knowledge Revision: Handling Inconsistencies.- 8.2.3Knowledge Extraction.- 8.3 Discussion.- 8.4 Appendix.- 9. Neural-Symbolic Integration: The Road Ahead.- 9.1 Knowledge Extraction.- 9.2 Adding Disjunctive Information.- 9.3 Extension to the First-Order Case.- 9.4 Adding Modalities.- 9.5 New Preference Relations.- 9.6 A Proof Theoretical Approach.- 9.7 The \"Forbidden Zone\" [Amax, Amin].- 9.8 Acceptable Programs and Neural Networks.- 9.9 Epilogue.", "Semantic Image Interpretation (SII) is the task of extracting structured semantic descriptions from images. It is widely agreed that the combined use of visual data and background knowledge is of great importance for SII. Recently, Statistical Relational Learning (SRL) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. 
Logic Tensor Networks (LTNs) are a SRL framework which integrates neural networks with first-order fuzzy logic to allow (i) efficient learning from noisy data in the presence of logical constraints, and (ii) reasoning with logical formulas describing general properties of the data. In this paper, we develop and apply LTNs to two of the main tasks of SII, namely, the classification of an image's bounding boxes and the detection of the relevant part-of relations between objects. To the best of our knowledge, this is the first successful application of SRL to such SII tasks. The proposed approach is evaluated on a standard image processing benchmark. Experiments show that background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-theart Fast Region-based Convolutional Neural Networks (Fast R-CNN). Moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the training data.", "We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports both symbolic and subsymbolic representations and inference, 1) program induction, 2) probabilistic (logic) programming, and 3) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.", "Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.", "This paper develops a novel methodology for using symbolic knowledge in deep learning. From first principles, we derive a semantic loss function that bridges between neural output vectors and logical constraints. This loss function captures how close the neural network is to satisfying the constraints on its output. 
An experimental evaluation shows that our semantic loss function effectively guides the learner to achieve (near-)state-of-the-art results on semi-supervised multi-class classification. Moreover, it significantly increases the ability of the neural network to predict structured objects, such as rankings and paths. These discrete concepts are tremendously difficult to learn, and benefit from a tight integration of deep learning and symbolic reasoning methods.", "A fundamental challenge in developing high-impact machine learning technologies is balancing the need to model rich, structured domains with the ability to scale to big data. Many important problem areas are both richly structured and large scale, from social and biological networks, to knowledge graphs and the Web, to images, video, and natural language. In this paper, we introduce two new formalisms for modeling structured data, and show that they can both capture rich structure and scale to big data. The first, hingeloss Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model that generalizes different approaches to convex inference. We unite three approaches from the randomized algorithms, probabilistic graphical models, and fuzzy logic communities, showing that all three lead to the same inference objective. We then define HL-MRFs by generalizing this unified objective. The second new formalism, probabilistic soft logic (PSL), is a probabilistic programming language that makes HL-MRFs easy to define using a syntax based on first-order logic. We introduce an algorithm for inferring most-probable variable assignments (MAP inference) that is much more scalable than general-purpose convex optimization methods, because it uses message passing to take advantage of sparse dependency structures. We then show how to learn the parameters of HL-MRFs. The learned HL-MRFs are as accurate as analogous discrete models, but much more scalable. Together, these algorithms enable HL-MRFs and PSL to model rich, structured data at scales not previously possible.", "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.", "", "We introduce ProbLog, a probabilistic extension of Prolog. A ProbLog program defines a distribution over logic programs by specifying for each clause the probability that it belongs to a randomly sampled program, and these probabilities are mutually independent. The semantics of ProbLog is then defined by the success probability of a query, which corresponds to the probability that the query succeeds in a randomly sampled program. The key contribution of this paper is the introduction of an effective solver for computing success probabilities. 
It essentially combines SLD-resolution with methods for computing the probability of Boolean formulae. Our implementation further employs an approximation algorithm that combines iterative deepening with binary decision diagrams. We report on experiments in the context of discovering links in real biological networks, a demonstration of the practical usefulness of the approach.", "In spite of the amazing results obtained by deep learning in many applications, a real intelligent behavior of an agent acting in a complex environment is likely to require some kind of higher-level symbolic inference. Therefore, there is a clear need for the definition of a general and tight integration between low-level tasks, processing sensorial data that can be effectively elaborated using deep learning techniques, and the logic reasoning that allows humans to take decisions in complex environments. This paper presents LYRICS, a generic interface layer for AI, which is implemented in TersorFlow (TF). LYRICS provides an input language that allows to define arbitrary First Order Logic (FOL) background knowledge. The predicates and functions of the FOL knowledge can be bound to any TF computational graph, and the formulas are converted into a set of real-valued constraints, which participate to the overall optimization problem. This allows to learn the weights of the learners, under the constraints imposed by the prior knowledge. The framework is extremely general as it imposes no restrictions in terms of which models or knowledge can be integrated. In this paper, we show the generality of the approach showing some use cases of the presented language, including generative models, logic reasoning, model checking and supervised learning." ] }
1907.11468
2965936489
Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature.
Within this class of approaches, it is of fundamental importance to define how to perform the fuzzy relaxation of the formulas in the knowledge base. For instance, @cite_31 introduces a learning framework where formulas are converted according to the Łukasiewicz t-norm and t-conorm. @cite_18 also proposes to convert the formulas according to Łukasiewicz logic; however, this paper exploits the weak conjunction in place of the t-norms to obtain convex functional constraints. A more practical approach has been considered in Semantic Based Regularization (SBR), where all the fundamental t-norms have been evaluated on different learning tasks @cite_28. However, a unified principle for expressing the cost function to be optimized with respect to the selected fuzzy logic does not emerge from this prior work. For example, all the aforementioned approaches rely on a fixed loss function that linearly measures the distance of the formulas from the value 1. Even if this may be justified from a logical point of view ( @math ), it is not clear whether this choice is principled from a learning standpoint, since deep learning approaches use very different loss functions to enforce the fitting of the supervised data.
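For concreteness, a small sketch (notation illustrative) of the Łukasiewicz connectives mentioned above and of the fixed linear loss that penalizes the distance of a relaxed formula from full truth.

```latex
% Lukasiewicz t-norm and t-conorm used in the relaxation:
t_L(x, y) = \max(0,\, x + y - 1), \qquad s_L(x, y) = \min(1,\, x + y).
% Fixed linear loss on the truth degree \varphi(f) \in [0,1] of a relaxed formula:
L(\varphi) = 1 - \varphi(f),
% which vanishes exactly when the formula is fully satisfied, i.e. \varphi(f) = 1.
```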
{ "cite_N": [ "@cite_28", "@cite_31", "@cite_18" ], "mid": [ "2201744460", "2621167405", "2892114952" ], "abstract": [ "Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.", "This paper presents a revision of Real Logic and its implementation with Logic Tensor Networks and its application to Semantic Image Interpretation. Real Logic is a framework where learning from numerical data and logical reasoning are integrated using first order logic syntax. The symbols of the signature of Real Logic are interpreted in the data-space, i.e, on the domain of real numbers. The integration of learning and reasoning obtained in Real Logic allows us to formalize learning as approximate satisfiability in the presence of logical constraints, and to perform inference on symbolic and numerical data. After introducing a refined version of the formalism, we describe its implementation into Logic Tensor Networks which uses deep learning within Google's T ensor F low ™. We evaluate LTN on the task of classifying objects and their parts in images, where we combine state-of-the-art-object detectors with a part-of ontology. LTN outperforms the state-of-the-art on object classification, and improves the performances on part-of relation detection with respect to a rule-based baseline.", "In this paper, we introduce the convex fragment of Łukasiewicz logic and discuss its possible applications in different learning schemes. The provided theoretical results are highly general because they can be exploited in any learning framework involving logical constraints. The method is of particular interest since the fragment guarantees to deal with convex constraints, which are shown to be equivalent to a set of linear constraints. Within this framework, we are able to formulate learning with kernel machines as well as collective classification as a quadratic programming problem." ] }
1907.11468
2965936489
Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature.
From a learning point of view, different quantifier conversions can also be taken into account and validated. For instance, in @cite_28 the arithmetic mean and the maximum operator have been used to convert the universal and existential quantifiers, respectively. Different possibilities have been considered for the universal quantifier in @cite_14, while the existential quantifier depends on this choice via strong negation and the De Morgan law. The arithmetic mean operator has been shown to achieve better performance in the conversion of the universal quantifier @cite_14, with the existential quantifier implemented by Skolemization. However, the universal and existential quantifiers can be thought of as a generalized AND and OR, respectively. Therefore, converting the quantifiers using a mean operator has no direct justification within a logic theory.
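The two conversions of the universal quantifier discussed above can be written explicitly as follows (a sketch; X denotes the available sample and f_p the learned approximation of predicate p).

```latex
% Universal quantifier as a generalized AND (minimum) versus the arithmetic mean:
\forall x\, p(x) \;\longmapsto\; \min_{x \in X} f_p(x)
\qquad \text{vs.} \qquad
\forall x\, p(x) \;\longmapsto\; \frac{1}{|X|} \sum_{x \in X} f_p(x),
% with the existential quantifier obtained by strong negation and De Morgan,
% e.g. \exists x\, p(x) \;\longmapsto\; \max_{x \in X} f_p(x).
```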
{ "cite_N": [ "@cite_28", "@cite_14" ], "mid": [ "2201744460", "2963277062" ], "abstract": [ "Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.", "Semantic Image Interpretation (SII) is the task of extracting structured semantic descriptions from images. It is widely agreed that the combined use of visual data and background knowledge is of great importance for SII. Recently, Statistical Relational Learning (SRL) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. Logic Tensor Networks (LTNs) are a SRL framework which integrates neural networks with first-order fuzzy logic to allow (i) efficient learning from noisy data in the presence of logical constraints, and (ii) reasoning with logical formulas describing general properties of the data. In this paper, we develop and apply LTNs to two of the main tasks of SII, namely, the classification of an image's bounding boxes and the detection of the relevant part-of relations between objects. To the best of our knowledge, this is the first successful application of SRL to such SII tasks. The proposed approach is evaluated on a standard image processing benchmark. Experiments show that background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-theart Fast Region-based Convolutional Neural Networks (Fast R-CNN). Moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the training data." ] }
1907.11322
2966078947
By expanding the connection of objects to the Internet and their entry into human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or a few attacks, which precludes their use. Therefore, the need for a security protocol for the Internet of Things (IoT) has not yet been met. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, we first show that this protocol also does not provide sufficient security and so it is not recommended for use in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts all the secrets of the protocol with only two runs of communication with the target tag. In addition, an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains was recently proposed, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
A key factor in expanding the IoT is building users' confidence that their privacy and security will be preserved: without proper security in its infrastructure, damage to IoT-based equipment, loss of personal information, loss of privacy, and even disclosure of economic and other data become highly likely, which may make the technology unusable in critical applications. Ronen and Shamir in @cite_6 pointed out that if security is not taken into account in IoT-based infrastructure, the technology could threaten the future of the world like a nuclear bomb.
{ "cite_N": [ "@cite_6" ], "mid": [ "2686848947" ], "abstract": [ "Within the next few years, billions of IoT devices will densely populate our cities. In this paper we describe a new type of threat in which adjacent IoT devices will infect each other with a worm that will rapidly spread over large areas, provided that the density of compatible IoT devices exceeds a certain critical mass. In particular, we developed and verified such an infection using the popular Philips Hue smart lamps as a platform. The worm spreads by jumping directly from one lamp to its neighbors, using only their built-in ZigBee wireless connectivity and their physical proximity. The attack can start by plugging in a single infected bulb anywhere in the city, and then catastrophically spread everywhere within minutes. It enables the attacker to turn all the city lights on or off, to permanently brick them, or to exploit them in a massive DDOS attack. To demonstrate the risks involved, we use results from percolation theory to estimate the critical mass of installed devices for a typical city such as Paris whose area is about 105 square kilometers: The chain reaction will fizzle if there are fewer than about 15,000 randomly located smart lamps in the whole city, but will spread everywhere when the number exceeds this critical mass (which had almost certainly been surpassed already). To make such an attack possible, we had to find a way to remotely yank already installed lamps from their current networks, and to perform over-the-air firmware updates. We overcame the first problem by discovering and exploiting a major bug in the implementation of the Touchlink part of the ZigBee Light Link protocol, which is supposed to stop such attempts with a proximity test. To solve the second problem, we developed a new version of a side channel attack to extract the global AES-CCM key (for each device type) that Philips uses to encrypt and authenticate new firmware. We used only readily available equipment costing a few hundred dollars, and managed to find this key without seeing any actual updates. This demonstrates once again how difficult it is to get security right even for a large company that uses standard cryptographic techniques to protect a major product." ] }
1907.11322
2966078947
By expanding the connection of objects to the Internet and their entry into human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or a few attacks, which precludes their use. Therefore, the need for a security protocol for the Internet of Things (IoT) has not yet been met. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, we first show that this protocol also does not provide sufficient security and so it is not recommended for use in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts all the secrets of the protocol with only two runs of communication with the target tag. In addition, an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains was recently proposed, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
Reviewing the proposed mechanisms and designing new models that are compatible with IoT devices are also very important. Since an IoT system can include many objects with limited resources, it requires dedicated protocols to guarantee privacy and security. Therefore, with the further development of the IoT, its security concerns are expected to receive more attention. So far, several security protocols have been proposed to ensure IoT security, e.g. @cite_14 @cite_26 @cite_2 @cite_22 @cite_12; however, most of them have failed to achieve their security goals @cite_8 @cite_4 @cite_0 @cite_30 @cite_9 @cite_29, and various attacks, such as disclosure of the protocol's secret values, DoS, traceability, and impersonation, have been reported against them. The publication of these attacks has advanced protocol design knowledge, and designers now try to make their protocols resistant to the attacks published so far. Unfortunately, newly designed protocols are still being broken, and this field has not yet matured.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_22", "@cite_4", "@cite_8", "@cite_9", "@cite_29", "@cite_0", "@cite_2", "@cite_12" ], "mid": [ "2755191230", "2737663594", "2737418893", "2608333876", "2732131698", "2345110324", "2043127616", "2299880810", "", "2766595485", "2625844534" ], "abstract": [ "In recent years, RFID (radio-frequency identification) systems are widely used in many applications. One of the most important applications for this technology is the Internet of things (IoT). Therefore, researchers have proposed several authentication protocols that can be employed in RFID-based IoT systems, and they have claimed that their protocols can satisfy all security requirements of these systems. However, in RFID-based IoT systems we have mobile readers that can be compromised by the adversary. Due to this attack, the adversary can compromise a legitimate reader and obtain its secrets. So, the protocol designers must consider the security of their proposals even in the reader compromised scenario. In this paper, we consider the security of the ultra-lightweight RFID mutual authentication (ULRMAPC) protocol recently proposed by They claimed that their protocol could be applied in the IoT systems and provide strong security. However, in this paper we show that their protocol is vulnerable to denial of service, reader and tag impersonation and de-synchronization attacks. To provide a solution, we present a new authentication protocol, which is more secure than the ULRMAPC protocol and also can be employed in RFID-based IoT systems.", "IoT (Internet of Things) devices such as sensors have been actively used in 'fogs' to provide critical data during e.g., disaster response scenarios or in-home healthcare. Since IoT devices typically operate in resource-constrained computing environments at the network-edge, data transfer performance to the cloud as well as end-to-end security have to be robust and customizable. In this paper, we present the design and implementation of a middleware featuring \"intermittent\" and \"flexible\" end-to-end security for cloud-fog communications. Intermittent security copes with unreliable network connections, and flexibility is achieved through security configurations that are tailored to application needs. Our experiment results show how our middleware that leverages static pre-shared keys forms a promising solution for delivering light-weight, fast and resource-aware security for a variety of IoT-based applications.", "The Internet of Things (IoT) enables the integration of data from virtual and physical worlds. 1 It involves smart objects that can understand and react to their environment in a variety of industrial, commercial and household settings. 2 As the IoT expands the number of connected devices, there is the potential to allow cyber-attackers into the physical world in which we live, as they seize on security holes in these new systems. 3 New security issues arise through the heterogeneity of IoT applications and devices and their large-scale deployment. The Internet of Things (IoT) enables the integration of data from virtual and physical worlds. But as the IoT expands it becomes more vulnerable to cyber-attackers. New security issues arise through the heterogeneity of IoT applications and devices and their large-scale deployment. 
Mark Taylor, Denis Reilly and Brett Lempereur of Liverpool John Moores University present a multi-tiered security approach for IoT devices that incorporates physical proximity controls, geo-location checking, instruction encryption, embedded controls and exception reporting.", "As one of the core techniques in 5G, the Internet of Things (IoT) is increasingly attracting people’s attention. Meanwhile, as an important part of IoT, the Near Field Communication (NFC) is widely used on mobile devices and makes it possible to take advantage of NFC system to complete mobile payment and merchandise information reading. But with the development of NFC, its problems are increasingly exposed, especially the security and privacy of authentication. Many NFC authentication protocols have been proposed for that, some of them only improve the function and performance without considering the security and privacy, and most of the protocols are heavyweight. In order to overcome these problems, this paper proposes an ultralightweight mutual authentication protocol, named ULMAP. ULMAP only uses Bit and XOR operations to complete the mutual authentication and prevent the denial of service (DoS) attack. In addition, it uses subkey and subindex number into its key update process to achieve the forward security. The most important thing is that the computation and storage overhead of ULMAP are few. Compared with some traditional schemes, our scheme is lightweight, economical, practical, and easy to protect against synchronization attack.", "Recently, Tewari and Gupta proposed a ultra-lightweight mutual authentication protocol in IoT environments for RFID tags. Their protocol aims to provide secure communication with least cost in both storage and computation. Unfortunately, in this paper, we exploit the vulnerability of this protocol. In this attack, an attacker can obtain the key shared between a back-end database server and a tag. We also explore the possibility in patching the system with some modifications.", "Internet of things (IoT) or Web of Things (WoT) is a wireless network between smart products or smart things connected to the internet. It is a new and fast developing market which not only connects objects and people but also billions of gadgets and smart devices. With the rapid growth of IoT, there is also a steady increase in security vulnerabilities of the linked objects. For example, a car manufacturer may want to link the systems within a car to smart home network networks to increase sales, but if all the various people involved do not embrace security the system will be exposed to security risks. As a result, there are several new published protocols of IoT, which focus on protecting critical data. However, these protocols face challenges and in this paper, numerous solutions are provided to overcome these problems. The widely used protocols such as, 802.15.4, 6LoWPAN, and RPL are the resenting of the IoT layers PHY MAC, Adoption and Network. While CoAP (Constrained Application Protocol) is the application layer protocol designed as replication of the HTTP to serve the small devices coming under class 1 and 2. Many implementations of CoAP has been accomplished which indicates it's crucial amd upcoming role in the future of IoT applications. 
This research article explored the security of CoAP over DTLS incurring many issues and proposed solutions as well as open challenges for future research.", "With the advancement of Internet of Things (IoT) technology and rapid growth of WSN applications, provides an opportunity to connect WSN to IoT, which results in the secure sensor data can be accessible via in secure Internet. The integration of WSN and IoT effects lots of security challenges and requires strict user authentication mechanism. Quite a few isolated user verification or authentication schemes using the password, the biometrics and the smart card have been proposed in the literature. In 2013, A.K designed a biometric-based remote user verification scheme using smart card for heterogeneous wireless sensor networks. A.K insisted that their scheme is secure against several known cryptographic attacks. Unfortunately, in this manuscript we will show that their scheme fails to resist replay attack, user impersonation attack, failure to accomplish mutual authentication and failure to provide data privacy.", "The term \"Internet of Things (IoT)\" expresses a huge network of smart and connected objects which can interact with other devices without our interposition. Radio frequency identification (RFID) is a great technology and an interesting candidate to provide communications for IoT networks, but numerous security and privacy issues need to be considered. In this paper, we analyze the security and the privacy of a new RFID authentication protocol proposed by in 2014. We prove that although have tried to present a secure and untraceable authentication protocol, their protocol still suffers from several security and privacy weaknesses which make it vulnerable to various security and privacy attacks. We present our privacy analysis based on a well-known formal privacy model which is presented by Ouafi and Phan in 2008. Moreover, to stop such attacks on the protocol and increase the performance of ’s scheme, we present some modifications and propound an improved version of the protocol. Finally, the security and the privacy of the proposed protocol were analyzed against various attacks. KEWORDS Internet of things; RFID authentication protocols; Security and privacy; Ouafi-Phan privacy model; EPC C1 G2 standard. * Corresponding author email: mj.emadi@aut.ac.ir", "", "Abstract In large-scale Internet of Things (IoT) systems, huge volumes of data are collected from anywhere at any time, which may invade people’s privacy, especially when the systems are used in medical or daily living environments. Preserving privacy is an important issue, and higher privacy demands usually tend to require weaker identity. However, previous research has indicated that strong security tends to demand strong identity, especially in authentication processes. Thus, defining a good tradeoff between privacy and security remains a challenging problem. This motivates us to develop a privacy-preserving and accountable authentication protocol for IoT end-devices with weaker identity, which integrates an adapted construction of short group signatures and Shamir’s secret sharing scheme. We analyze the security properties of our protocol in the context of six typical attacks and verify the formal security using the Proverif tool. 
Experiments using our implementation in MacBook Pro and Intel Edison development platforms show that our authentication protocol is feasible in practice.", "The Internet of Things enables the interconnection of smart physical and virtual objects, managed by highly developed technologies. WSN, is an essential part of this paradigm. The WSN uses smart, autonomous and usually limited capacity devices in order to sense and monitor industrial environments. However, if no authentication mechanism is deployed, this system can be accessible, used and controlled by non-authorized users. In this paper, we propose a robust WSN mutual authentication protocol. A real implementation of the protocol was realized on OCARI, one of the most interesting Wireless Sensor Network technologies. All nodes wanting to access the network should be authenticated at the MAC sub-layer of OCARI. This protocol is especially designed to be implemented on devices with low storage and computing capacities." ] }
1907.11322
2966078947
With the expanding connection of objects to the Internet and their entry into everyday life, security and privacy have become important concerns. Many security protocols have been developed to enhance security and privacy on the Internet. Unfortunately, the security analyses carried out on these protocols show that they are vulnerable to one or more attacks, which rules out their use. Hence, the need for a secure protocol for the Internet of Things (IoT) has not yet been met. Recently, Khor and Sidorov cryptanalyzed one of these protocols and presented an improved version of it. In this paper, we first show that this improved protocol also does not provide sufficient security and is therefore not recommended for use in any application. More precisely, we present a full secret disclosure attack against this protocol that extracts all of its secrets with only two communications with the target tag. In addition, Sidorov et al. recently proposed an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
Among the different design strategies for security protocols, attempts to design a secure ultralightweight protocol for constrained environments have a long (unsuccessful) history. Pioneering examples include SASI @cite_27 , RAPP @cite_1 , SLAP @cite_25 , LMAP @cite_10 and R @math AP @cite_11 , and among the recent proposals is SecLAP @cite_15 ; many other proposed protocols have subsequently been compromised by third-party analyses @cite_23 @cite_13 @cite_5 @cite_3 @cite_17 @cite_19 @cite_28 @cite_20 . All of these protocols tried to provide sufficient security using only a few lightweight operations, such as bitwise AND, OR, XOR, and rotation. However, the aforementioned analyses have shown that it is not easy to design a strong protocol from cryptographically weak components.
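The following is a minimal sketch (in Python, for illustration only) of the kind of bitwise toolbox these ultralightweight protocols rely on; the 96-bit word size, the rotation variant, and the message formulas are illustrative assumptions, not the specification of any of the cited protocols.

```python
# Illustrative bitwise primitives of the kind used by ultralightweight
# RFID protocols (XOR and rotation only); values and formulas are made up.
WORD = 96                       # assumed identifier/key length in bits
MASK = (1 << WORD) - 1

def rot(x: int, y: int) -> int:
    """Left-rotate x by the Hamming weight of y (a common Rot(x, y) variant)."""
    r = bin(y & MASK).count("1") % WORD
    return ((x << r) | (x >> (WORD - r))) & MASK

def example_messages(ids: int, k: int, n: int):
    """Build two illustrative public messages from a pseudonym IDS, a shared
    key K, and a reader nonce n, using only XOR and rotation."""
    a = rot(ids ^ n, k) ^ k      # hides the nonce under the key
    b = rot(k, n) ^ ids ^ n      # demonstrates knowledge of the key
    return a, b

if __name__ == "__main__":
    import secrets
    ids, k, n = (secrets.randbits(WORD) for _ in range(3))
    print(example_messages(ids, k, n))
```

Because every operation here is linear over GF(2) (up to data-dependent rotations), such constructions tend to admit algebraic cancellation, which is exactly what the cited analyses exploit.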
{ "cite_N": [ "@cite_13", "@cite_28", "@cite_1", "@cite_17", "@cite_3", "@cite_19", "@cite_27", "@cite_23", "@cite_5", "@cite_15", "@cite_10", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2082342720", "1484207026", "2093422627", "2970373341", "2574005192", "2573616158", "1965515427", "2081711536", "2099841931", "2962487379", "2430093443", "2574005192", "2957682351", "" ], "abstract": [ "RAPP (RFID Authentication Protocol with Permutation) is a recently proposed and efficient ultralightweight authentication protocol. Although it maintains the structure of the other existing ultralightweight protocols, the operation used in it is totally different due to the use of new introduced data dependent permutations and avoidance of modular arithmetic operations and biased logical operations such as AND and OR. The designers of RAPP claimed that this protocol resists against desynchronization attacks since the last messages of the protocol is sent by the reader and not by the tag. This letter challenges this assumption and shows that RAPP is vulnerable against desynchronization attack. This attack has a reasonable probability of success and is effective whether Hamming weight-based or modular-based rotations are used by the protocol.", "proposed a novel ultralightweight RFID mutual authentication protocol [1] that has recently been analyzed in several articles. In this letter, we first propose a desynchronization attack that succeeds with probability almost 1, which improves upon the 0.25 given in a previous analysis by We also show that the bad properties of the proposed permutation function can be exploited to disclose several bits of the tag's secret rather than just 1bit as previously shown by , which increases the power of a traceability attack. Finally, we show how to extend the aforementioned attack to run a full disclosure attack, which requires to eavesdrop less protocol runs than the proposed attack by i.e., 192<<2 30. Copyright © 2013 John Wiley & Sons, Ltd.", "One of the key problems in RFID is security and privacy. The implementation of authentication protocols is a flexible and effective way to solve this problem. This letter proposes a new ultralightweight RFID authentication protocol with permutation (RAPP). RAPP avoids using unbalanced OR and AND operations and introduces a new operation named permutation. The tags only involve three operations: bitwise XOR, left rotation and permutation. In addition, unlike other existing ultralightweight protocols, the last messages exchanged in RAPP are sent by the reader so as to resist de-synchronization attacks. Security analysis shows that RAPP achieves the functionalities of the authentication protocol and is resistant to various attacks. Performance evaluation illustrates that RAPP uses fewer resources on tags in terms of computation operation, storage requirement and communication cost.", "", "", "Internet of Things (IoT) is a technology in which for any object the ability to send data via communications networks is provided. Ensuring the security of Internet services and applications is an important factor in attracting users to use this platform. In the other words, if people are unable to trust that the equipment and information will be reasonably safe against damage, abuse and the other security threats, this lack of trust leads to a reduction in the use of IoT-based applications. 
Recently, Tewari and Gupta (J Supercomput 1–18, 2016) have proposed an ultralightweight RFID authentication protocol to provide desired security for objects in IoT. In this paper, we consider the security of the proposed protocol and present a passive secret disclosure attack against it. The success probability of the attack is ‘1’ while the complexity of the attack is only eavesdropping one session of the protocol. The presented attack has negligible complexity. We verify the correctness of the presented attack by simulation.", "As low-cost RFIDs become more and more popular, it is imperative to design ultralightweight RFID authentication protocols to resist all possible attacks and threats. However, all of the previous ultralightweight authentication schemes are vulnerable to various attacks. In this paper, we propose a new ultralightweight RFID authentication protocol that provides strong authentication and strong integrity protection of its transmission and of updated data. The protocol requires only simple bit-wise operations on the tag and can resist all the possible attacks. These features make it very attractive to low-cost RFIDs and very low-cost RFIDs.", "In the recent years, there has been an increasing interest in the development of secure and private authentication protocols for RFID. In order to suit the very lightweight nature of RFID tags, a number of proposals have focused on the design of very efficient authentication protocols using no classical cryptographic primitives. This article presents the state of the art in this field by summarizing this family of protocols and the most important attacks against them. The contribution also consists of a passive full-disclosure attack on the SASI and Yeh-Lo-Winata ultralightweight authentication protocols.", "Since RFID tags are ubiquitous and at times even oblivious to the human user, all modern RFID protocols are designed to resist tracking so that the location privacy of the human RFID user is not violated. Another design criterion for RFIDs is the low computational effort required for tags, in view that most tags are passive devices that derive power from an RFID reader's signals. Along this vein, a class of ultralightweight RFID authentication protocols has been designed, which uses only the most basic bitwise and arithmetic operations like exclusive-OR, OR, addition, rotation, and so forth. In this paper, we analyze the security of the SASI protocol, a recently proposed ultralightweight RFID protocol with better claimed security than earlier protocols. We show that SASI does not achieve resistance to tracking, which is one of its design objectives.", "Abstract The safety of medical data and equipment plays a vital role in today’s world of Medical Internet of Things (MIoT). These IoT devices have many constraints (e.g., memory size, processing capacity, and power consumption) that make it challenging to use cost-effective and energy-efficient security solutions. Recently, researchers have proposed a few Radio-Frequency Identification (RFID) based security solutions for MIoT. The use of RFID technology in securing IoT systems is rapidly increasing because it provides secure and lightweight safety mechanisms for these systems. More recently, authors have proposed a lightweight RFID mutual authentication (LRMI) protocol. The authors argue that LRMI meets the necessary security requirements for RFID systems, and the same applies to MIoT applications as well. 
In this paper, our contribution has two-folds, firstly we analyze the LRMI protocol’s security to demonstrate that it is vulnerable to various attacks such as secret disclosure, reader impersonation, and tag traceability. Also, it is not able to preserve the anonymity of the tag and the reader. Secondly, we propose a new secure and lightweight mutual RFID authentication (SecLAP) protocol, which provides secure communication and preserves privacy in MIoT systems. Our security analysis shows that the SecLAP protocol is robust against de-synchronization, replay, reader tag impersonation, and traceability attacks, and it ensures forward and backward data communication security. We use Burrows-Abadi-Needham (BAN) logic to validate the security features of SecLAP. Moreover, we compare SecLAP with the state-of-the-art and validate its performance through a Field Programmable Gate Array (FPGA) implementation, which shows that it is lightweight, consumes fewer resources on tags concerning computation functions, and requires less number of flows.", "Data security is crucial for a RFID system. Since the existing RFID mutual authentication protocols encounter the challenges such as security risks, poor performance, an ultra-lightweight authentication protocol named Succinct and Lightweight Authentication Protocol (SLAP) is proposed. SLAP is only composed of bitwise operations like XOR, left rotation and conversion which is easy to implement on a passive tag. The proposed conversion operation as the main security component guarantees the security of RFID system with the properties such as irreversibility, sensibility, full confusion and low complexity, which better performed or even absent in other previous protocols. Security analysis shows that SLAP guarantees the functionalities of mutual authentication as well as resistance to various attacks such as de-synchronization attack, replay attack and traceability attack, etc. Furthermore, performance evaluation also indicates that the proposed scheme outperforms the existing protocols in terms of less computation requirement and fewer communication messages during authentication process.", "", "", "" ] }
1907.11322
2966078947
With the expanding connection of objects to the Internet and their entry into everyday life, security and privacy have become important concerns. Many security protocols have been developed to enhance security and privacy on the Internet. Unfortunately, the security analyses carried out on these protocols show that they are vulnerable to one or more attacks, which rules out their use. Hence, the need for a secure protocol for the Internet of Things (IoT) has not yet been met. Recently, Khor and Sidorov cryptanalyzed one of these protocols and presented an improved version of it. In this paper, we first show that this improved protocol also does not provide sufficient security and is therefore not recommended for use in any application. More precisely, we present a full secret disclosure attack against this protocol that extracts all of its secrets with only two communications with the target tag. In addition, Sidorov et al. recently proposed an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
In this line of ultralightweight protocol design, Tewari and Gupta proposed a new ultralightweight authentication protocol for IoT in @cite_7 and claimed that it satisfies all security requirements. However, in @cite_19 , an efficient passive secret disclosure attack was applied to this protocol. Moreover, in @cite_4 , Wang cryptanalyzed the Tewari and Gupta protocol and also proposed an improved version of it. This protocol was later analyzed by Khor and Sidorov @cite_16 , who also proposed an improved protocol following the same design paradigm. In this paper, we consider the security of the improved protocol proposed by Khor and Sidorov, which for simplicity we call KSP (Khor and Sidorov protocol), and show that KSP is vulnerable to both desynchronization and secret disclosure attacks.
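To convey the intuition behind secret disclosure attacks on XOR-only designs (this is a toy example under purely hypothetical message formulas, not the concrete attack on KSP developed later in the paper), the following sketch shows how the linearity of XOR lets a passive eavesdropper cancel a nonce and recover a secret key.

```python
# Toy illustration only: hypothetical messages, not those of KSP or any
# cited protocol. It shows why purely XOR-based constructions are linear
# and can leak secrets once one operand (here the pseudonym IDS) is public.
import secrets

WORD = 96
K   = secrets.randbits(WORD)   # secret key shared by tag and server
IDS = secrets.randbits(WORD)   # pseudonym, transmitted in clear
N   = secrets.randbits(WORD)   # fresh session nonce

# Two messages an eavesdropper might observe in one protocol run.
A = K ^ N
B = IDS ^ N

# The nonce cancels out, so the key is disclosed passively.
recovered_K = A ^ B ^ IDS
assert recovered_K == K
```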
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_4", "@cite_7" ], "mid": [ "2573616158", "2887736592", "2732131698", "2516089876" ], "abstract": [ "Internet of Things (IoT) is a technology in which for any object the ability to send data via communications networks is provided. Ensuring the security of Internet services and applications is an important factor in attracting users to use this platform. In the other words, if people are unable to trust that the equipment and information will be reasonably safe against damage, abuse and the other security threats, this lack of trust leads to a reduction in the use of IoT-based applications. Recently, Tewari and Gupta (J Supercomput 1–18, 2016) have proposed an ultralightweight RFID authentication protocol to provide desired security for objects in IoT. In this paper, we consider the security of the proposed protocol and present a passive secret disclosure attack against it. The success probability of the attack is ‘1’ while the complexity of the attack is only eavesdropping one session of the protocol. The presented attack has negligible complexity. We verify the correctness of the presented attack by simulation.", "Internet of Things (IoT) has stimulated great interest in many researchers owing to its capability to connect billions of physical devices to the internet via heterogeneous access network. Security is a paramount aspect of IoT that needs to be addressed urgently to keep sensitive data private. However, from previous research studies, a number of security flaws in terms of keeping data private can be identified. Tewari and Gupta proposed an ultra-lightweight mutual authentication pRotocol that utilizes bitwise operation to achieve security in IoT networks that use RFID tags. The pRotocol is improved by Wang et. al. to prevent a full key disclosure attack. However, this paper shows that both of the pRotocols are susceptible to full disclosure, man-in-the-middle, tracking, and de-synchronization attacks. A detailed security analysis is conducted and results are presented to prove their vulnerability. Based on the aforementioned analysis, the pRotocol is modified and improved using a three pass mutual authentication. GNY logic is used to formally verify the security of the pRotocol.", "Recently, Tewari and Gupta proposed a ultra-lightweight mutual authentication protocol in IoT environments for RFID tags. Their protocol aims to provide secure communication with least cost in both storage and computation. Unfortunately, in this paper, we exploit the vulnerability of this protocol. In this attack, an attacker can obtain the key shared between a back-end database server and a tag. We also explore the possibility in patching the system with some modifications.", "Internet of Things (IoT) is an evolving architecture which connects multiple devices to Internet for communication or receiving updates from a cloud or a server. In future, the number of these connected devices will increase immensely making them an indistinguishable part of our daily lives. Although these devices make our lives more comfortable, they also put our personal information at risk. Therefore, security of these devices is also a major concern today. In this paper, we propose an ultra-lightweight mutual authentication protocol which uses only bitwise operation and thus is very efficient in terms of storage and communication cost. In addition, the computation overhead is very low. 
We have also compared our proposed work with the existing ones which verifies the strength of our protocol, as obtained results are promising. A brief cryptanalysis of our protocol that ensures untraceability is also presented." ] }
1907.11322
2966078947
By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyzes that have been carried out on these protocols show that they are vulnerable to one or few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracted the whole secrets of the protocol by two communication with the target tag. In addition, recently proposed an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
As an emerging technology, blockchain is believed to provide higher data protection, reliability, and transparency, as well as lower management costs, compared to a conventional centralized database. Hence, it could be a promising solution for large-scale IoT systems. Targeting these benefits, Sidorov et al. recently proposed an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains @cite_24 . Although the designers claimed security against various attacks, we present an efficient secret disclosure attack on this protocol. For the sake of simplicity, we call it SOVNOKP.
{ "cite_N": [ "@cite_24" ], "mid": [ "2907112888" ], "abstract": [ "Previous research studies mostly focused on enhancing the security of radio frequency identification (RFID) protocols for various RFID applications that rely on a centralized database. However, blockchain technology is quickly emerging as a novel distributed and decentralized alternative that provides higher data protection, reliability, immutability, transparency, and lower management costs compared with a conventional centralized database. These properties make it extremely suitable for integration in a supply chain management system. In order to successfully fuse RFID and blockchain technologies together, a secure method of communication is required between the RFID tagged goods and the blockchain nodes. Therefore, this paper proposes a robust ultra-lightweight mutual authentication RFID protocol that works together with a decentralized database to create a secure blockchain-enabled supply chain management system. Detailed security analysis is performed to prove that the proposed protocol is secure from key disclosure, replay, man-in-the-middle, de-synchronization, and tracking attacks. In addition to that, a formal analysis is conducted using Gong, Needham, and Yahalom logic and automated validation of internet security protocols and applications tool to verify the security of the proposed protocol. The protocol is proven to be efficient with respect to storage, computational, and communication costs. In addition to that, a further step is taken to ensure the robustness of the protocol by analyzing the probability of data collision written to the blockchain." ] }
1907.11481
2966777976
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
-- Code quality is multi-faceted and covers, for instance, testability, maintainability, and readability. To examine these aspects, certain quality attributes (e.g., coupling, complexity, size) are quantified by underlying software metrics; for instance, McCabe's software complexity metrics measure readability aspects of the code @cite_13 . For object-oriented systems, a popular set of metrics is the CK suite introduced by Chidamber and Kemerer @cite_43 and the QMOOD metrics (Quality Model for Object-Oriented Design) @cite_15 . Many approaches employ such metrics suites to distinguish parts of the source code in terms of good, acceptable, or bad quality @cite_64 @cite_44 or to identify code smells (problematic properties and anti-patterns of the code) @cite_51 . We also build on object-oriented metrics and use threshold-based approaches to analyze code quality and smells (see ).
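As a minimal illustration of how such threshold-based checks operate (the counted decision nodes and the threshold of 10 are common conventions, not the exact rules of the cited metric suites or of our system), the following sketch computes McCabe-style cyclomatic complexity for Python functions and flags outliers.

```python
# Sketch of a threshold-based quality check: McCabe-style cyclomatic
# complexity per function, flagging functions above a chosen threshold.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler, ast.BoolOp, ast.comprehension)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    """1 + number of decision points found inside the function."""
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(func))

def flag_complex_functions(source: str, threshold: int = 10):
    """Yield (name, complexity) for functions whose complexity exceeds the threshold."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            complexity = cyclomatic_complexity(node)
            if complexity > threshold:
                yield node.name, complexity

if __name__ == "__main__":
    with open("example.py") as f:          # hypothetical file to analyze
        for name, c in flag_complex_functions(f.read()):
            print(f"{name}: cyclomatic complexity {c} exceeds the threshold")
```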
{ "cite_N": [ "@cite_64", "@cite_44", "@cite_43", "@cite_51", "@cite_15", "@cite_13" ], "mid": [ "", "", "2158864412", "2039978418", "2167363007", "1964962870" ], "abstract": [ "", "", "Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed, and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement. >", "Software defects often lead to bugs, runtime errors and software maintenance difficulties. They should be systematically prevented, found, removed or fixed all along the software lifecycle. However, detecting and fixing these defects is still, to some extent, a difficult, time-consuming and manual process. In this paper, we propose a two-step automated approach to detect and then to correct various types of maintainability defects in source code. Using Genetic Programming, our approach allows automatic generation of rules to detect defects, thus relieving the designer from a fastidious manual rule definition task. Then, we correct the detected defects while minimizing the correction effort. A correction solution is defined as the combination of refactoring operations that should maximize as much as possible the number of corrected defects with minimal code modification effort. We use the Non-dominated Sorting Genetic Algorithm (NSGA-II) to find the best compromise. For six open source projects, we succeeded in detecting the majority of known defects, and the proposed corrections fixed most of them with minimal effort.", "The paper describes an improved hierarchical model for the assessment of high-level design quality attributes in object-oriented designs. In this model, structural and behavioral design properties of classes, objects, and their relationships are evaluated using a suite of object-oriented design metrics. This model relates design properties such as encapsulation, modularity, coupling, and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information. The relationship or links from design properties to quality attributes are weighted in accordance with their influence and importance. 
The model is validated by using empirical and expert opinion to compare with the model results on several large commercial object-oriented systems. A key attribute of the model is that it can be easily modified to include different relationships and weights, thus providing a practical quality assessment tool adaptable to a variety of demands.", "This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains how the graph-theory concepts apply and gives an intuitive explanation of the graph concepts in programming terms. The control graphs of several actual Fortran programs are then presented to illustrate the correlation between intuitive complexity and the graph-theoretic complexity. Several properties of the graph-theoretic complexity are then proved which show, for example, that complexity is independent of physical size (adding or subtracting functional statements leaves complexity unchanged) and complexity depends only on the decision structure of a program." ] }
1907.11481
2966777976
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
-- Visualizations included in the lines or paragraphs of a text are known as sparklines @cite_56 , word-sized graphics @cite_36 , or word-scale graphics @cite_12 . They allow a close and coherent integration of the textual and visual representations of data. Some approaches apply these in the context of software engineering and embed them into the code to assist developers in understanding a program. The approaches in @cite_7 and @cite_45 suggest augmenting the source code with visualizations to keep track of the state and properties of the code. Others @cite_46 @cite_21 implement embedded visualizations for understanding program behavior and performance bottlenecks. Similarly, @cite_41 and @cite_33 augment source code with visualizations to aid the understanding of runtime behavior. We embed visualizations into natural language text (not into source code) to support a better understanding of the quality of the source code.
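To make the idea of word-sized graphics concrete, here is a minimal sketch (the block-character mapping and the example data are illustrative assumptions, not part of the cited approaches) that renders a numeric series as a Unicode sparkline so it can be embedded directly into a sentence.

```python
# Render a numeric series as a word-sized Unicode sparkline that can be
# embedded inline in running text.
BLOCKS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1                      # avoid division by zero
    return "".join(BLOCKS[int((v - lo) / span * (len(BLOCKS) - 1))]
                   for v in values)

if __name__ == "__main__":
    # Hypothetical metric history: complexity of one method over revisions.
    history = [3, 4, 4, 7, 12, 11, 15, 9]
    print(f"Complexity of render() over time {sparkline(history)} "
          f"(latest: {history[-1]})")
```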
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_36", "@cite_41", "@cite_21", "@cite_56", "@cite_45", "@cite_46", "@cite_12" ], "mid": [ "2040362995", "2050888673", "2590534375", "2796075754", "2108958711", "", "2898193680", "2005775881", "1995073788" ], "abstract": [ "User interfaces for source code editing are a crucial component in any software development environment, and in many editors visual annotations (overlaid on the textual source code) are used to provide important contextual information to the programmer. This paper focuses on the real-time programming activity of cyberphysical' programming, and considers the type of visual annotations which may be helpful in this programming context.", "Software engineers need to design, implement, comprehend and maintain large and complex software systems. Awareness of information about the properties and state of individual artifacts, and the process being enacted to produce them, can make these activities less error-prone and more efficient. In this paper we advocate the use of code colouring to augment development environments with rich information overlays. These in situ visualisations are delivered within the existing IDE interface and deliver valuable information with minimal overhead. We present CoderChrome, a code colouring plug-in for Eclipse, and describe how it can be used to support and enhance software engineering activities.", "Generating visualizations at the size of a word creates dense information representations often called sparklines . The integration of word-sized graphics into text could avoid additional cognitive load caused by splitting the readers’ attention between figures and text. In scientific publications, these graphics make statements easier to understand and verify because additional quantitative information is available where needed. In this work, we perform a literature review to find out how researchers have already applied such word-sized representations. Illustrating the versatility of the approach, we leverage these representations for reporting empirical and bibliographic data in three application examples. For interactive Web-based publications, we explore levels of interactivity and discuss interaction patterns to link visualization and text. We finally call the visualization community to be a pioneer in exploring new visualization-enriched and interactive publication formats.", "Programmers must draw explicit connections between their code and runtime state to properly assess the correctness of their programs. However, debugging tools often decouple the program state from the source code and require explicitly invoked views to bridge the rift between program editing and program understanding. To unobtrusively reveal runtime behavior during both normal execution and debugging, we contribute techniques for visualizing program variables directly within the source code. We describe a design space and placement criteria for embedded visualizations. We evaluate our in situ visualizations in an editor for the Vega visualization grammar. Compared to a baseline development environment, novice Vega users improve their overall task grade by about 2 points when using the in situ visualizations and exhibit significant positive effects on their self-reported speed and accuracy.", "Finding and fixing performance bottlenecks requires sound knowledge of the program that is to be optimized. 
In this paper, we propose an approach for presenting performance-related information to software engineers by visually augmenting source code shown in an editor. Small diagrams at each method declaration and method call visualize the propagation of runtime consumption through the program as well as the interplay of threads in parallelized programs. Advantages of in situ visualization like this over traditional representations, where code and profiling information are shown in different places, promise to be the prevention of a split-attention effect caused by multiple views; information is presented where required, which supports understanding and navigation. We implemented the approach as an IDE plug-in and tested it in a user study with four developers improving the performance of their own programs. The user study provides insights into the process of understanding performance bottlenecks with our approach.", "", "Abstract Source code written in textual programming languages is typically edited in integrated development environments (IDEs) or specialized code editors. These tools often display various visual items, such as icons, color highlights or more advanced graphical overlays directly in the main editable source code view. We call such visualizations source code editor augmentation. In this paper, we present a first systematic mapping study of source code editor augmentation tools and approaches. We manually reviewed the metadata of 5553 articles published during the last twenty years in two phases – keyword search and references search. The result is a list of 103 relevant articles and a taxonomy of source code editor augmentation tools with seven dimensions, which we used to categorize the resulting list of the surveyed articles. We also provide the definition of the term source code editor augmentation, along with a brief overview of historical development and augmentations available in current industrial IDEs.", "Numeric variables are one of the most frequently used data types. During the execution of a program, their values might change often. Tracing these changes can be necessary for understanding specific behavior of the program or for locating bugs. However, using a breakpoint debugger requires tedious stepping, and logging changes implies analyzing large text files. To make the monitoring of numeric variables easier, this work introduces a visualization approach that augments the source code view of an IDE by small, word-sized graphics: the visualizations accompanying the declarations of monitored variables plot read and write accesses on a timeline; detail views can be retrieved on demand. As suggested by a case study, this approach might support program comprehension and debugging.", "We present an exploration and a design space that characterize the usage and placement of word-scale visualizations within text documents. Word-scale visualizations are a more general version of sparklines-small, word-sized data graphics that allow meta-information to be visually presented in-line with document text. In accordance with Edward Tufte's definition, sparklines are traditionally placed directly before or after words in the text. We describe alternative placements that permit a wider range of word-scale graphics and more flexible integration with text layouts. These alternative placements include positioning visualizations between lines, within additional vertical and horizontal space in the document, and as interactive overlays on top of the text. 
Each strategy changes the dimensions of the space available to display the visualizations, as well as the degree to which the text must be adjusted or reflowed to accommodate them. We provide an illustrated design space of placement options for word-scale visualizations and identify six important variables that control the placement of the graphics and the level of disruption of the source text. We also contribute a quantitative analysis that highlights the effect of different placements on readability and text disruption. Finally, we use this analysis to propose guidelines to support the design and placement of word-scale visualizations." ] }