{"source": "Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs.MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training.Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages.We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter.We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods.We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point.", "target": ["Wir entwickeln eine adaptive Verlustskalierung, um das Training mit gemischter Präzision zu verbessern, das die Ergebnisse des Stands der Technik übertrifft.", "Vorschlag für eine adaptive Verlustskalierungsmethode während der Backpropagation für das Training mit gemischter Präzision, bei der die Skalierungsrate automatisch festgelegt wird, um die Rückflussrate zu reduzieren.", "Die Autoren schlagen eine Methode zum Trainieren von Modellen mit FP16-Präzision vor, die eine ausgefeiltere Methode zur gleichzeitigen und automatischen Minimierung der Rückflussrate in jeder Schicht verwendet."]} {"source": "Many real-world problems, e.g. 
object detection, have outputs that are naturally expressed as sets of entities.This creates a challenge for traditional deep neural networks which naturally deal with structured outputs such as vectors, matrices or tensors.We present a novel approach for learning to predict sets with unknown permutation and cardinality using deep neural networks.Specifically, in our formulation we incorporate the permutation as unobservable variable and estimate its distribution during the learning process using alternating optimization.We demonstrate the validity of this new formulation on two relevant vision problems: object detection, for which our formulation outperforms state-of-the-art detectors such as Faster R-CNN and YOLO, and a complex CAPTCHA test, where we observe that, surprisingly, our set based network acquired the ability of mimicking arithmetics without any rules being coded.", "target": ["Wir präsentieren einen neuartigen Ansatz für das Erlernen der Vorhersage von Mengen mit unbekannter Permutation und Kardinalität unter Verwendung von Feed-Forward Deep Neural Networks.", "Eine Formulierung zum Erlernen der Verteilung über unbeobachtbare Permutationsvariablen auf der Grundlage von tiefen Netzwerken für das Problem der Mengenvorhersage."]} {"source": "Foveation is an important part of human vision, and a number of deep networks have also used foveation.However, there have been few systematic comparisons between foveating and non-foveating deep networks, and between different variable-resolution downsampling methods.Here we define several such methods, and compare their performance on ImageNet recognition with a Densenet-121 network.The best variable-resolution method slightly outperforms uniform downsampling.Thus in our experiments, foveation does not substantially help or hinder object recognition in deep networks.", "target": ["Wir vergleichen die Leistung der Objekterkennung bei Bildern, die einheitlich und mit drei verschiedenen Foveationsschemen abgetastet wurden."]} {"source": "We explore the concept of co-design in the context of neural network verification.Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily.To this end, we identify two properties of network models - weight sparsity and so-called ReLU stability - that turn out to significantly impact the complexity of the corresponding verification task.We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones.Then, improving ReLU stability leads to an additional 4-13x speedup in verification times.An important feature of our methodology is its \"universality,\" in the sense that it can be used with a broad range of training procedures and verification approaches.", "target": ["Wir entwickeln Methoden, um tiefe neuronale Modelle zu trainieren, die sowohl robust gegenüber negativen Störungen sind als auch deren Robustheit wesentlich einfacher zu verifizieren ist.", "Diese Arbeit stellt verschiedene Möglichkeiten vor, einfache ReLU-Netze zu regularisieren, um die Robustheit gegenüber Angreifern, die nachweisbare Robustheit gegenüber Angreifern und die Verifikationsgeschwindigkeit zu optimieren.", "In diesem Beitrag werden Methoden zur Ausbildung robuster neuronaler Netze vorgeschlagen, die schneller überprüft werden können. 
Dabei werden Pruning Methoden zur Förderung der Gewichtsdistanz und Regularisierungen zur Förderung der ReLU-Stabilität eingesetzt."]} {"source": "Batch Normalization (BatchNorm) has been shown to be effective for improving and accelerating the training of deep neural networks.However, recently it has been shown that it is also vulnerable to adversarial perturbations.In this work, we aim to investigate the cause of the adversarial vulnerability of BatchNorm.We hypothesize that the use of different normalization statistics during training and inference (mini-batch statistics for training and moving average of these values at inference) is the main cause of this adversarial vulnerability in the BatchNorm layer.We empirically verified this through experiments on various neural network architectures and datasets.Furthermore, we introduce Robust Normalization (RobustNorm) and experimentally show that it is not only resilient to adversarial perturbation but also inherits the benefits of BatchNorm.", "target": ["Untersuchung, wie BatchNorm eine Sicherheitslücke verursacht und wie man sie vermeiden kann. ", "Dieses Werk befasst sich mit der Anfälligkeit von BatchNorm für unerwünschte Störungen und schlägt eine Alternative namens RobustNorm vor, die anstelle der Normalisierung eine Min-Max-Reskalierung verwendet.", "Dieses Papier untersucht den Grund für die Anfälligkeit von BatchNorm und schlägt Robust Normalization vor, eine Normalisierungsmethode, die unter einer Vielzahl von Angriffsmethoden deutlich bessere Ergebnisse erzielt."]} {"source": "Electronic Health Records (EHR) comprise longitudinal clinical observations portrayed with sparsity, irregularity, and high-dimensionality, which become the major obstacles in drawing reliable downstream outcomes.Although a great number of imputation methods have been proposed to tackle these issues, most of the existing methods ignore correlated features or temporal dynamics and entirely put aside the uncertainty.In particular, since the missing value estimates have the risk of being imprecise, it motivates us to treat reliable and less certain information differently.In this work, we propose a novel variational-recurrent imputation network (V-RIN), which unifies the imputation and prediction networks, by taking into account the correlated features, temporal dynamics, and further utilizing the uncertainty to alleviate the risk of biased missing value estimates.Specifically, we leverage a deep generative model to estimate the missing values based on the distribution among variables and a recurrent imputation network to exploit the temporal relations in conjunction with utilization of the uncertainty.We validated the effectiveness of our proposed model with a publicly available real-world EHR dataset, PhysioNet Challenge 2012, and compared the results with other state-of-the-art competing methods in the literature.", "target": ["Unser variational-rekurrentes Imputationsnetzwerk (V-RIN) berücksichtigt die korrelierten Merkmale, die zeitliche Dynamik und nutzt die Unsicherheit, um das Risiko einer verzerrten Schätzung fehlender Werte zu verringern.", "Ein Netzwerk zur Imputation fehlender Daten, das Korrelationen, zeitliche Beziehungen und Datenunsicherheiten für das Problem der spärlichen Daten in EHRs berücksichtigt und eine höhere AUC bei der Klassifizierung von Mortalitätsraten erzielt.", "In dem Papier wird eine Methode vorgestellt, die VAE und GRU mit Unsicherheitsfaktoren für die sequentielle Imputation fehlender Daten und die Ergebnisvorhersage 
kombiniert."]} {"source": "Despite the state-of-the-art accuracy of Deep Neural Networks (DNN) in various classification problems, their deployment onto resource constrained edge computing devices remains challenging due to their large size and complexity.Several recent studies have reported remarkable results in reducing this complexity through quantization of DNN models.However, these studies usually do not consider the changes in the loss function when performing quantization, nor do they take the different importances of DNN model parameters to the accuracy into account.We address these issues in this paper by proposing a new method, called adaptive quantization, which simplifies a trained DNN model by finding a unique, optimal precision for each network parameter such that the increase in loss is minimized.The optimization problem at the core of this method iteratively uses the loss function gradient to determine an error margin for each parameter and assigns it a precision accordingly.Since this problem uses linear functions, it is computationally cheap and, as we will show, has a closed-form approximate solution.Experiments on MNIST, CIFAR, and SVHN datasets showed that the proposed method can achieve near or better than state-of-the-art reduction in model size with similar error rates.Furthermore, it can achieve compressions close to floating-point model compression methods without loss of accuracy.", "target": ["Eine adaptive Methode für die Festkomma-Quantisierung neuronaler Netze, die eher auf theoretischer Analyse als auf Heuristiken beruht. ", "Schlägt eine Methode zur Quantisierung neuronaler Netze vor, die es ermöglicht, Gewichte je nach ihrer Wichtigkeit mit unterschiedlicher Genauigkeit zu quantisieren, wobei der Verlust berücksichtigt wird.", "In dieser Arbeit wird eine Technik zur Quantisierung der Gewichte eines neuronalen Netzes vorgeschlagen, bei der die Bittiefe/Präzision für jeden Parameter unterschiedlich ist."]} {"source": "We study the problem of learning permutation invariant representations that can capture containment relations.We propose training a model on a novel task: predicting the size of the symmetric difference between pairs of multisets, sets which may contain multiple copies of the same object.With motivation from fuzzy set theory, we formulate both multiset representations and how to predict symmetric difference sizes given these representations.We model multiset elements as vectors on the standard simplex and multisets as the summations of such vectors, and we predict symmetric difference as the l1-distance between multiset representations.We demonstrate that our representations more effectively predict the sizes of symmetric differences than DeepSets-based approaches with unconstrained object representations.Furthermore, we demonstrate that the model learns meaningful representations, mapping objects of different classes to different standard basis vectors.", "target": ["Auf der Grundlage der Fuzzy-Mengen-Theorie schlagen wir ein Modell vor, das nur die Größen der symmetrischen Unterschiede zwischen Paaren von Multisets angibt und Repräsentationen solcher Multisets und ihrer Elemente lernt.", "Dieses Papier schlägt eine neue Aufgabe des Mengenlernens vor, die Vorhersage der Größe der symmetrischen Differenz zwischen mehreren Mengen, und gibt eine Methode zur Lösung der Aufgabe auf der Grundlage der Fuzzy-Mengentheorie."]} {"source": "It is important to collect credible training samples $(x,y)$ for building data-intensive learning systems (e.g., a 
deep learning system).In the literature, there is a line of studies on eliciting distributional information from self-interested agents who hold relevant information. Asking people to report a complex distribution $p(x)$, though theoretically viable, is challenging in practice.This is primarily due to the heavy cognitive loads required for human agents to reason and report this high dimensional information.Consider the example where we are interested in building an image classifier via first collecting a certain category of high-dimensional image data.While classical elicitation results apply to eliciting a complex and generative (and continuous) distribution $p(x)$ for this image data, we are interested in eliciting samples $x_i \\sim p(x)$ from agents.This paper introduces a deep learning aided method to incentivize credible sample contributions from selfish and rational agents.The challenge to do so is to design an incentive-compatible score function to score each reported sample to induce truthful reports, instead of an arbitrary or even adversarial one.We show that with accurate estimation of a certain $f$-divergence function we are able to achieve approximate incentive compatibility in eliciting truthful samples.We then present an efficient estimator with theoretical guarantees by studying the variational forms of the $f$-divergence function.Our work complements the literature of information elicitation via introducing the problem of \\emph{sample elicitation}. We also show a connection between this sample elicitation problem and $f$-GAN, and how this connection can help reconstruct an estimator of the distribution based on collected samples.", "target": ["In diesem Werk wird eine Deep-Learning-gestützte Methode vorgeschlagen, um glaubwürdige Proben von eigennützigen Agenten zu erhalten. 
", "Die Autoren schlagen einen Rahmen für die Auswahl von Stichproben für das Problem der Auswahl von glaubwürdigen Stichproben von Agenten für komplexe Verteilungen vor, schlagen vor, dass tiefe neuronale Rahmen in diesem Rahmen angewendet werden können, und verbinden Stichprobenauswahl und f-GAN.", "In diesem Papier wird das Problem der Stichprobenauswahl untersucht und ein Ansatz des Deep Learning vorgeschlagen, der sich auf den dualen Ausdruck der f-Divergenz stützt, die sich als Maximum über eine Menge von Funktionen t schreibt."]} {"source": "The celebrated Sequence to Sequence learning (Seq2Seq) technique and its numerous variants achieve excellent performance on many tasks.However, many machine learning tasks have inputs naturally represented as graphs; existing Seq2Seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence.To address this challenge, we introduce a general end-to-end graph-to-sequence neural encoder-decoder architecture that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors.Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.We further introduce an attention mechanism that aligns node embeddings and the decoding sequence to better cope with large graphs.Experimental results on bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms existing graph neural networks, Seq2Seq, and Tree2Seq models; using the proposed bi-directional node embedding aggregation strategy, the model can converge rapidly to the optimal performance.", "target": ["Graph to Sequence Learning mit aufmerksamkeitsbasierten neuronalen Netzen", "Eine graph2seq-Architektur, die einen Graphen-Encoder, der GGNN- und GCN-Komponenten mit einem Aufmerksamkeits-Sequenz-Encoder kombiniert, und die Verbesserungen gegenüber den Basislinien zeigt.", "In dieser Arbeit wird ein Ende-zu-Ende Graph Encoder zu Sequenz Decoder Modell mit einem dazwischen liegenden Aufmerksamkeitsmechanismus vorgeschlagen."]} {"source": "We address the problem of learning to discover 3D parts for objects in unseen categories.Being able to learn the geometry prior of parts and transfer this prior to unseen categories pose fundamental challenges on data-driven shape segmentation approaches.Formulated as a contextual bandit problem, we propose a learning-based iterative grouping framework which learns a grouping policy to progressively merge small part proposals into bigger ones in a bottom-up fashion.At the core of our approach is to restrict the local context for extracting part-level features, which encourages the generalizability to novel categories.On a recently proposed large-scale fine-grained 3D part dataset, PartNet, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen testing categories without seeing any annotated samples.Quantitative comparisons against four strong shape segmentation baselines show that we achieve the state-of-the-art performance.", "target": ["Ein Zero-Shot-Segmentierungsrahmen für die Segmentierung von 3D-Objektteilen. 
Modellierung der Segmentierung als Entscheidungsprozess und Lösung als kontextuelles Bandit-Problem.", "Eine Methode zur Segmentierung von 3D-Punktansammlungen von Objekten in Einzelteilen, die sich auf die Verallgemeinerung von Teilgruppierungen auf neue Objektkategorien konzentriert, die während des Trainings nicht gesehen wurden, und die eine starke Leistung im Vergleich zu den Basislinien zeigt.", "In dieser Arbeit wird eine Methode zur Segmentierung von Teilen in Objektpunktansammlungen vorgeschlagen."]} {"source": "This paper presents the ballistic graph neural network.The ballistic graph neural network tackles the weight distribution from a transportation perspective and has many different properties compared to the traditional graph neural network pipeline.The ballistic graph neural network does not require calculating any eigenvalues.The filters propagate exponentially faster ($\\sigma^2 \\sim T^2$) compared to traditional graph neural networks ($\\sigma^2 \\sim T$).We use a perturbed coin operator to perturb and optimize the diffusion rate.Our results show that by selecting the diffusion speed, the network can reach a similar accuracy with fewer parameters.We also show the perturbed filters act as better representations compared to pure ballistic ones.We provide a new perspective on training graph neural networks: by adjusting the diffusion rate, the network's performance can be improved.", "target": ["Eine neue Perspektive für die Erfassung der Korrelation zwischen Knoten auf der Grundlage von Diffusionseigenschaften.", "Eine neue Diffusionsoperation für neuronale Graphennetze, die keine Eigenwertberechnung erfordert und sich im Vergleich zu herkömmlichen neuronalen Graphennetzen exponentiell schneller ausbreiten kann.", "In dieser Arbeit wird vorgeschlagen, das Problem der Diffusionsgeschwindigkeit durch die Einführung eines ballistischen Laufs zu lösen."]} {"source": "In this paper, we propose a \\textit{weak supervision} framework for neural ranking tasks based on the data programming paradigm \\citep{Ratner2016}, which enables us to leverage multiple weak supervision signals from different sources.Empirically, we consider two sources of weak supervision signals, unsupervised ranking functions and semantic feature similarities.We train a BERT-based passage-ranking model (which achieves new state-of-the-art performances on two benchmark datasets with full supervision) in our weak supervision framework.Without using ground-truth training labels, BERT-PR models outperform the BM25 baseline by a large margin on all three datasets and even beat the previous state-of-the-art results with full supervision on two of the datasets.", "target": ["Wir schlagen eine Trainings-Pipeline mit schwacher Überwachung vor, die auf dem Datenprogrammierungs-Framework für Ranking-Aufgaben basiert, in dem wir ein BERT-basiertes Ranking-Modell trainieren und neue SOTA-Ergebnisse erzielen.", "Die Autoren schlagen eine Kombination aus BERT und dem Framework für schwache Überwachung vor, um das Problem des Passagen-Rankings anzugehen, und erzielen damit bessere Ergebnisse als der vollständig überwachte Stand der Technik."]} {"source": "We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective.We demonstrate a very universal Frequency Principle (F-Principle) --- DNNs often fit target functions from low to high frequencies --- on high-dimensional benchmark datasets, such as MNIST/CIFAR10, and deep networks, such as VGG16.This F-Principle of DNNs is opposite to the 
learning behavior of most conventional iterative numerical schemes (e.g., Jacobi method), which exhibits faster convergence for higher frequencies, for various scientific computing problems.With a naive theory, we illustrate that this F-Principle results from the regularity of the commonly used activation functions.The F-Principle implies an implicit bias that DNNs tend to fit training data by a low-frequency function.This understanding provides an explanation of good generalization of DNNs on most real datasets and bad generalization of DNNs on parity function or randomized dataset.", "target": ["Bei realen Problemen haben wir festgestellt, dass DNNs während des Trainingsprozesses häufig Zielfunktionen von niedrigen zu hohen Frequenzen anpassen.", "In dieser Arbeit wird der Verlust von neuronalen Netzen im Fourier-Bereich analysiert und festgestellt, dass DNNs dazu neigen, niederfrequente Komponenten vor hochfrequenten zu lernen.", "Das Werk untersucht den Trainingsprozess von NNs durch Fourier-Analyse und kommt zu dem Schluss, dass NNs niederfrequente Komponenten vor hochfrequenten Komponenten lernen."]} {"source": "The problem of accelerating drug discovery relies heavily on automatic tools to optimize precursor molecules to afford them with better biochemical properties.Our work in this paper substantially extends prior state-of-the-art on graph-to-graph translation methods for molecular optimization.In particular, we realize coherent multi-resolution representations by interweaving the encoding of substructure components with the atom-level encoding of the original molecular graph.Moreover, our graph decoder is fully autoregressive, and interleaves each step of adding a new substructure with the process of resolving its attachment to the emerging molecule.We evaluate our model on multiple molecular optimization tasks and show that our model significantly outperforms previous state-of-the-art baselines.", "target": ["Wir schlagen einen hierarchisch gekoppelten Encoder-Decoder mit mehreren Auflösungen für die Übersetzung von Graphen in Graphen vor.", "Ein hierarchisches Graph-zu-Graph-Übersetzungsmodell zur Erzeugung molekularer Graphen unter Verwendung chemischer Substrukturen als Bausteine, das vollständig autoregressiv ist und kohärente Mehrfachauflösungsrepräsentationen erlernt, wodurch es frühere Modelle übertrifft.", "Die Autoren stellen eine hierarchische Graph-zu-Graph-Übersetzungsmethode zur Erzeugung neuartiger organischer Moleküle vor."]} {"source": "Equivariance is a nice property to have as it produces much more parameter efficient neural architectures and preserves the structure of the input through the feature mapping.Even though some combinations of transformations might never appear (e.g. 
an upright face with a horizontal nose), current equivariant architectures consider the set of all possible transformations in a transformation group when learning feature representations.Contrarily, the human visual system is able to attend to the set of relevant transformations occurring in the environment and utilizes this information to assist and improve object recognition.Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of co-occurring transformations in data and generalize this notion to act on groups consisting of multiple symmetries.We show that our proposed co-attentive equivariant neural networks consistently outperform conventional rotation equivariant and rotation & reflection equivariant neural networks on rotated MNIST and CIFAR-10.", "target": ["Wir nutzen die Aufmerksamkeit, um äquivariante neuronale Netze auf die Menge oder die gemeinsam auftretenden Transformationen in den Daten zu beschränken. ", "In dieser Arbeit wird Aufmerksamkeit mit Gruppenäquivarianz kombiniert, wobei insbesondere die p4m-Gruppe der Rotationen, Translationen und Spiegelungen betrachtet wird, und es wird eine Form der Selbstaufmerksamkeit abgeleitet, die die Äquivarianzeigenschaft nicht zerstört.", "Die Autoren schlagen einen Selbstbeobachtungsmechanismus für rotationsäquivariante neuronale Netze vor, der die Klassifizierungsleistung gegenüber regulären rotationsäquivarianten Netzen verbessert."]} {"source": "The fast generation and refinement of protein backbones would constitute a major advancement to current methodology for the design and development of de novo proteins.In this study, we train Generative Adversarial Networks (GANs) to generate fixed-length full-atom protein backbones, with the goal of sampling from the distribution of realistic 3-D backbone fragments.We represent protein structures by pairwise distances between all backbone atoms, and present a method for directly recovering and refining the corresponding backbone coordinates in a differentiable manner.We show that interpolations in the latent space of the generator correspond to smooth deformations of the output backbones, and that test set structures not seen by the generator during training exist in its image.Finally, we perform sequence design, relaxation, and ab initio folding of a subset of generated structures, and show that in some cases we can recover the generated folds after forward-folding.Together, these results suggest a mechanism for fast protein structure refinement and folding using external energy functions.", "target": ["Wir trainieren ein GAN zur Generierung und Wiederherstellung von Protein-Rückgraten mit vollständigen Atomen und zeigen, dass wir in ausgewählten Fällen die generierten Proteine nach Sequenzdesign und in von Anfang an Vorwärtsfaltung wiederherstellen können.", "Ein generatives Modell für das Proteinrückgrat, das ein GAN, ein Autoencoder-ähnliches Netzwerk und einen Verfeinerungsprozess verwendet, sowie eine Reihe von qualitativen Bewertungen, die auf positive Ergebnisse hindeuten.", "In diesem Beitrag wird ein Ende-zu-Ende Ansatz für die Generierung von Protein-Rückgraten unter Verwendung generativer adversarialer Netzwerke vorgestellt."]} {"source": "Few-Shot Learning (learning with limited labeled data) aims to overcome the limitations of traditional machine learning approaches which require thousands of labeled examples to train an effective model.Considered as a hallmark of human intelligence, the community has 
recently witnessed several contributions on this topic, in particular through meta-learning, where a model learns how to learn an effective model for few-shot learning.The main idea is to acquire prior knowledge from a set of training tasks, which is then used to perform (few-shot) test tasks.Most existing work assumes that both training and test tasks are drawn from the same distribution, and a large amount of labeled data is available in the training tasks.This is a very strong assumption which restricts the usage of meta-learning strategies in the real world where ample training tasks following the same distribution as test tasks may not be available.In this paper, we propose a novel meta-learning paradigm wherein a few-shot learning model is learnt, which simultaneously overcomes domain shift between the train and test tasks via adversarial domain adaptation.We demonstrate the efficacy of the proposed method through extensive experiments.", "target": ["Beim Meta Learning für Few Shot Learning wird davon ausgegangen, dass Trainings- und Testaufgaben aus der gleichen Verteilung gezogen werden. Was macht man, wenn das nicht der Fall ist? Meta-Learning mit Bereichsanpassung auf Aufgabenebene.", "Diese Arbeit schlägt ein Modell vor, das unüberwachte adversarische Domänenanpassung mit prototypischen Netzwerken kombiniert, die besser abschneiden als die Basissysteme für Few-Shot Learning Aufgaben mit Domänenverschiebung.", "Die Autoren schlugen eine Meta-Domänenanpassung vor, um das Szenario der Domänenverschiebung in einer Meta-Lernumgebung anzugehen, und konnten in mehreren Experimenten Leistungsverbesserungen nachweisen."]} {"source": "Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich and complex probabilistic models.However, this expressiveness comes at the cost of substantially complicating the process of drawing inferences from the model.In particular, inference can become challenging when the support of the model varies between executions.Though general-purpose inference engines have been designed to operate in such settings, they are typically inefficient, often relying on proposing from the prior to make transitions.To address this, we introduce a new inference framework: Divide, Conquer, and Combine (DCC).DCC divides the program into separate straight-line sub-programs, each of which has a fixed support allowing more powerful inference algorithms to be run locally, before recombining their outputs in a principled fashion.We show how DCC can be implemented as an automated and general-purpose PPS inference engine, and empirically confirm that it can provide substantial performance improvements over previous approaches.", "target": ["Divide, Conquer, and Combine ist ein neues Inferenzschema, das auf probabilistischen Programmen mit stochastischer Unterstützung angewendet werden kann, d.h. die Existenz der Variablen selbst ist stochastisch."]} {"source": "Detecting communities or the modular structure of real-life networks (e.g. 
a social network or a product purchase network) is an important task because the way a network functions is often determined by its communities.The traditional approaches to community detection involve modularity-based approaches, which, generally speaking, construct partitions based on heuristics that seek to maximize the ratio of the edges within the partitions to those between them.Node embedding approaches, which represent each node in a graph as a real-valued vector, transform the problem of community detection in a graph to that of clustering a set of vectors.Existing node embedding approaches are primarily based on first initiating uniform random walks from each node to construct a context of a node and then seek to make the vector representation of the node close to its context.However, standard node embedding approaches do not directly take into account the community structure of a network while constructing the context around each node.To alleviate this, we explore two different threads of work.First, we investigate the use of biased random walks (specifically, maximum entropy based walks) to obtain more centrality preserving embedding of nodes, which we hypothesize may lead to more effective clusters in the embedded space.Second, we propose a community structure aware node embedding approach where we incorporate modularity-based partitioning heuristics into the objective function of node embedding.We demonstrate that our proposed approach for community detection outperforms a number of modularity-based baselines as well as K-means on a standard node-embedded vector space (specifically, node2vec) on a wide range of real-life networks of different sizes and densities.", "target": ["Ein gemeinschaftserhaltender Knoteneinbettungsalgorithmus, der zu einer effektiveren Erkennung von Gemeinschaften mit einer Clusterbildung im eingebetteten Raum führt."]} {"source": "A point cloud is an agile 3D representation, efficiently modeling an object's surface geometry.However, these surface-centric properties also pose challenges on designing tools to recognize and synthesize point clouds.This work presents a novel autoregressive model, PointGrow, which generates realistic point cloud samples from scratch or conditioned from given semantic contexts.Our model operates recurrently, with each point sampled according to a conditional distribution given its previously-generated points.Since point cloud object shapes are typically encoded by long-range interpoint dependencies, we augment our model with dedicated self-attention modules to capture these relations.Extensive evaluation demonstrates that PointGrow achieves satisfying performance on both unconditional and conditional point cloud generation tasks, with respect to fidelity, diversity and semantic preservation.Further, conditional PointGrow learns a smooth manifold of given images where 3D shape interpolation and arithmetic calculation can be performed inside.", "target": ["Ein autoregressives Deep-Learning-Modell zur Erzeugung verschiedener Punkt Clouds.", "Ein Ansatz zur Erzeugung von 3D-Formen als Punkt Clouds, der die lexikografische Ordnung von Punkten nach Koordinaten berücksichtigt und ein Modell zur Vorhersage von Punkten in dieser Reihenfolge trainiert.", "In diesem Beitrag wird ein generatives Modell für Punkt Clouds vorgestellt, das ein Pixel-RNN-ähnliches autoregressives Modell und ein Aufmerksamkeitsmodell zur Behandlung von Interaktionen über größere Entfernungen verwendet."]} {"source": "Reinforcement learning and evolutionary algorithms can be 
used to create sophisticated control solutions.Unfortunately, explaining how these solutions work can be difficult due to their \"black box\" nature.In addition, the time-extended nature of control algorithms often prevents direct application of explainability techniques used for standard supervised learning algorithms.This paper attempts to address explainability of blackbox control algorithms through six different techniques:1) Bayesian rule lists,2) Function analysis,3) Single time step integrated gradients,4) Grammar-based decision trees,5) Sensitivity analysis combined with temporal modeling with LSTMs, and6) Explanation templates.These techniques are tested on a simple 2d domain, where a simulated rover attempts to navigate through obstacles to reach a goal.For control, this rover uses an evolved multi-layer perceptron that maps an 8d field of obstacle and goal sensors to an action determining where it should go in the next time step.Results show that some simple insights in explaining the neural network are possible, but that good explanations are difficult.", "target": ["Beschreibt eine Reihe von Erklärungsmethoden, die auf einen einfachen neuronalen Netzcontroller für die Navigation angewendet werden.", "Dieser Beitrag bietet Einblicke und Erklärungen für das Problem, Erklärungen für ein mehrschichtiges Perzeptron zu liefern, das als inverser Controller für die Rover-Bewegung verwendet wird, sowie Ideen, wie man ein Black-Box-Modell erklären kann."]} {"source": "The Vision-and-Language Navigation (VLN) task entails an agent following navigational instruction in photo-realistic unknown environments.This challenging task demands that the agent be aware of which instruction was completed, which instruction is needed next, which way to go, and its navigation progress towards the goal.In this paper, we introduce a self-monitoring agent with two complementary components: (1) visual-textual co-grounding module to locate the instruction completed in the past, the instruction required for the next action, and the next moving direction from surrounding images and (2) progress monitor to ensure the grounded instruction correctly reflects the navigation progress.We test our self-monitoring agent on a standard benchmark and analyze our proposed approach through a series of ablation studies that elucidate the contributions of the primary components.Using our proposed method, we set the new state of the art by a significant margin (8% absolute increase in success rate on the unseen test set).Code is available at https://github.com/chihyaoma/selfmonitoring-agent.", "target": ["Wir schlagen einen selbstüberwachenden Agenten für die Seh- und Sprachnavigationsaufgabe vor.", "Eine Methode für die Navigation in Bild und Sprache, die den Fortschritt im Unterricht mit Hilfe eines Fortschrittsmonitors und eines visuell-textuellen Co-Grounding-Moduls verfolgt und bei Standard-Benchmarks gut abschneidet.", "In diesem Beitrag wird ein Modell für die Navigation durch Sehen und Sprache mit panoramischer visueller Aufmerksamkeit und einem zusätzlichen Verlust bei der Fortschrittskontrolle beschrieben, das Ergebnisse auf dem neuesten Stand der Technik liefert."]} {"source": "Environments in Reinforcement Learning (RL) are usually only partially observable.To address this problem, a possible solution is to provide the agent with information about past observations.While common methods represent this history using a Recurrent Neural Network (RNN), in this paper we propose an alternative representation which is 
based on the record of the past events observed in a given episode. Inspired by the human memory, these events describe only important changes in the environment and, in our approach, are automatically discovered using self-supervision. We evaluate our history representation method using two challenging RL benchmarks: some games of the Atari-57 suite and the 3D environment Obstacle Tower.Using these benchmarks we show the advantage of our solution with respect to common RNN-based approaches.", "target": ["Ereigniserkennung zur Darstellung der Historie für den Agenten in RL", "Die Autoren untersuchen das RL-Problem unter teilweise beobachteten Bedingungen und schlagen eine Lösung vor, die ein FFNN verwendet, aber eine Geschichtsdarstellung bietet und PPO übertrifft.", "In diesem Beitrag wird eine neue Methode zur Darstellung der Vergangenheit als Eingabe für einen RL-Agenten vorgeschlagen, die sich als besser erweist als PPO und eine RNN-Variante von PPO."]} {"source": "The unconditional generation of high fidelity images is a longstanding benchmark for testing the performance of image decoders.Autoregressive image models have been able to generate small images unconditionally, but the extension of these methods to large images where fidelity can be more readily assessed has remained an open problem.Among the major challenges are the capacity to encode the vast previous context and the sheer difficulty of learning a distribution that preserves both global semantic coherence and exactness of detail.To address the former challenge, we propose the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of image slices of equal size.The SPN compactly captures image-wide spatial dependencies and requires a fraction of the memory and the computation.To address the latter challenge, we propose to use multidimensional upscaling to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs.We evaluate SPNs on the unconditional generation of CelebAHQ of size 256 and of ImageNet from size 32 to 128.We achieve state-of-the-art likelihood results in multiple settings, set up new benchmark results in previously unexplored settings and are able to generate very high fidelity large scale samples on the basis of both datasets.", "target": ["Wir zeigen, dass autoregressive Modelle sehr realitätsnahe Bilder erzeugen können. 
", "Eine Architektur, die Decoder einsetzt, Decoder zur Größenanpassung und Decoder zur Tiefenanpassung, um das Problem des Erlernens von weitreichenden Abhängigkeiten in Bildern zu lösen, um Bilder mit hoher Wiedergabetreue zu erhalten.", "Dieser Beitrag befasst sich mit dem Problem der Generierung von Bildern mit hoher Genauigkeit und zeigt erfolgreich überzeugende Imagenet-Beispiele mit einer Auflösung von 128x128 für ein Likelihood-Density-Modell."]} {"source": "Real-world dynamical systems often consist of multiple stochastic subsystems that interact with each other.Modeling and forecasting the behavior of such dynamics are generally not easy, due to the inherent hardness in understanding the complicated interactions and evolutions of their constituents.This paper introduces the relational state-space model (R-SSM), a sequential hierarchical latent variable model that makes use of graph neural networks (GNNs) to simulate the joint state transitions of multiple correlated objects.By letting GNNs cooperate with SSM, R-SSM provides a flexible way to incorporate relational information into the modeling of multi-object dynamics.We further suggest augmenting the model with normalizing flows instantiated for vertex-indexed random variables and propose two auxiliary contrastive objectives to facilitate the learning.The utility of R-SSM is empirically evaluated on synthetic and real time series datasets.", "target": ["Ein tiefes hierarchisches Zustandsraummodell, in dem die Zustandsübergänge korrelierter Objekte durch graphische neuronale Netze koordiniert werden.", "Ein hierarchisches latentes Variablenmodell für sequenzielle dynamische Prozesse mehrerer Objekte, wenn jedes Objekt eine signifikante Stochastizität aufweist.", "Diese Arbeit stellt ein relationales Zustandsraummodell vor, das die gemeinsamen Zustandsübergänge korrelierter Objekte simuliert, die in einer Graphenstruktur hierarchisch koordiniert sind."]} {"source": "Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses).When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed.While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents.This paper proposes to add such inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated.Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.", "target": ["Wir führen eine neue induktive Verzerrung ein, die Baumstrukturen in rekurrente neuronale Netze integriert.", "In diesem Papier wird ON-LSTM vorgeschlagen, eine neue RNN-Einheit, die die latente Baumstruktur in rekurrente Modelle integriert und gute Ergebnisse bei der Sprachmodellierung, dem unbeaufsichtigten Parsing, der gezielten syntaktischen Auswertung und der logischen Inferenz erzielt."]} {"source": "Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures.A completely satisfactory explanation for their success remains elusive.Here, we present a novel explanation for the benefits of skip connections in training very deep 
networks.The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model.Several such singularities have been identified in previous works:(i) overlap singularities caused by the permutation symmetry of nodes in a given layer,(ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes,(iii) singularities generated by the linear dependence of the nodes.These singularities cause degenerate manifolds in the loss landscape that slow down learning.We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent.Moreover, for typical initializations, skip connections move the network away from the \"ghosts\" of these singularities and sculpt the landscape around them to alleviate the learning slow-down.These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.", "target": ["Degenerierte Mannigfaltigkeiten, die sich aus der Nicht-Identifizierbarkeit des Modells ergeben, verlangsamen das Lernen in tiefen Netzen; Skip-Verbindungen helfen, indem sie Degenerationen aufbrechen.", "Die Autoren zeigen, dass Eliminations- und Überlappungssingularitäten das Lernen in tiefen neuronalen Netzen behindern, und demonstrieren, dass Skip-Verbindungen die Häufigkeit dieser Singularitäten reduzieren und das Lernen beschleunigen können.", "Das Werk untersucht die Verwendung von Skip-Verbindungen in tiefen Netzwerken als eine Möglichkeit, Singularitäten in der Hessian-Matrix während des Trainings zu mildern."]} {"source": "Representation learning is a central challenge across a range of machine learning areas.In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems.Most prior work on representation learning has focused on generative approaches, learning representations that capture all the underlying factors of variation in the observation space in a more disentangled or well-ordered manner.In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making -- that are \"actionable\".These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, eliminating the need for explicit reconstruction.We show how these learned representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks.We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning.", "target": ["Lernen von Zustandsdarstellungen, die die für die Kontrolle notwendigen Faktoren erfassen.", "Ein Ansatz für das Repräsentationslernen im Rahmen des Reinforcement Learnings, der zwei Stufen funktional in Bezug auf die Aktionen unterscheidet, die erforderlich sind, um sie zu erreichen.", "In dem Werk wird eine Methode zum 
Erlernen von Darstellungen vorgestellt, bei denen die Nähe im euklidischen Abstand Zustände darstellt, die durch ähnliche Strategien erreicht werden."]} {"source": "We explore the behavior of a standard convolutional neural net in a setting that introduces classification tasks sequentially and requires the net to master new tasks while preserving mastery of previously learned tasks. This setting corresponds to that which human learners face as they acquire domain expertise, for example, as an individual reads a textbook chapter-by-chapter.Through simulations involving sequences of 10 related tasks, we find reason for optimism that nets will scale well as they advance from having a single skill to becoming domain experts.We observed two key phenomena.First, forward facilitation---the accelerated learning of task n+1 having learned n previous tasks---grows with n. Second, backward interference---the forgetting of the n previous tasks when learning task n+1---diminishes with n. Forward facilitation is the goal of research on metalearning, and reduced backward interference is the goal of research on ameliorating catastrophic forgetting.We find that both of these goals are attained simply through broader exposure to a domain.", "target": ["Wir untersuchen das Verhalten eines CNN bei der Bewältigung neuer Aufgaben unter Beibehaltung der Beherrschung bereits gelernter Aufgaben"]} {"source": "We demonstrate a low effort method that unsupervisedly constructs task-optimized embeddings from existing word embeddings to gain performance on a supervised end-task.This avoids additional labeling or building more complex model architectures by instead providing specialized embeddings better fit for the end-task(s).Furthermore, the method can be used to roughly estimate whether a specific kind of end-task(s) can be learned from, or is represented in, a given unlabeled dataset, e.g. using publicly available probing tasks.We evaluate our method for diverse word embedding probing tasks and by size of embedding training corpus -- i.e. 
to explore its use in reduced (pretraining-resource) settings.", "target": ["Morty überarbeitet vordefinierte Worteinbettungen, um entweder: (a) Verbesserung der gesamten Einbettungsleistung (für Multi-Task-Einstellungen) oder Verbesserung der Single-Task-Leistung, wobei nur minimaler Aufwand erforderlich ist."]} {"source": "Data augmentation is commonly used to encode invariances in learning methods.However, this process is often performed in an inefficient manner, as artificial examples are created by applying a number of transformations to all points in the training set.The resulting explosion of the dataset size can be an issue in terms of storage and training costs, as well as in selecting and tuning the optimal set of transformations to apply.In this work, we demonstrate that it is possible to significantly reduce the number of data points included in data augmentation while realizing the same accuracy and invariance benefits of augmenting the entire dataset.We propose a novel set of subsampling policies, based on model influence and loss, that can achieve a 90% reduction in augmentation set size while maintaining the accuracy gains of standard data augmentation.", "target": ["Das selektive Erweitern von schwer zu klassifizierenden Punkten führt zu einem effizienten Training.", "Die Autoren untersuchen das Problem der Identifizierung von Subsampling-Strategien für die Datenerweiterung und schlagen Strategien vor, die auf Modelleinfluss und -verlust sowie auf einem empirischen Benchmarking der vorgeschlagenen Methoden basieren.", "Die Autoren schlagen vor, Einfluss- oder verlustbasierte Methoden zu verwenden, um eine Teilmenge von Punkten auszuwählen, die zur Erweiterung von Datensätzen für Trainingsmodelle verwendet werden, wobei der Verlust additiv über die Datenpunkte ist."]} {"source": "Over the last few years exciting work in deep generative models has produced models able to suggest new organic molecules by generating strings, trees, and graphs representing their structure.While such models are able to generate molecules with desirable properties, their utility in practice is limited due to the difficulty in knowing how to synthesize these molecules.We therefore propose a new molecule generation model, mirroring a more realistic real-world process, where reactants are selected and combined to form more complex molecules.More specifically, our generative model proposes a bag of initial reactants (selected from a pool of commercially-available molecules) and uses a reaction model to predict how they react together to generate new molecules.Modeling the entire process of constructing a molecule during generation offers a number of advantages.First, we show that such a model has the ability to generate a wide, diverse set of valid and unique molecules due to the useful inductive biases of modeling reactions.Second, modeling synthesis routes rather than final molecules offers practical advantages to chemists who are not only interested in new molecules but also suggestions on stable and safe synthetic routes.Third, we demonstrate the capabilities of our model to also solve one-step retrosynthesis problems, predicting a set of reactants that can produce a target product.", "target": ["Ein tiefes generatives Modell für organische Moleküle, das zunächst Reaktionsbausteine erzeugt und diese dann mithilfe eines Reaktionsvorhersageprogramms kombiniert.", "Ein molekulares generatives Modell, das Moleküle in einem zweistufigen Prozess erzeugt und Synthesewege für die erzeugten Moleküle 
bereitstellt, so dass die Benutzer die synthetische Zugänglichkeit der erzeugten Verbindungen untersuchen können."]} {"source": "Deep neural networks are complex non-linear models used as predictive analytics tool and have demonstrated state-of-the-art performance on many classification tasks. However, they have no inherent capability to recognize when their predictions might go wrong.There have been several efforts in the recent past to detect natural errors i.e. misclassified inputs but these mechanisms pose additional energy requirements. To address this issue, we present a novel post-hoc framework to detect natural errors in an energy efficient way. We achieve this by appending relevant features based linear classifiers per class referred as Relevant features based Auxiliary Cells (RACs). The proposed technique makes use of the consensus between RACs appended at few selected hidden layers to distinguish the correctly classified inputs from misclassified inputs.The combined confidence of RACs is utilized to determine if classification should terminate at an early stage.We demonstrate the effectiveness of our technique on various image classification datasets such as CIFAR10, CIFAR100 and Tiny-ImageNet.Our results show that for CIFAR100 dataset trained on VGG16 network, RACs can detect 46% of the misclassified examples along with 12% reduction in energy compared to the baseline network while 69% of the examples are correctly classified.", "target": ["Verbesserung der Robustheit und Energieeffizienz eines tiefen neuronalen Netzes unter Verwendung der versteckten Darstellungen.", "Dieser Artikel zielt darauf ab, die Fehlklassifikationen von tiefen neuronalen Netzen auf energieeffiziente Weise zu reduzieren, indem relevante merkmalsbasierte Hilfszellen nach einer oder mehreren versteckten Schichten hinzugefügt werden, um zu entscheiden, ob die Klassifizierung vorzeitig beendet werden soll."]} {"source": "Many methods have been developed to represent knowledge graph data, which implicitly exploit low-rank latent structure in the data to encode known information and enable unknown facts to be inferred.To predict whether a relationship holds between entities, their embeddings are typically compared in the latent space following a relation-specific mapping.Whilst link prediction has steadily improved, the latent structure, and hence why such models capture semantic information, remains unexplained.We build on recent theoretical interpretation of word embeddings as a basis to consider an explicit structure for representations of relations between entities.For identifiable relation types, we are able to predict properties and justify the relative performance of leading knowledge graph representation methods, including their often overlooked ability to make independent predictions.", "target": ["Verständnis der Struktur von Wissensgraphen unter Verwendung von Erkenntnissen aus Worteinbettungen.", "In diesem Beitrag wird versucht, die latente Struktur zu verstehen, die den Methoden zur Einbettung von Wissensgraphen zugrunde liegt, und es wird gezeigt, dass die Fähigkeit eines Modells, einen Beziehungstyp darzustellen, von den Beschränkungen der Modellarchitektur in Bezug auf die Beziehungsbedingungen abhängt.", "Diese Arbeit schlägt eine detaillierte Studie über die Erklärbarkeit von Link-Prädiktionsmodellen (LP) vor, indem es eine neue Interpretation von Worteinbettungen verwendet, um ein besseres Verständnis der Modellleistung von LPs zu ermöglichen."]} {"source": "Many real-world applications 
involve multivariate, geo-tagged time series data: at each location, multiple sensors record corresponding measurements.For example, an air quality monitoring system records PM2.5, CO, etc.The resulting time-series data often has missing values due to device outages or communication errors.In order to impute the missing values, state-of-the-art methods are built on Recurrent Neural Networks (RNN), which process each time stamp sequentially, prohibiting the direct modeling of the relationship between distant time stamps.Recently, the self-attention mechanism has been proposed for sequence modeling tasks such as machine translation, significantly outperforming RNN because the relationship between each two time stamps can be modeled explicitly.In this paper, we are the first to adapt the self-attention mechanism for multivariate, geo-tagged time series data.In order to jointly capture the self-attention across different dimensions (i.e. time, location and sensor measurements) while keeping the size of attention maps reasonable, we propose a novel approach called Cross-Dimensional Self-Attention (CDSA) to process each dimension sequentially, yet in an order-independent manner.On three real-world datasets, including our newly collected NYC-traffic dataset, extensive experiments demonstrate the superiority of our approach compared to state-of-the-art methods for both imputation and forecasting tasks.", "target": ["Ein neuartiger Mechanismus zur Selbstbeobachtung für die Imputation multivariater, geogetaggter Zeitreihen.", "In diesem Beitrag wird das Problem der Anwendung des Transformatorennetzwerks auf räumlich-zeitliche Daten in einer rechnerisch effizienten Art und Weise behandelt und es werden Möglichkeiten zur Implementierung von 3D-Attention untersucht.", "In diesem Beitrag wird die Wirksamkeit von Transformationsmodellen für die Imputation von Zeitreihendaten über verschiedene Dimensionen der Eingabe empirisch untersucht."]} {"source": "The conversion of scanned documents to digital forms is performed using Optical Character Recognition (OCR) software.This work focuses on improving the quality of scanned documents in order to improve the OCR output.We create an end-to-end document enhancement pipeline which takes in a set of noisy documents and produces clean ones.Deep neural network based denoising auto-encoders are trained to improve the OCR quality.We train a blind model that works on different noise levels of scanned text documents.Results are shown for blurring and watermark noise removal from noisy scanned documents.", "target": ["Wir haben ein REDNET (ResNet Encoder-Decoder) mit 8 Skip-Verbindungen entwickelt und getestet, um Störungen aus Dokumenten zu entfernen, einschließlich Unschärfe und Wasserzeichen, was zu einem leistungsstarken tiefen Netzwerk für die Bereinigung von Dokumentenbildern führt. 
"]} {"source": "The existence of adversarial examples, or intentional mis-predictions constructed from small changes to correctly predicted examples, is one of the most significant challenges in neural network research today.Ironically, many new defenses are based on a simple observation - the adversarial inputs themselves are not robust and small perturbations to the attacking input often recover the desired prediction.While the intuition is somewhat clear, a detailed understanding of this phenomenon is missing from the research literature.This paper presents a comprehensive experimental analysis of when and why perturbation defenses work and potential mechanisms that could explain their effectiveness (or ineffectiveness) in different settings.", "target": ["Wir identifizieren eine Familie von Schutztechniken und zeigen, dass sowohl eine deterministische verlustbehaftete Kompression als auch zufällige Störungen der Eingabe zu einer ähnlichen Verbesserung der Robustheit führen.", "In diesem Beitrag wird erörtert, wie ein bestimmter gegnerischer Angriff destabilisiert werden kann, was gegnerische Bilder nicht robust macht und ob es für Angreifer möglich ist, ein universelles Modell von Störungen zu verwenden, um ihre gegnerischen Beispiele robust gegen solche Störungen zu machen.", "Das Papier untersucht die Robustheit von gegnerischen Angriffen gegenüber Transformationen ihrer Eingaben."]} {"source": "There is no consensus yet on the question whether adaptive gradient methods like Adam are easier to use than non-adaptive optimization methods like SGD.In this work, we fill in the important, yet ambiguous concept of ‘ease-of-use’ by defining an optimizer’s tunability: How easy is it to find good hyperparameter configurations using automatic random hyperparameter search?We propose a practical and universal quantitative measure for optimizer tunability that can form the basis for a fair optimizer benchmark. Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, we find that Adam is the most tunable for the majority of problems, especially with a low budget for hyperparameter tuning.", "target": ["Wir bieten eine Methode zum Benchmarking von Optimierern an, die den Prozess der Hyperparameterabstimmung berücksichtigt.", "Einführung einer neuartigen Metrik zur Erfassung der Abstimmbarkeit eines Optimierers und ein umfassender empirischer Vergleich von Deep-Learning-Optimierern bei unterschiedlichem Umfang der Hyperparameter-Abstimmung. ", "In diesem Beitrag wird ein einfaches Maß für die Abstimmbarkeit eingeführt, das den Vergleich von Optimierern unter Ressourcenbeschränkungen ermöglicht. 
Es zeigt sich, dass die Abstimmung der Lernrate von Adam-Optimierern am einfachsten ist, um gut funktionierende Hyperparameter-Konfigurationen zu finden."]} {"source": "The phase problem in diffraction physics is one of the oldest inverse problems in all of science.The central difficulty that any approach to solving this inverse problem must overcome is that half of the information, namely the phase of the diffracted beam, is always missing.In the context of electron microscopy, the phase problem is generally non-linear and solutions provided by phase-retrieval techniques are known to be poor approximations to the physics of electrons interacting with matter.Here, we show that a semi-supervised learning approach can effectively solve the phase problem in electron microscopy/scattering.In particular, we introduce a new Deep Neural Network (DNN), Y-net, which simultaneously learns a reconstruction algorithm via supervised training in addition to learning a physics-based regularization via unsupervised training.We demonstrate that this constrained, semi-supervised approach is an order of magnitude more data-efficient and accurate than the same model trained in a purely supervised fashion.In addition, the architecture of the Y-net model provides for a straightforward evaluation of the consistency of the model's prediction during inference and is generally applicable to the phase problem in other settings.", "target": ["Wir führen ein halbüberwachtes tiefes neuronales Netzwerk ein, um die Lösung des Phasenproblems in der Elektronenmikroskopie zu approximieren"]} {"source": "Word embeddings extract semantic features of words from large datasets of text.Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words.Here we propose word2net, a method that replaces their linear parametrization with neural networks.For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence.Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model.For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information.We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S.
Senate speeches.Quantitatively, we found that word2net outperforms popular embedding methods on predicting held-out words and that sharing parameters based on part of speech further boosts performance.Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information.", "target": ["Word2net ist eine neuartige Methode zum Erlernen neuronaler Netzwerkrepräsentationen von Wörtern, die syntaktische Informationen nutzen kann, um bessere semantische Merkmale zu lernen.", "Dieser Artikel erweitert SGNS mit einem architektonischen Wechsel von einem Bag-of-Words-Modell zu einem Feedforward-Modell und trägt eine neue Form der Regularisierung bei, indem es eine Teilmenge von Schichten zwischen verschiedenen assoziierten Netzwerken bindet.", "Eine Methode zur Verwendung nichtlinearer Kombination von Kontextvektoren zum Erlernen der Vektordarstellung von Wörtern, wobei die Hauptidee darin besteht, jede Worteinbettung durch ein neuronales Netz zu ersetzen."]} {"source": "A key goal in neuroscience is to understand brain mechanisms of cognitive functions.An emerging approach is to study “brain states” dynamics using functional magnetic resonance imaging (fMRI).So far in the literature, brain states have typically been studied using 30 seconds of fMRI data or more, and it is unclear to which extent brain states can be reliably identified from very short time series.In this project, we applied graph convolutional networks (GCN) to decode brain activity over short time windows in a task fMRI dataset, i.e. associate a given window of fMRI time series with the task used.Starting with a populational brain graph with nodes defined by a parcellation of cerebral cortex and the adjacency matrix extracted from the functional connectome, GCN takes a short series of fMRI volumes as input, generates high-level domain-specific graph representations, and then predicts the corresponding cognitive state.We investigated the performance of this GCN \"cognitive state annotation\" in the Human Connectome Project (HCP) database, which features 21 different experimental conditions spanning seven major cognitive domains, and high temporal resolution task fMRI data.Using a 10-second window, the 21 cognitive states were identified with an excellent average test accuracy of 89% (chance level 4.8%).As the HCP task battery was designed to selectively activate a wide range of specialized functional networks, we anticipate the GCN annotation to be applicable as a base model for other transfer learning applications, for instance, adapting to new task domains.", "target": ["Unter Verwendung eines 10-Sekunden-Fensters von fMRI-Signalen identifizierte unser GCN-Modell 21 verschiedene Aufgabenbedingungen aus dem HCP-Datensatz mit einer Testgenauigkeit von 89 %."]} {"source": "Modern deep neural networks (DNNs) have high memory consumption and large computational loads.
In order to deploy DNN algorithms efficiently on edge or mobile devices, a series of DNN compression algorithms have been explored, including the line of work on factorization methods.Factorization methods approximate the weight matrix of a DNN layer with the product of two or more low-rank matrices.However, it is hard to measure the ranks of DNN layers during the training process.Previous works mainly induce low-rank through implicit approximations or via a costly singular value decomposition (SVD) process on every training step.The former approach usually induces a high accuracy loss while the latter prevents DNN factorization from efficiently reaching a high compression rate.In this work, we propose SVD training, which first applies SVD to decompose DNN's layers and then performs training on the full-rank decomposed weights.To improve the training quality and convergence, we add orthogonality regularization to the singular vectors, which ensures the valid form of SVD and avoids gradient vanishing/exploding.Low-rank is encouraged by applying sparsity-inducing regularizers on the singular values of each layer.Singular value pruning is applied at the end to reach a low-rank model.We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a higher reduction in computation load under the same accuracy, compared not only to previous factorization methods but also to state-of-the-art filter pruning methods.", "target": ["Effiziente Induktion von tiefen neuronalen Netzen mit niedrigem Rang durch SVD-Training mit spärlichen Singulärwerten und orthogonalen Singulärvektoren.", "In diesem Artikel wird ein Ansatz zur Netzwerkkompression vorgestellt, bei dem die Gewichtsmatrix in jeder Schicht einen niedrigen Rang hat und die Gewichtsmatrizen explizit in eine SVD-ähnliche Faktorisierung zur Behandlung als neue Parameter faktorisiert werden.", "Vorschlag, jede Schicht eines tiefen neuronalen Netzes vor dem Training mit einer Low-Rank-Matrixzerlegung zu parametrisieren, die Convolutions durch zwei aufeinanderfolgende Convolutions zu ersetzen und dann die zerlegte Methode zu trainieren."]} {"source": "The recent rise in popularity of few-shot learning algorithms has enabled models to quickly adapt to new tasks based on only a few training samples.Previous few-shot learning works have mainly focused on classification and reinforcement learning.
In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks.Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of appropriate basis functions.This enables a few labelled samples to approximate the function.We design a Feature Extractor network to encode basis functions for a task distribution, and a Weights Generator to generate the weight vector for a novel task.We show that our model outperforms the current state-of-the-art meta-learning methods in various regression tasks.", "target": ["Wir schlagen ein Few-Shot Learning Modell vor, das speziell auf Regressionsaufgaben zugeschnitten ist.", "In dieser Arbeit wird ein neues Few-Shot-Learning-Verfahren für Regressionsprobleme mit kleinen Stichproben vorgeschlagen.", "Eine Methode, die mit wenigen Beispielen ein Regressionsmodell erlernt und andere Methoden übertrifft."]} {"source": "Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as a distribution over a predetermined set of visual classes.However, such an assumption implies unavoidable and often unnoticeable failures in the presence of out-of-distribution (OOD) input.These failures are bound to happen in most real-life applications since current visual ontologies are far from being comprehensive.We propose to address this issue by discriminative detection of OOD pixels in input data.Different from recent approaches, we avoid making any decisions by only observing the training dataset of the primary model trained to solve the desired computer vision task.Instead, we train a dedicated OOD model which discriminates the primary training set from a much larger \"background\" dataset which approximates the variety of the visual world.We perform our experiments on high resolution natural images in a dense prediction setup.We use several road driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset.We evaluate our approach on the WildDash test, which is currently the only public test dataset with out-of-distribution images.The obtained results show that the proposed approach succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin.", "target": ["Wir präsentieren einen neuen Ansatz zur Erkennung von Pixeln, die nicht in der Verteilung liegen, bei der semantischen Segmentierung.", "Diese Arbeit befasst sich mit der Erkennung von Verteilungsfehlern, um den Segmentierungsprozess zu unterstützen, und schlägt einen Ansatz für das Training eines binären Klassifizierers vor, der Bildfelder aus einer bekannten Gruppe von Klassen von denen einer unbekannten unterscheidet.", "Diese Arbeit zielt auf die Erkennung von Pixeln außerhalb der Verteilung für die semantische Segmentierung ab, und diese Arbeit nutzt Daten aus anderen Bereichen, um unbestimmte Klassen zu erkennen und die Unsicherheit besser zu modellieren."]} {"source": "Network quantization is one of the most hardware-friendly techniques to enable the deployment of convolutional neural networks (CNNs) on low-power mobile devices.Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy.The quantization bitwidth or bit number (QBN) directly decides the
inference accuracy, latency, energy and hardware overhead.To effectively reduce the redundancy and accelerate CNN inferences, various weight kernels should be quantized with different QBNs.However, prior works use only one QBN to quantize each convolutional layer or the entire CNN, because the design space of searching a QBN for each weight kernel is too large.The hand-crafted heuristic of the kernel-wise QBN search is so sophisticated that domain experts can obtain only sub-optimal results.It is difficult for even deep reinforcement learning (DRL) DDPG-based agents to find a kernel-wise QBN configuration that can achieve reasonable inference accuracy.In this paper, we propose a hierarchical-DRL-based kernel-wise network quantization technique, AutoQ, to automatically search a QBN for each weight kernel, and choose another QBN for each activation layer.Compared to the models quantized by the state-of-the-art DRL-based schemes, on average, the same models quantized by AutoQ reduce the inference latency by 54.06%, and decrease the inference energy consumption by 50.69%, while achieving the same inference accuracy.", "target": ["Genaue, schnelle und automatisierte Quantisierung neuronaler Netze nach dem Kernelprinzip mit gemischter Präzision unter Verwendung von hierarchischem Deep Reinforcement Learning", "Eine Methode zur Quantisierung von Gewichten und Aktivierungen neuronaler Netze, die tiefes Reinforcement Learning verwendet, um die Bitbreite für einzelne Kernel in einer Schicht auszuwählen, und die eine bessere Leistung bzw. Latenz als frühere Ansätze erzielt.", "In diesem Artikel wird vorgeschlagen, automatisch nach Quantisierungsschemata für jeden Kernel im neuronalen Netz zu suchen, wobei hierarchisches RL zur Steuerung der Suche verwendet wird. 
"]} {"source": "Recent visual analytics systems make use of multiple machine learning models to better fit the data as opposed to traditional single, pre-defined model systems.However, while multi-model visual analytic systems can be effective, their added complexity poses usability concerns, as users are required to interact with the parameters of multiple models.Further, the advent of various model algorithms and associated hyperparameters creates an exhaustive model space to sample models from.This poses complexity to navigate this model space to find the right model for the data and the task.In this paper, we present Gaggle, a multi-model visual analytic system that enables users to interactively navigate the model space.Further translating user interactions into inferences, Gaggle simplifies working with multiple models by automatically finding the best model from the high-dimensional model space to support various user tasks.Through a qualitative user study, we show how our approach helps users to find a best model for a classification and ranking task.The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation and automated model selection without requiring any technical expertise from users.", "target": ["Gaggle, ein interaktives visuelles Analysesystem, das den Benutzern hilft, sich interaktiv im Modellraum für Klassifizierungs- und Rankingaufgaben zu bewegen.", "Ein neues visuelles Analysesystem, das es auch unerfahrenen Nutzern ermöglichen soll, interaktiv in einem Modellraum zu navigieren, indem es einen demonstrationsbasierten Ansatz verwendet.", "Ein visuelles Analysesystem, das unerfahrenen Analysten hilft, sich im Modellraum zurechtzufinden, um Klassifizierungs- und Rangordnungsaufgaben durchzuführen."]} {"source": "Chinese text classification has received more and more attention today.However, the problem of Chinese text representation still hinders the improvement of Chinese text classification, especially the polyphone and the homophone in social media.To cope with it effectively, we propose a new structure, the Extractor, based on attention mechanisms and design novel attention networks named Extractor-attention network (EAN).Unlike most of previous works, EAN uses a combination of a word encoder and a Pinyin character encoder instead of a single encoder.It improves the capability of Chinese text representation.Moreover, compared with the hybrid encoder methods, EAN has more complex combination architecture and more reducing parameters structures.Thus, EAN can take advantage of a large amount of information that comes from multi-inputs and alleviates efficiency issues.The proposed model achieves the state of the art results on 5 large datasets for Chinese text classification.", "target": ["Wir schlagen ein neuartiges Aufmerksamkeitsnetz mit dem Hybird-Encoder vor, um das Problem der Textdarstellung bei der Klassifizierung chinesischer Texte zu lösen, insbesondere die sprachlichen Phänomene der Aussprache wie Polyphone und Homophone.", "Dieses Papier schlägt ein aufmerksamkeitsbasiertes Modell vor, das aus einem Wort-Encoder und einem Pinyin-Encoder für die Klassifizierung chinesischer Texte besteht, und erweitert die Architektur um den Pinyin-Zeichen-Encoder.", "Vorschlag für ein Aufmerksamkeitsnetz, bei dem sowohl Wort als auch Pinyin für die chinesische Repräsentation berücksichtigt werden, mit verbesserten Ergebnissen, die in mehreren Datensätzen für die Textklassifizierung gezeigt wurden."]} {"source": "Recent 
advances in learning from demonstrations (LfD) with deep neural networks have enabled learning complex robot skills that involve high dimensional perception such as raw image inputs. LfD algorithms generally assume learning from single task demonstrations.In practice, however, it is more efficient for a teacher to demonstrate a multitude of tasks without careful task set up, labeling, and engineering.Unfortunately in such cases, traditional imitation learning techniques fail to represent the multi-modal nature of the data, and often result in sub-optimal behavior.In this paper we present an LfD approach for learning multiple modes of behavior from visual data.Our approach is based on a stochastic deep neural network (SNN), which represents the underlying intention in the demonstration as a stochastic activation in the network.We present an efficient algorithm for training SNNs, and for learning with vision inputs, we also propose an architecture that associates the intention with a stochastic attention module.We demonstrate our method on real robot visual object reaching tasks, and show that it can reliably learn the multiple behavior modes in the demonstration data.Video results are available at https://vimeo.com/240212286/fd401241b9.", "target": ["Multimodales Imitation Learning aus unstrukturierten Demonstrationen unter Verwendung stochastischer neuronaler Netzmodelle. ", "Ein neuer, auf Stichproben basierender Ansatz für die Inferenz in latenten Variablenmodellen, der auf multimodales Imitation Learning anwendbar ist und besser funktioniert als deterministische neuronale Netze und stochastische neuronale Netze für eine reale visuelle Robotikaufgabe.", "Dieser Artikel zeigt, wie man mehrere Modalitäten durch Imitation Learning aus visuellen Daten mit Hilfe stochastischer neuronaler Netze erlernen kann, und eine Methode zum Lernen aus Demonstrationen, bei denen mehrere Modalitäten derselben Aufgabe gegeben sind."]} {"source": "The interpretability of neural networks has become crucial for their applications in the real world with respect to reliability and trustworthiness.Existing explanation generation methods usually provide important features by scoring their individual contributions to the model prediction and ignore the interactions between features, which eventually provides a bag-of-words representation as an explanation.In natural language processing, this type of explanation is challenging for a human user to understand the meaning of an explanation and draw the connection between explanation and model prediction, especially for long texts.In this work, we focus on detecting the interactions between features, and propose a novel approach to build a hierarchy of explanations based on feature interactions.The proposed method is evaluated with three neural classifiers, LSTM, CNN, and BERT, on two benchmark text classification datasets.The generated explanations are assessed by both automatic evaluation measurements and human evaluators.Experiments show the effectiveness of the proposed method in providing explanations that are both faithful to models, and understandable to humans.", "target": ["Ein neuartiger Ansatz zur Konstruktion hierarchischer Erklärungen für die Textklassifizierung durch Erkennung von Merkmalsinteraktionen.", "Eine neuartige Methode zur Bereitstellung von Erklärungen für Vorhersagen, die von Textklassifizierern getroffen wurden, die bei der Bewertung der Wichtigkeit auf Wortebene besser abschneidet, sowie eine neue Metrik, der Kohäsionsverlust, zur Bewertung der
Wichtigkeit auf Bereichsebene.", "Eine Interpretationsmethode auf der Grundlage von Merkmalsinteraktionen und Merkmalswichtigkeitspunkten im Vergleich zu unabhängigen Merkmalsbeiträgen."]} {"source": "Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources.In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time.FBS introduces small auxiliary connections to existing convolutional layers.In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels.FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs.We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification.Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.", "target": ["Wir sorgen dafür, dass Convolutional Layers schneller laufen, indem wir Kanäle bei der Merkmalsberechnung dynamisch verstärken und unterdrücken.", "Eine Methode zur Verstärkung und Unterdrückung von Merkmalen für dynamisches Channel Pruning, die die Bedeutung jedes Kanals vorhersagt und dann eine affine Funktion zur Verstärkung/Unterdrückung der Kanalbedeutung verwendet.", "Vorschlag für eine Channel Pruning Methode zur dynamischen Auswahl von Kanälen während der Prüfung."]} {"source": "We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training.Our results indicate that convolutional neural networks can operate without any loss of accuracy at less than 0.5% classification layer connection density, or less than 5% overall network connection density.We also investigate the effects of pre-defining the sparsity of networks with only fully connected layers.Based on our sparsifying technique, we introduce the `scatter' metric to characterize the quality of a particular connection pattern.As proof of concept, we show results on CIFAR, MNIST and a new dataset on classifying Morse code symbols, which highlights some interesting trends and limits of sparse connection patterns.", "target": ["Neuronale Netze können so vordefiniert werden, dass sie ohne Leistungseinbußen eine spärliche Konnektivität aufweisen.", "Diese Arbeit untersucht spärliche Verbindungsmuster in den oberen Schichten von Convolutional Bildklassifikations Netzwerken und stellt Heuristiken zur Verteilung von Verbindungen zwischen Fenstern/Gruppen und ein Maß namens Streuung zur Konstruktion von Konnektivitätsmasken vor.", "Vorschlag zur Verringerung der Anzahl von Parametern, die von einem tiefen Netz gelernt werden, durch die Einrichtung von spärlichen Verbindungsgewichten in Klassifizierungsschichten und der Einführung eines Konzepts der \"Streuung\"."]} {"source": "Deep neural networks are vulnerable to adversarial examples, which becomes one of the most important problems in the development of deep learning.While a 
lot of effort has been made in recent years, it is of great significance to perform correct and complete evaluations of the adversarial attack and defense algorithms.In this paper, we establish a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks.After briefly reviewing plenty of representative attack and defense methods, we perform large-scale experiments with two robustness curves as the fair-minded evaluation criteria to fully understand the performance of these methods.Based on the evaluation results, we present several important findings and provide insights for future research.", "target": ["Wir bieten ein umfassendes, strenges und kohärentes Benchmarking zur Bewertung der adversarial Robustheit von Deep-Learning-Modellen.", "In diesem Artikel werden verschiedene Arten von Klassifizierungsmodellen unter verschiedenen Angriffsmethoden bewertet.", "Eine groß angelegte empirische Studie, in der verschiedene Angriffs- und Verteidigungstechniken miteinander verglichen werden, sowie die Verwendung von Kurven für die Genauigkeit im Vergleich zum Störungsbudget und die Genauigkeit im Vergleich zur Angriffsstärke zur Bewertung von Angriffen und Verteidigungen."]} {"source": "We propose a modification to traditional Artificial Neural Networks (ANNs), which provides the ANNs with new aptitudes motivated by biological neurons. Biological neurons work far beyond linearly summing up synaptic inputs and then transforming the integrated information. A biological neuron changes firing modes according to peripheral factors (e.g., neuromodulators) as well as intrinsic ones. Our modification connects a new type of ANN nodes, which mimic the function of biological neuromodulators and are termed modulators, to enable other traditional ANN nodes to adjust their activation sensitivities at run-time based on their input patterns. In this manner, we enable the slope of the activation function to be context dependent.
This modification produces statistically significant improvements in comparison with traditional ANN nodes in the context of Convolutional Neural Networks and Long Short-Term Memory networks.", "target": ["Wir schlagen eine Modifikation traditioneller Künstlicher Neuronaler Netze vor, die durch die Biologie der Neuronen motiviert ist und es ermöglicht, die Form der Aktivierungsfunktion kontextabhängig zu gestalten.", "Eine Methode zur Skalierung der Aktivierungen einer Schicht von Neuronen in einem ANN in Abhängigkeit von den Eingaben in diese Schicht, die Verbesserungen gegenüber den Grundlinien aufweist.", "Einführung einer architektonischen Änderung für Basisneuronen in einem neuronalen Netz und die Idee, die Ausgabe der Linearkombinationen der Neuronen mit einem Modulator zu multiplizieren, bevor sie in die Aktivierungsfunktion eingespeist wird."]} {"source": "In this work, we study how the large-scale pretrain-finetune framework changes the behavior of a neural language generator.We focus on the transformer encoder-decoder model for the open-domain dialogue response generation task.We find that after standard fine-tuning, the model forgets important language generation skills acquired during large-scale pre-training.We demonstrate the forgetting phenomenon through a detailed behavior analysis from the perspectives of context sensitivity and knowledge transfer.Adopting the concept of data mixing, we propose an intuitive fine-tuning strategy named \"mix-review\".We find that mix-review effectively regularizes the fine-tuning process, and the forgetting problem is largely alleviated.Finally, we discuss interesting behavior of the resulting dialogue model and its implications.", "target": ["Wir identifizieren das Problem des Vergessens beim Fine-Tuning von vortrainierten NLG-Modellen und schlagen eine Mix-Review-Strategie vor, um dieses Problem zu lösen.", "In dieser Arbeit wird das Problem des Vergessens im Rahmen des Pretraining Fine-Tunings aus der Perspektive der Kontextsensitivität und des Wissenstransfers analysiert und eine Fine-Tuningstrategie vorgeschlagen, die die Methode des Gewichtsabfalls übertrifft.", "Untersuchung des Vergessensproblems im Pretrain-Finetuning Framework, insbesondere bei Aufgaben zur Erzeugung von Dialogantworten, und Vorschlag einer Mix-Review-Strategie, um das Vergessensproblem zu mildern."]} {"source": "Combining domain knowledge models with neural models has been challenging. End-to-end trained neural models often perform better (lower Mean Square Error) than domain knowledge models or domain/neural combinations, and the combination is inefficient to train. In this paper, we demonstrate that by composing domain models with machine learning models, by using extrapolative testing sets, and invoking decorrelation objective functions, we create models which can predict more complex systems.The models are interpretable, extrapolative, data-efficient, and capture predictable but complex non-stochastic behavior such as unmodeled degrees of freedom and systemic measurement noise. We apply this improved modeling paradigm to several simulated systems and an actual physical system in the context of system identification. Several ways of composing domain models with neural models are examined for time series, boosting, bagging, and auto-encoding on various systems of varying complexity and non-linearity.
Although this work is preliminary, we show that the ability to combine models is a very promising direction for neural modeling.", "target": ["Verbesserte Modellierung komplexer Systeme unter Verwendung hybrider neuronaler/domänenbezogener Modellkompositionen, neuer Dekorrelationsverlustfunktionen und extrapolativer Testsätze.", "In diesem Beitrag werden Experimente durchgeführt, um die extrapolativen Vorhersagen verschiedener hybrider Modelle, die physikalische Modelle, neuronale Netze und stochastische Modelle umfassen, zu vergleichen und die Herausforderung der nicht modellierten Dynamik als Engpass zu bewältigen.", "In diesem Beitrag werden Ansätze zur Kombination von neuronalen Netzen mit Nicht-NN-Modellen vorgestellt, um das Verhalten komplexer physikalischer Systeme vorherzusagen."]} {"source": "Humans can learn task-agnostic priors from interactive experience and utilize the priors for novel tasks without any finetuning.In this paper, we propose Scoring-Aggregating-Planning (SAP), a framework that can learn task-agnostic semantics and dynamics priors from arbitrary quality interactions as well as the corresponding sparse rewards and then plan on unseen tasks in zero-shot condition.The framework finds a neural score function for local regional state and action pairs that can be aggregated to approximate the quality of a full trajectory; moreover, a dynamics model that is learned with self-supervision can be incorporated for planning.Many of previous works that leverage interactive data for policy learning either need massive on-policy environmental interactions or assume access to expert data while we can achieve a similar goal with pure off-policy imperfect data.Instantiating our framework results in a generalizable policy to unseen tasks.Experiments demonstrate that the proposed method can outperform baseline methods on a wide range of applications including gridworld, robotics tasks and video games.", "target": ["Wir lernen dichte Werte und ein dynamisches Modell als Priors aus Explorationsdaten und verwenden sie, um eine gute Strategie für neue Aufgaben in Zero-Shot Bedingungen zu entwickeln.", "In diesem Beitrag wird die Verallgemeinerung von Zero Shots auf neue Umgebungen diskutiert und ein Ansatz mit Ergebnissen zu Grid-World, Super Mario Bros und 3D Robotics vorgeschlagen.", "Eine Methode, die darauf abzielt, aufgabenagnostische Prioritäten für die Zero-Shot-Generalisierung zu erlernen, mit der Idee, einen Modellierungsansatz zusätzlich zum modellbasierten RL-Framework einzusetzen."]} {"source": "Particle-based inference algorithm is a promising method to efficiently generate samples for an intractable target distribution by iteratively updating a set of particles.As a noticeable example, Stein variational gradient descent (SVGD) provides a deterministic and computationally efficient update, but it is known to underestimate the variance in high dimensions, the mechanism of which is poorly understood.In this work we explore a connection between SVGD and MMD-based inference algorithm via Stein's lemma.By comparing the two update rules, we identify the source of bias in SVGD as a combination of high variance and deterministic bias, and empirically demonstrate that the removal of either factors leads to accurate estimation of the variance.In addition, for learning high-dimensional Gaussian target, we analytically derive the converged variance for both algorithms, and confirm that only SVGD suffers from the \"curse of dimensionality\".", "target": ["Analyse der zugrundeliegenden 
Mechanismen des Varianzkollapses von SVGD in hohen Dimensionen."]} {"source": "We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.", "target": ["Neudenken der Verallgemeinerung erfordert ein Überdenken alter Ideen: Ansätze der statistischen Mechanik und komplexes Lernverhalten", "Die Autoren schlagen vor, dass Ideen aus der statistischen Mechanik zum Verständnis der Generalisierungseigenschaften von tiefen neuronalen Netzen beitragen, und stellen einen Ansatz vor, der starke qualitative Beschreibungen empirischer Ergebnisse in Bezug auf tiefe neuronale Netze und Lernalgorithmen liefert.", "Eine Reihe von Ideen zum theoretischen Verständnis der Verallgemeinerungseigenschaften von mehrschichtigen neuronalen Netzen und eine qualitative Analogie zwischen dem Verhalten beim Deep Learning und den Ergebnissen der quantitativen statistischen Physikanalyse von ein- und zweischichtigen neuronalen Netzen."]} {"source": "Computations for the softmax function in neural network models are expensive when the number of output classes is large.This can become a significant issue in both training and inference for such models.In this paper, we present Doubly Sparse Softmax (DS-Softmax), Sparse Mixture of Sparse Experts, to improve the efficiency of softmax inference.During training, our method learns a two-level class hierarchy by dividing the entire output class space into several partially overlapping experts.Each expert is responsible for a learned subset of the output class space and each output class only belongs to a small number of those experts.During inference, our method quickly locates the most probable expert to compute a small-scale softmax.Our method is learning-based and requires no knowledge of the output class partition space a priori.We empirically evaluate our method on several real-world tasks and demonstrate that we can achieve significant computation reductions without loss of performance.", "target": ["Wir präsentieren doppelt spärlichen Softmax, die spärliche Mischung aus spärlichen Experten, um die Effizienz der Softmax-Inferenz durch Ausnutzung der zweistufigen überlappenden Hierarchie zu verbessern.
", "In diesem Beitrag wird eine schnelle Annäherung an die Softmax-Berechnung vorgeschlagen, wenn die Anzahl der Klassen sehr groß ist.", "In diesem Artikel wird eine spärliche Mischung aus spärlichen Experten vorgeschlagen, die eine zweistufige Klassenhierarchie für eine effiziente Softmax-Inferenz erlernt."]} {"source": "Supervised machine learning models for high-value computer vision applications such as medical image classification often require large datasets labeled by domain experts, which are slow to collect, expensive to maintain, and static with respect to changes in the data distribution.In this context, we assess the utility of observational supervision, where we take advantage of passively-collected signals such as eye tracking or “gaze” data, to reduce the amount of hand-labeled data needed for model training.Specifically, we leverage gaze information to directly supervise a visual attention layer by penalizing disagreement between the spatial regions the human labeler looked at the longest and those that most heavily influence model output.We present evidence that constraining the model in this way can reduce the number of labeled examples required to achieve a given performance level by as much as 50%, and that gaze information is most helpful on more difficult tasks.", "target": ["Wir untersuchen die Verwendung von passiv gesammelten Eye-Tracking-Daten, um die Menge an markierten Daten, die während des Trainings benötigt werden, zu reduzieren.", "Eine Methode zur Nutzung von Blickinformationen, um die Komplexität der Stichprobe eines Modells und den erforderlichen Beschriftungsaufwand zu reduzieren, um eine Zielleistung zu erreichen, mit verbesserten Ergebnissen bei mittelgroßen Stichproben und schwierigeren Aufgaben.", "Eine Methode zur Einbeziehung von Blicksignalen in Standard-CNNs für die Bildklassifizierung durch Hinzufügen eines Verlustfunktionsterms, der auf der Differenz zwischen der Klassenaktivierungskarte des Modells und der aus Blickverfolgungsinformationen konstruierten Zuordnungen basiert."]} {"source": "We study the robustness to symmetric label noise of GNNs training procedures.By combining the nonlinear neural message-passing models (e.g. Graph Isomorphism Networks, GraphSAGE, etc.) 
with loss correction methods, we present a noise-tolerant approach for the graph classification task.Our experiments show that test accuracy can be improved under the artificial symmetric noisy setting.", "target": ["Wir wenden eine Verlustkorrektur auf graphische neuronale Netze an, um ein Modell zu trainieren, das robuster gegen Rauschen ist.", "Diese Arbeit führt eine Verlustkorrektur für Graph Neural Networks ein, um mit symmetrischem Graph Label Störungen umzugehen, und konzentriert sich auf eine Graph-Klassifikationsaufgabe.", "In diesem Beitrag wird die Verwendung eines Rauschkorrekturverlustes im Zusammenhang mit neuronalen Graphennetzen vorgeschlagen, um mit verrauschten Etiketten umzugehen."]} {"source": "Through many recent advances in graph representation learning, performance achieved on tasks involving graph-structured data has substantially increased in recent years---mostly on tasks involving node-level predictions.The setup of prediction tasks over entire graphs (such as property prediction for a molecule, or side-effect prediction for a drug), however, proves to be more challenging, as the algorithm must combine evidence about several structurally relevant patches of the graph into a single prediction.Most prior work attempts to predict these graph-level properties while considering only one graph at a time---not allowing the learner to directly leverage structural similarities and motifs across graphs.Here we propose a setup in which a graph neural network receives pairs of graphs at once, and extend it with a co-attentional layer that allows node representations to easily exchange structural information across them.We first show that such a setup provides natural benefits on a pairwise graph classification task (drug-drug interaction prediction), and then expand to a more generic graph regression setup: enhancing predictions over QM9, a standard molecular prediction benchmark.Our setup is flexible, powerful and makes no assumptions about the underlying dataset properties, beyond anticipating the existence of multiple training graphs.", "target": ["Wir verwenden Graph Co-Attention in einem gepaarten Graph-Trainingssystem für Graph-Klassifikation und -Regression.", "In diesem Artikel wird ein Multi-Head-Co-Attention-Mechanismus in GCN eingeführt, der es einem Medikament ermöglicht auf ein anderes zu folgen, während der Vorhersage von Nebenwirkungen eines Medikaments.", "Eine Methode zur Erweiterung des graphenbasierten Lernens mit einer co-attentionalen Schicht, die bei einer paarweisen Graphenklassifizierungsaufgabe andere frühere Methoden übertrifft."]} {"source": "In this paper we study image captioning as a conditional GAN training, proposing both a context-aware LSTM captioner and co-attentive discriminator, which enforces semantic alignment between images and captions.We investigate the viability of two discrete GAN training methods: Self-critical Sequence Training (SCST) and Gumbel Straight-Through (ST) and demonstrate that SCST shows more stable gradient behavior and improved results over Gumbel ST.", "target": ["Bildbeschriftung als bedingtes GAN-Training mit neuartigen Architekturen, untersucht auch zwei diskrete GAN-Trainingsmethoden. 
", "Ein verbessertes GAN-Modell für Bildbeschriftungen, das einen kontextabhängigen LSTM-Beschrifter vorschlägt, einen stärkeren co-attentiven Diskriminator mit besserer Leistung einführt und SCST für das GAN-Training verwendet."]} {"source": "We present Newtonian Monte Carlo (NMC), a method to improve Markov Chain Monte Carlo (MCMC) convergence by analyzing the first and second order gradients of the target density to determine a suitable proposal density at each point.Existing first order gradient-based methods suffer from the problem of determining an appropriate step size.Too small a step size and it will take a large number of steps to converge, while a very large step size will cause it to overshoot the high density region.NMC is similar to the Newton-Raphson update in optimization where the second order gradient is used to automatically scale the step size in each dimension.However, our objective is not to find a maxima but instead to find a parameterized density that can best match the local curvature of the target density. This parameterized density is then used as a single-site Metropolis-Hastings proposal.As a further improvement on first order methods, we show that random variables with constrained supports don't need to be transformed before taking a gradient step.NMC directly matches constrained random variables to a proposal density with the same support thus keeping the curvature of the target density intact.We demonstrate the efficiency of NMC on a number of different domains.For statistical models where the prior is conjugate to the likelihood, our method recovers the posterior quite trivially in one step.However, we also show results on fairly large non-conjugate models, where NMC performs better than adaptive first order methods such as NUTS or other inexact scalable inference methods such as Stochastic Variational Inference or bootstrapping.", "target": ["Ausnutzung der Krümmung, damit MCMC-Methoden schneller konvergieren als der Stand der Technik."]} {"source": "Neural Tangents is a library designed to enable research into infinite-width neural networks.It provides a high-level API for specifying complex and hierarchical neural network architectures.These networks can then be trained and evaluated either at finite-width as usual or in their infinite-width limit.Infinite-width networks can be trained analytically using exact Bayesian inference or using gradient descent via the Neural Tangent Kernel.Additionally, Neural Tangents provides tools to study gradient descent training dynamics of wide but finite networks in either function space or weight space. The entire library runs out-of-the-box on CPU, GPU, or TPU.All computations can be automatically distributed over multiple accelerators with near-linear scaling in the number of devices. 
Neural Tangents is available at https://www.github.com/google/neural-tangents. We also provide an accompanying interactive Colab notebook at https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb", "target": ["Keras für unendliche neuronale Netze."]} {"source": "Deep neural networks have achieved great success in classification tasks in recent years.However, one major problem on the path towards artificial intelligence is the inability of neural networks to accurately detect samples from novel class distributions and therefore, most existing classification algorithms assume that all classes are known prior to the training stage.In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on the test examples from known classes.Based on the Outlier Exposure (OE) technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE both on image and text classification tasks.Additionally, the way this method was constructed makes it suitable for training any classification algorithm that is based on Maximum Likelihood methods.", "target": ["Wir schlagen eine neuartige Verlustfunktion vor, die sowohl bei Bild- als auch bei Textklassifizierungsaufgaben modernste Ergebnisse in der Out-of-Distribution-Erkennung mit Outlier Exposure erzielt.", "In diesem Beitrag werden die Probleme der Erkennung von Ausreißern und der Modellkalibrierung durch die Anpassung der Verlustfunktion der Ausreißer-Expositions-Technik angegangen. Die Ergebnisse zeigen eine höhere Leistung als OE bei Bildverarbeitungs- und Textbenchmarks sowie eine verbesserte Modellkalibrierung.", "Vorschlag für eine neue Verlustfunktion zum Trainieren des Netzes mit Outlier Exposure, die im Vergleich zu einfachen Verlustfunktionen mit KL-Divergenz zu einer besseren OOD-Erkennung führt."]} {"source": "Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map.The precise form of this representation is often considered to be a metric representation of space.An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks.Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment.To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q learning stage that controls the network's output.We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state.These structures induce bias onto the Q-Learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities.Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes -- which can be shaped by pre-training and analyzed using dynamical systems methods.Furthermore, we demonstrate that non-metric representations are useful for navigation tasks.", "target": ["Aufgabenunabhängiges Vortraining kann die Attraktor Landschaft von RNN formen und verschiedene induktive Vorlieben für verschiedene Navigationsaufgaben ausbilden.", "In
dieser Arbeit werden die internen Repräsentationen von rekurrenten neuronalen Netzen untersucht, die für Navigationsaufgaben trainiert wurden, und es wird festgestellt, dass RNNs, die für die Pfadintegration trainiert wurden, kontinuierliche 2D-Attraktoren enthalten, während RNNs, die für den Landmark-Speicher trainiert wurden, diskrete Attraktoren enthalten.", "In diesem Beitrag wird untersucht, wie das Pre-Training von rekurrenten Netzen auf verschiedene Navigationsziele unterschiedliche Vorteile für die Lösung nachgelagerter Aufgaben mit sich bringt, und es wird gezeigt, wie sich das unterschiedliche Pre-Training in unterschiedlichen dynamischen Strukturen in den Netzen nach dem Pre-Training manifestiert."]} {"source": "Formal verification of machine learning models has attracted attention recently, and significant progress has been made on proving simple properties like robustness to small perturbations of the input features.In this context, it has also been observed that folding the verification procedure into training makes it easier to train verifiably robust models.In this paper, we extend the applicability of verified training by extending it to (1) recurrent neural network architectures and (2) complex specifications that go beyond simple adversarial robustness, particularly specifications that capture temporal properties like requiring that a robot periodically visits a charging station or that a language model always produces sentences of bounded length.Experiments show that while models trained using standard training often violate desired specifications, our verified training method produces models that both perform well (in terms of test error or reward) and can be shown to be provably consistent with specifications.", "target": ["Verifizierung neuronaler Netze für zeitliche Eigenschaften und Modelle zur Erzeugung von Sequenzen.", "Dieses Papier erweitert die Intervall-Bound-Propagation auf rekurrente Berechnungen und autoregressive Modelle, führt die Signal Temporal Logic ein und erweitert sie, um zeitliche Einschränkungen zu spezifizieren, und liefert den Beweis, dass STL mit Bound-Propagation sicherstellen kann, dass neuronale Modelle mit der zeitlichen Spezifikation übereinstimmen.", "Eine Möglichkeit, Zeitreihenregressoren nachweislich in Bezug auf eine Reihe von Regeln zu trainieren, die durch die zeitliche Logik von Signalen definiert sind, und Arbeit bei der Ableitung von Regeln für die gebundene Fortpflanzung in der STL-Sprache."]} {"source": "Neural Network (NN) has achieved state-of-the-art performances in many tasks within image, speech, and text domains.Such great success is mainly due to special structure design to fit the particular data patterns, such as CNN capturing spatial locality and RNN modeling sequential dependency.Essentially, these specific NNs achieve good performance by leveraging the prior knowledge over corresponding domain data.Nevertheless, there are many applications with all kinds of tabular data in other domains.Since there are no shared patterns among these diverse tabular data, it is hard to design specific structures to fit them all.Without careful architecture design based on domain knowledge, it is quite challenging for NN to reach satisfactory performance in these tabular data domains.To fill the gap of NN in tabular data learning, we propose a universal neural network solution, called TabNN, to derive effective NN architectures for tabular data in all kinds of tasks automatically.Specifically, the design of TabNN follows two 
principles: \\emph{to explicitly leverage expressive feature combinations} and \\emph{to reduce model complexity}.Since GBDT has empirically proven its strength in modeling tabular data, we use GBDT to power the implementation of TabNN.Comprehensive experimental analysis on a variety of tabular datasets demonstrates that TabNN can achieve much better performance than many baseline solutions.", "target": ["Wir schlagen eine universelle Lösung für neuronale Netze vor, um effektive NN-Architekturen für tabellarische Daten automatisch abzuleiten.", "Ein neues Trainingsverfahren für neuronale Netze, das für tabellarische Daten entwickelt wurde und darauf abzielt, aus GBDTs extrahierte Merkmalscluster zu nutzen.", "Vorschlag für einen hybriden Algorithmus für maschinelles Lernen unter Verwendung von Gradient Boosted Decision Trees und Deep Neural Networks, mit beabsichtigter Forschungsrichtung auf tabellarischen Daten."]} {"source": "Knowledge Bases (KBs) are becoming increasingly large, sparse and probabilistic.These KBs are typically used to perform query inferences and rule mining.But their efficacy is only as high as their completeness.Efficiently utilizing incomplete KBs remains a major challenge as the current KB completion techniques either do not take into account the inherent uncertainty associated with each KB tuple or do not scale to large KBs.Probabilistic rule learning not only considers the probability of every KB tuple but also tackles the problem of KB completion in an explainable way.For any given probabilistic KB, it learns probabilistic first-order rules from its relations to identify interesting patterns.But, the current probabilistic rule learning techniques perform grounding to do probabilistic inference for evaluation of candidate rules.This does not scale well to large KBs as the time complexity of inference using grounding is exponential over the size of the KB.In this paper, we present SafeLearner -- a scalable solution to probabilistic KB completion that performs probabilistic rule learning using lifted probabilistic inference -- as a faster approach than grounding.
We compared SafeLearner to the state-of-the-art probabilistic rule learner ProbFOIL+ and to its deterministic contemporary AMIE+ on standard probabilistic KBs of NELL (Never-Ending Language Learner) and Yago.Our results demonstrate that SafeLearner scales as well as AMIE+ when learning simple rules and is also significantly faster than ProbFOIL+.", "target": ["Probabilistisches Regel-Lernsystem mit gehobener Inferenz.", "Ein Modell für probabilistisches Regellernen zur Automatisierung der Vervollständigung probabilistischer Datenbanken, das AMIE+ und gehobene Inferenz zur Steigerung der Recheneffizienz verwendet."]} {"source": "Recent efforts in Dialogue State Tracking (DST) for task-oriented dialogues have progressed toward open-vocabulary or generation-based approaches where the models can generate slot value candidates from the dialogue history itself.These approaches have shown good performance gains, especially in complicated dialogue domains with dynamic slot values.However, they fall short in two aspects: (1) they do not allow models to explicitly learn signals across domains and slots to detect potential dependencies among \\textit{(domain, slot)} pairs; and (2) existing models follow auto-regressive approaches which incur high time cost when the dialogue evolves over multiple domains and multiple turns.In this paper, we propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST) which can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots.In particular, the non-autoregressive nature of our method not only enables decoding in parallel to significantly reduce the latency of DST for real-time dialogue response generation, but also detects dependencies among slots at the token level in addition to the slot and domain level.Our empirical results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus, and the latency of our model is an order of magnitude lower than the previous state of the art as the dialogue history extends over time.", "target": ["Wir schlagen das erste nicht-autoregressive neuronale Modell für Dialogue State Tracking (DST) vor, das die SOTA-Genauigkeit (49,04%) beim MultiWOZ2.1 Benchmark erreicht und die Inferenzlatenz um eine Größenordnung reduziert.", "Ein neues Modell für die DST-Aufgabe, das die Komplexität der Inferenzzeit mit einem nicht-autoregressiven Decoder reduziert, eine wettbewerbsfähige DST-Genauigkeit erzielt und Verbesserungen gegenüber anderen Grundmodellen aufweist.", "Vorschlag für ein Modell, das in der Lage ist, Dialogzustände auf nicht-rekursive Weise zu verfolgen."]} {"source": "The 3D-zoom operation is the positive translation of the camera in the Z-axis, perpendicular to the image plane.In contrast, the optical zoom changes the focal length and the digital zoom is used to enlarge a certain region of an image to the original image size.In this paper, we are the first to formulate an unsupervised 3D-zoom learning problem where images with an arbitrary zoom factor can be generated from a given single image.An unsupervised framework is convenient, as it is a challenging task to obtain a 3D-zoom dataset of natural scenes due to the need for special equipment to ensure camera movement is restricted to the Z-axis.Besides, the objects in the scenes should not move when being captured, which hinders the construction of a large dataset of outdoor scenes.We present a novel
unsupervised framework to learn how to generate arbitrarily 3D-zoomed versions of a single image, not requiring a 3D-zoom ground truth, called the Deep 3D-Zoom Net.The Deep 3D-Zoom Net incorporates the following features:(i) transfer learning from a pre-trained disparity estimation network via a back re-projection reconstruction loss;(ii) a fully convolutional network architecture that models depth-image-based rendering (DIBR), taking into account high-frequency details without the need for estimating the intermediate disparity; and(iii) incorporating a discriminator network that acts as a no-reference penalty for unnaturally rendered areas.Even though there is no baseline to fairly compare our results, our method outperforms previous novel view synthesis research in terms of realistic appearance on large camera baselines.We performed extensive experiments to verify the effectiveness of our method on the KITTI and Cityscapes datasets.", "target": ["Eine neuartige Netzwerkarchitektur zur Durchführung von Deep 3D Zoom oder Nahaufnahmen.", "Ein Verfahren zur Erstellung eines \"gezoomten Bildes\" für ein gegebenes Eingangsbild und ein neuartiger Rückprojektions-Rekonstruktionsverlust, der es dem Netzwerk ermöglicht, die zugrunde liegende 3D-Struktur zu erlernen und ein natürliches Erscheinungsbild beizubehalten.", "Ein Algorithmus für die Synthese von 3D-Zoom-Verhalten, wenn sich die Kamera vorwärts bewegt, eine Netzwerkstruktur, die Disparitätsschätzung in einem GAN-Framework zur Synthese neuartiger Ansichten beinhaltet, und eine vorgeschlagene neue Computer Vision Aufgabe."]} {"source": "The universal approximation theorem, in one of its most general versions, says that if we consider only continuous activation functions σ, then a standard feedforward neural network with one hidden layer is able to approximate any continuous multivariate function f to any given approximation threshold ε, if and only if σ is non-polynomial.In this paper, we give a direct algebraic proof of the theorem.Furthermore we shall explicitly quantify the number of hidden units required for approximation.Specifically, if X in R^n is compact, then a neural network with n input units, m output units, and a single hidden layer with {n+d choose d} hidden units (independent of m and ε), can uniformly approximate any polynomial function f:X -> R^m whose total degree is at most d for each of its m coordinate functions.In the general case that f is any continuous function, we show there exists some N in O(ε^{-n}) (independent of m), such that N hidden units would suffice to approximate f.We also show that this uniform approximation property (UAP) still holds even under seemingly strong conditions imposed on the weights.We highlight several consequences:(i) For any δ > 0, the UAP still holds if we restrict all non-bias weights w in the last layer to satisfy |w| < δ.(ii) There exists some λ>0 (depending only on f and σ), such that the UAP still holds if we restrict all non-bias weights w in the first layer to satisfy |w|>λ.(iii) If the non-bias weights in the first layer are *fixed* and randomly chosen from a suitable range, then the UAP holds with probability 1.", "target": ["Eine quantitative Verfeinerung des universellen Angleichungstheorems durch einen algebraischen Ansatz.", "Die Autoren leiten die Beweise für die universelle Approximationseigenschaft algebraisch ab und versichern, dass die Ergebnisse allgemein für andere Arten von neuronalen Netzen und ähnlichen Lernern gelten.", "Ein neuer Beweis von Leshnos Version der 
universellen Approximationseigenschaft für neuronale Netze und neue Erkenntnisse über die universelle Approximationseigenschaft."]} {"source": "In this paper, we design a generic framework for learning a robust text classification model that achieves accuracy comparable to standard full models under test-time budget constraints.We take a different approach from existing methods and learn to dynamically delete a large fraction of unimportant words by a low-complexity selector such that the high-complexity classifier only needs to process a small fraction of important words.In addition, we propose a new data aggregation method to train the classifier, allowing it to make accurate predictions even on fragmented sequences of words.Our end-to-end method achieves state-of-the-art performance while its computational complexity scales linearly with the small fraction of important words in the whole corpus.Besides, a single deep neural network classifier trained by our framework can be dynamically tuned to different budget levels at inference time.", "target": ["Modulares Framework für die Dokumentenklassifizierung und Datenaggregationstechnik, um den Rahmen robust gegenüber verschiedenen Verzerrungen und Störungen zu machen und sich nur auf die wichtigen Wörter zu konzentrieren. ", "Die Autoren betrachten das Training einer RNN-basierten Textklassifikation, bei der es eine Ressourcenbeschränkung für die Testzeitvorhersage gibt, und stellen einen Ansatz vor, bei dem ein Maskierungsmechanismus verwendet wird, um die in der Vorhersage verwendeten Wörter/Phrasen/Sätze zu reduzieren, gefolgt von einem Klassifikator, der diese Komponenten verarbeitet."]} {"source": "Differentiable architecture search (DARTS) provided a fast solution in finding effective network architectures, but suffered from large memory and computing overheads in jointly training a super-net and searching for an optimal architecture.In this paper, we present a novel approach, namely Partially-Connected DARTS, by sampling a small part of super-net to reduce the redundancy in exploring the network space, thereby performing a more efficient search without compromising the performance.In particular, we perform operation search in a subset of channels while bypassing the held out part in a shortcut.This strategy may suffer from an undesired inconsistency on selecting the edges of super-net caused by sampling different channels.We solve it by introducing edge normalization, which adds a new set of edge-level hyper-parameters to reduce uncertainty in search.Thanks to the reduced memory cost, PC-DARTS can be trained with a larger batch size and, consequently, enjoy both faster speed and higher training stability.Experimental results demonstrate the effectiveness of the proposed method.Specifically, we achieve an error rate of 2.57% on CIFAR10 within merely 0.1 GPU-days for architecture search, and a state-of-the-art top-1 error rate of 24.2% on ImageNet (under the mobile setting) within 3.8 GPU-days for search.Our code has been made available at https://www.dropbox.com/sh/on9lg3rpx1r6dkf/AABG5mt0sMHjnEJyoRnLEYW4a?dl=0.", "target": ["Zulassen von Teilkanalverbindungen in Supernetzen zur Regulierung und Beschleunigung der Suche nach differenzierbaren Architekturen", "Eine Erweiterung des neuronalen Architektur-Suchverfahrens DARTS, die dessen Manko der immensen Speicherkosten durch Verwendung einer zufälligen Teilmenge von Kanälen und einer Methode zur Normalisierung von Kanten behebt.", "Diese Arbeit schlägt vor, DARTS in Bezug auf die
Trainingseffizienz zu verbessern, indem die großen Speicher- und Rechen-Overheads reduziert werden, und schlägt ein teilweise verbundenes DARTS mit partieller Kanalverbindung und Kantennormalisierung vor."]} {"source": "Dialogue research tends to distinguish between chit-chat and goal-oriented tasks.While the former is arguably more naturalistic and has a wider use of language, the latter has clearer metrics and a more straightforward learning signal.Humans effortlessly combine the two, and engage in chit-chat for example with the goal of exchanging information or eliciting a specific response.Here, we bridge the divide between these two domains in the setting of a rich multi-player text-based fantasy environment where agents and humans engage in both actions and dialogue.Specifically, we train a goal-oriented model with reinforcement learning via self-play against an imitation-learned chit-chat model with two new approaches: the policy either learns to pick a topic or learns to pick an utterance given the top-k utterances.We show that both models outperform a strong inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals.", "target": ["Agenten interagieren (sprechen, handeln) und können in einer reichhaltigen Welt mit vielfältiger Sprache Ziele erreichen, indem sie die Kluft zwischen Plauderei und zielorientiertem Dialog überbrücken.", "Dieses Papier untersucht eine Multi-Agenten-Dialogaufgabe, bei der der lernende Agent darauf abzielt, natürlichsprachliche Handlungen zu generieren, die dem anderen Agenten eine bestimmte Handlung entlocken, und zeigt, dass RL-Agenten einen höheren Grad an Aufgabenerfüllung erreichen können als die Basisprogramme des Imitationslernens.", "Diese Arbeit untersucht die zielorientierte Dialoggestaltung mit Reinforcement Learning in einem Fantasy Text Adventure Game und stellt fest, dass die RL-Ansätze die überwachten Lernmodelle übertreffen."]} {"source": "We consider off-policy policy evaluation when the trajectory data are generated by multiple behavior policies.Recent work has shown the key role played by the state or state-action stationary distribution corrections in the infinite horizon context for off-policy policy evaluation.We propose estimated mixture policy (EMP), a novel class of partially policy-agnostic methods to accurately estimate those quantities.With careful analysis, we show that EMP gives rise to estimates with reduced variance for estimating the state stationary distribution correction while it also offers a useful inductive bias for estimating the state-action stationary distribution correction.In extensive experiments with both continuous and discrete environments, we demonstrate that our algorithm offers significantly improved accuracy compared to the state-of-the-art methods.", "target": ["Eine neue, teilweise policy-agnostische Methode zur Evaluierung von Off-Policy-Policies mit unendlichem Horizont und mehreren bekannten oder unbekannten Verhaltensrichtlinien.", "Eine geschätzte gemischte Strategie, die Ideen von Schätzern für unendliche Horizonte und Regressions-Bedeutsamkeitsstichproben für die Wichtigkeitsgewichtung aufgreift und sie auf viele Strategien und unbekannte Regeln ausweitet.", "Ein Algorithmus zur Bewertung von Strategien mit unendlichem Horizont und mehreren Verhaltensstrategien durch Schätzung einer gemischten Strategie unter Regression sowie der theoretische Beweis, dass ein geschätztes Strategieverhältnis die Varianz reduzieren kann."]} {"source": "We introduce a more efficient neural
architecture for amortized inference, which combines continuous and conditional normalizing flows using a principled choice of structure.Our gradient flow derives its sparsity pattern from the minimally faithful inverse of its underlying graphical model.We find that this factorization reduces the necessary numbers both of parameters in the neural network and of adaptive integration steps in the ODE solver.Consequently, the throughput at training time and inference time is increased, without decreasing performance in comparison to unconstrained flows.By expressing the structural inversion and the flow construction as compilation passes of a probabilistic programming language, we demonstrate their applicability to the stochastic inversion of realistic models such as convolutional neural networks (CNN).", "target": ["Wir stellen eine effizientere neuronale Architektur für amortisierte Inferenz vor, die kontinuierliche und bedingte Normalisierungsflüsse mit einer prinzipiellen Wahl der Seltenheitsstruktur kombiniert."]} {"source": "We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way.By defining the combinatorial search space of NAS to be the set of different edge-partitionings (colorings) into same-weight classes, we represent compact architectures via efficient learned edge-partitionings.For several RL tasks, we manage to learn colorings translating to effective policies parameterized by as few as 17 weight parameters, providing >90 % compression over vanilla policies and 6x compression over state-of-the-art compact policies based on Toeplitz matrices, while still maintaining good reward.We believe that our work is one of the first attempts to propose a rigorous approach to training structured neural network architectures for RL problems that are of interest especially in mobile robotics with limited storage and computational resources.", "target": ["Wir zeigen, dass ENAS mit ES-Optimierung in RL hochgradig skalierbar ist, und verwenden es, um die Richtlinien neuronaler Netze durch Gewichtsteilung zu verdichten.", "Die Autoren konstruieren Reinforcement-Learning-Strategien mit sehr wenigen Parametern, indem sie ein Feed-Forward-Neuronalnetz komprimieren, es zwingen, Gewichte zu teilen, und eine Reinforcement-Learning-Methode verwenden, um die Zuordnung der geteilten Gewichte zu lernen.", "In diesem Beitrag werden Ideen aus ENAS- und ES-Methoden zur Optimierung kombiniert und die chromatische Netzarchitektur eingeführt, die die Gewichte des RL-Netzes in gebundene Untergruppen unterteilt."]} {"source": "Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets.Typically anomaly detection is treated as an unsupervised learning problem.In practice however, one may have---in addition to a large set of unlabeled samples---access to a small pool of labeled samples, e.g. 
a subset verified by some domain expert as being normal or anomalous.Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples.Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific.In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection.Using an information-theoretic perspective on anomaly detection, we derive a loss motivated by the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution.We demonstrate in extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, that our method is on par with or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only a little labeled data.", "target": ["Wir stellen Deep SAD vor, eine Deep-Methode für die allgemeine semi-supervised Anomalieerkennung, die insbesondere die Vorteile von gelabelten Anomalien nutzt.", "Eine neue Methode, um anomale Daten zu finden, wenn einige gekennzeichnete Anomalien gegeben sind, die den von der Informationstheorie abgeleiteten Verlust anwendet, der darauf basiert, dass normale Daten normalerweise eine geringere Entropie haben als abnormale Daten.", "Vorschlag für ein Rahmenwerk zur Erkennung von Anomalien unter Bedingungen, in denen unbeschriftete Daten, beschriftete positive Daten und beschriftete negative Daten verfügbar sind, und Vorschlag zur Annäherung an semi-supervised AD aus einer informationstheoretischen Perspektive."]} {"source": "To analyze deep ReLU networks, we adopt a student-teacher setting in which an over-parameterized student network learns from the output of a fixed teacher network of the same depth, with Stochastic Gradient Descent (SGD).Our contributions are two-fold.First, we prove that when the gradient is zero (or bounded above by a small constant) at every data point in training, a situation called \\emph{interpolation setting}, there exists many-to-one \\emph{alignment} between student and teacher nodes in the lowest layer under mild conditions.This suggests that generalization to unseen data is achievable, even though the same condition often leads to zero training error.Second, analysis of noisy recovery and training dynamics in a 2-layer network shows that strong teacher nodes (with large fan-out weights) are learned first and subtle teacher nodes are left unlearned until a late stage of training.As a result, it could take a long time to converge to these small-gradient critical points.Our analysis shows that over-parameterization plays two roles: (1) it is a necessary condition for alignment to happen at the critical points, and (2) in training dynamics, it helps student nodes cover more teacher nodes with fewer iterations.Both improve generalization.Experiments justify our findings.", "target": ["In diesem Artikel werden die Trainingsdynamik und die kritischen Punkte beim Training eines tiefen ReLU-Netzwerks mittels SGD in einer Lehrer-Schüler-Umgebung analysiert. 
", "Untersuchung der Überparametrisierung in mehrschichtigen Schüler-Lehrer ReLU-Netzwerken, ein theoretischer Teil über kritische SGD-Punkte für die Lehrer-Schüler-Umgebung und ein heuristischer und empirischer Teil über die Dynamik des SDG-Algorithmus in Abhängigkeit von Lehrernetzwerken."]} {"source": "We study the convergence of gradient descent (GD) and stochastic gradient descent (SGD) for training $L$-hidden-layer linear residual networks (ResNets).We prove that for training deep residual networks with certain linear transformations at input and output layers, which are fixed throughout training, both GD and SGD with zero initialization on all hidden weights can converge to the global minimum of the training loss.Moreover, when specializing to appropriate Gaussian random linear transformations, GD and SGD provably optimize wide enough deep linear ResNets.Compared with the global convergence result of GD for training standard deep linear networks \\citep{du2019width}, our condition on the neural network width is sharper by a factor of $O(\\kappa L)$, where $\\kappa$ denotes the condition number of the covariance matrix of the training data.In addition, for the first time we establish the global convergence of SGD for training deep linear ResNets and prove a linear convergence rate when the global minimum is $0$.", "target": ["Unter bestimmten Bedingungen für die linearen Eingangs- und Ausgangstransformationen können sowohl GD als auch SGD globale Konvergenz für das Training tiefer linearer ResNets erreichen.", "Die Autoren untersuchen die Konvergenz des Gradientenabstiegs bei der Ausbildung von tiefen linearen Residual Networks und stellen eine globale Konvergenz von GD/SGD und lineare Konvergenzraten von SG/SGD fest.", "Untersuchung der Konvergenzeigenschaften von GD und SGD auf tiefen linearen ResNets und Nachweis, dass GD und SGD unter bestimmten Bedingungen für die Eingangs- und Ausgangstransformationen und mit Null-Initialisierung zu globalen Minima konvergieren."]} {"source": "In this paper, we empirically investigate the training journey of deep neural networks relative to fully trained shallow machine learning models.We observe that the deep neural networks (DNNs) train by learning to correctly classify shallow-learnable examples in the early epochs before learning the harder examples.We build on this observation this to suggest a way for partitioning the dataset into hard and easy subsets that can be used for improving the overall training process.Incidentally, we also found evidence of a subset of intriguing examples across all the datasets we considered, that were shallow learnable but not deep-learnable.In order to aid reproducibility, we also duly release our code for this work at https://github.com/karttikeya/Shallow_to_Deep/", "target": ["Wir analysieren den Trainingsprozess für Deep Networks und zeigen, dass sie mit dem schnellen Lernen von flachen, klassifizierbaren Beispielen beginnen und langsam auf härtere Datenpunkte verallgemeinern."]} {"source": "While much recent work has targeted learning deep discrete latent variable models with variational inference, this setting remains challenging, and it is often necessary to make use of potentially high-variance gradient estimators in optimizing the ELBO.As an alternative, we propose to optimize a non-ELBO objective derived from the Bethe free energy approximation to an MRF's partition function.This objective gives rise to a saddle-point learning problem, which we train inference networks to approximately optimize.The 
derived objective requires no sampling, and can be efficiently computed for many MRFs of interest.We evaluate the proposed approach in learning high-order neural HMMs on text, and find that it often outperforms other approximate inference schemes in terms of true held-out log likelihood.At the same time, we find that all the approximate inference-based approaches to learning high-order neural HMMs we consider underperform learning with exact inference by a significant margin.", "target": ["Lernen von tiefen latenten Variablen-MRFs mit einem Sattelpunkt-Ziel, das von der Bethe-Partitionsfunktion-Approximation abgeleitet ist.", "Eine Methode zum Erlernen tiefer latent-variabler MRF mit einem Optimierungsziel, das die freie Bethe-Energie nutzt, die auch die zugrundeliegenden Beschränkungen der Optimierungen der freien Bethe-Energie löst.", "Eine Zielsetzung für das Lernen von MRFs mit latenten Variablen auf der Grundlage der freien Bethe-Energie und der amortisierten Inferenz, die sich von der Optimierung der Standard-ELBO unterscheidet."]} {"source": "In an explanation generation problem, an agent needs to identify and explain the reasons for its decisions to another agent.Existing work in this area is mostly confined to planning-based systems that use automated planning approaches to solve the problem.In this paper, we approach this problem from a new perspective, where we propose a general logic-based framework for explanation generation.In particular, given a knowledge base $KB_1$ that entails a formula $\\phi$ and a second knowledge base $KB_2$ that does not entail $\\phi$, we seek to identify an explanation $\\epsilon$ that is a subset of $KB_1$ such that the union of $KB_2$ and $\\epsilon$ entails $\\phi$.We define two types of explanations, model- and proof-theoretic explanations, and use cost functions to reflect preferences between explanations.Further, we present our algorithm implemented for propositional logic that compute such explanations and empirically evaluate it in random knowledge bases and a planning domain.", "target": ["Ein allgemeines Framework für die Erstellung von Erklärungen mit Hilfe der Logik.", "Dieses Papier untersucht die Generierung von Erklärungen aus der Sicht der KR und führt Experimente durch, in denen die Größe der Erklärungen und die Laufzeit mit Zufallsformeln und Formeln aus einer Blocksworld Instanz gemessen werden.", "Dieses Papier bietet eine Perspektive auf Erklärungen zwischen zwei Wissensbasen und läuft parallel zu Arbeiten über Modellabgleich in der Planungsliteratur."]} {"source": "Recent theoretical work has demonstrated that deep neural networks have superior performance over shallow networks, but their training is more difficult, e.g., they suffer from the vanishing gradient problem.This problem can be typically resolved by the rectified linear unit (ReLU) activation.However, here we show that even for such activation, deep and narrow neural networks (NNs) will converge to erroneous mean or median states of the target function depending on the loss with high probability.Deep and narrow NNs are encountered in solving partial differential equations with high-order derivatives.We demonstrate this collapse of such NNs both numerically and theoretically, and provide estimates of the probability of collapse.We also construct a diagram of a safe region for designing NNs that avoid the collapse to erroneous states.Finally, we examine different ways of initialization and normalization that may avoid the collapse problem.Asymmetric initializations 
may reduce the probability of collapse but do not totally eliminate it.", "target": ["Tiefe und enge neuronale Netze konvergieren je nach Verlust mit hoher Wahrscheinlichkeit zu fehlerhaften Mittel- oder Medianwerten der Zielfunktion.", "In diesem Beitrag werden die Fehlermöglichkeiten von tiefen und engen Netzen untersucht, wobei der Schwerpunkt auf möglichst kleinen Modellen liegt, bei denen das unerwünschte Verhalten auftritt.", "In diesem Beitrag wird gezeigt, dass das Training tiefer neuronaler ReLU-Netze mit hoher Wahrscheinlichkeit zu einem konstanten Klassifikator konvergiert, wenn die Breite der versteckten Schichten zu klein ist."]} {"source": "We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary.Our study shows that maximizing margins can be achieved by minimizing the adversarial loss on the decision boundary at the \"shortest successful perturbation\", demonstrating a close connection between adversarial losses and the margins.We propose Max-Margin Adversarial (MMA) training to directly maximize the margins to achieve adversarial robustness. Instead of adversarial training with a fixed $\\epsilon$, MMA offers an improvement by enabling adaptive selection of the \"correct\" $\\epsilon$ as the margin individually for each datapoint.In addition, we rigorously analyze adversarial training with the perspective of margin maximization, and provide an alternative interpretation for adversarial training, maximizing either a lower or an upper bound of the margins.Our experiments empirically confirm our theory and demonstrate MMA training's efficacy on the MNIST and CIFAR10 datasets w.r.t. $\\ell_\\infty$ and $\\ell_2$ robustness.", "target": ["Wir schlagen MMA Training zur direkten Maximierung des Eingaberaumrands vor, um die adversarial Robustheit des Systems vor allem dadurch zu verbessern, dass die Vorgabe einer festen Verzerrungsgrenze entfällt.", "Ein adaptiver, auf Rand basierender Ansatz zum Training von robusten DNNs durch Maximierung der kürzesten Marge der Eingaben zur Entscheidungsgrenze, der ein adversariales Training mit großen Störungen möglich macht.", "Es wird eine Methode für robustes Lernen gegen gegnerische Angriffe vorgestellt, bei dem der Rand des Eingaberaums direkt maximiert wird und eine Softmax-Variante der Max-Margin eingeführt wird."]} {"source": "Many anomaly detection methods exist that perform well on low-dimensional problems however there is a notable lack of effective methods for high-dimensional spaces, such as images.Inspired by recent successes in deep learning we propose a novel approach to anomaly detection using generative adversarial networks.Given a sample under consideration, our method is based on searching for a good representation of that sample in the latent space of the generator; if such a representation is not found, the sample is deemed anomalous. 
We achieve state-of-the-art performance on standard image benchmark datasets and visual inspection of the most anomalous samples reveals that our method does indeed return anomalies.", "target": ["Wir schlagen eine Methode zur Erkennung von Anomalien mit GANs vor, indem wir den latenten Raum des Generators nach guten Musterdarstellungen durchsuchen.", "Die Autoren schlagen vor, GAN für die Erkennung von Anomalien zu verwenden, eine auf Gradientenabstieg basierende Methode zur iterativen Aktualisierung latenter Repräsentationen und eine neuartige Parameteraktualisierung für die Generatoren.", "Ein GAN-basierter Ansatz zur Erkennung von Anomalien bei Bilddaten, bei dem der latente Raum des Generators erforscht wird, um eine Darstellung für ein Testbild zu finden."]} {"source": "Variational inference (VI) and Markov chain Monte Carlo (MCMC) are approximate posterior inference algorithms that are often said to have complementary strengths, with VI being fast but biased and MCMC being slower but asymptotically unbiased.In this paper, we analyze gradient-based MCMC and VI procedures and find theoretical and empirical evidence that these procedures are not as different as one might think.In particular, a close examination of the Fokker-Planck equation that governs the Langevin dynamics (LD) MCMC procedure reveals that LD implicitly follows a gradient flow that corresponds to a variational inference procedure based on optimizing a nonparametric normalizing flow.This result suggests that the transient bias of LD (due to too few warmup steps) may track that of VI (due to too few optimization steps), up to differences due to VI’s parameterization and asymptotic bias.Empirically, we find that the transient biases of these algorithms (and momentum-accelerated versions) do evolve similarly.This suggests that practitioners with a limited time budget may get more accurate results by running an MCMC procedure (even if it’s far from burned in) than a VI procedure, as long as the variance of the MCMC estimator can be dealt with (e.g., by running many parallel chains).", "target": ["Das transiente Verhalten von gradientenbasierten MCMC- und Variationsinferenz-Algorithmen ist ähnlicher als man denkt, was die Behauptung in Frage stellt, dass Variationsinferenz schneller ist als MCMC."]} {"source": "Graph Convolutional Networks (GCNs) have recently been shown to be quite successful in modeling graph-structured data.However, the primary focus has been on handling simple undirected graphs.Multi-relational graphs are a more general and prevalent form of graphs where each edge has a label and direction associated with it.Most of the existing approaches to handle such graphs suffer from over-parameterization and are restricted to learning representations of nodes only.In this paper, we propose CompGCN, a novel Graph Convolutional framework which jointly embeds both nodes and relations in a relational graph.CompGCN leverages a variety of entity-relation composition operations from Knowledge Graph Embedding techniques and scales with the number of relations.It also generalizes several of the existing multi-relational GCN methods.We evaluate our proposed method on multiple tasks such as node classification, link prediction, and graph classification, and achieve demonstrably superior results.We make the source code of CompGCN available to foster reproducible research.", "target": ["Ein kompositionsbasiertes Graph Convolutional Framework für multirelationale Graphen.", "Die Autoren entwickeln GCN für multirelationale 
Graphen und schlagen CompGCN vor, das Erkenntnisse aus der Einbettung von Wissensgraphen nutzt und Knoten- und Beziehungsrepräsentationen lernt, um das Problem der Überparametrisierung zu mildern.", "Dieses Papier stellt ein GCN-Framework für multirelationale Graphen vor und verallgemeinert mehrere bestehende Ansätze zur Einbettung von Wissensgraphen in ein Framework."]} {"source": "State-of-the-art neural machine translation methods employ massive amounts of parameters.Drastically reducing computational costs of such methods without affecting performance has been up to this point unsolved.In this work, we propose a quantization strategy tailored to the Transformer architecture.We evaluate our method on the WMT14 EN-FR and WMT14 EN-DE translation tasks and achieve state-of-the-art quantization results for the Transformer, obtaining no loss in BLEU scores compared to the non-quantized baseline.We further compress the Transformer by showing that, once the model is trained, a good portion of the nodes in the encoder can be removed without causing any loss in BLEU.", "target": ["Wir quantisieren den Transformer vollständig auf 8 Bit und verbessern die Übersetzungsqualität im Vergleich zum Modell mit voller Präzision.", "Eine 8-Bit-Quantisierungsmethode zur Quantisierung des maschinellen Übersetzungsmodell Transformers, die vorschlägt, eine einheitliche Min-Max-Quantisierung während der Inferenz und Bucketing-Gewichte vor der Quantisierung zu verwenden, um Quantisierungsfehler zu reduzieren.", "Eine Methode zur Verringerung des benötigten Speicherplatzes durch eine Quantisierungstechnik, die sich auf die Verringerung des Speicherplatzes für die Transformer Architektur konzentriert."]} {"source": "Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems.However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes.We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space.The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters.Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks.Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.", "target": ["Latent Embedding Optimization (LEO) ist ein neuartiges gradientenbasiertes Meta-Lernverfahren, das bei den anspruchsvollen 5-Wege-1-Shot und 5-Shot-miniImageNet und tieredImageNet Klassifizierungsaufgaben eine hervorragende Leistung zeigt.", "Ein neues Meta-Lernsystem, das einen datenabhängigen latenten Raum erlernt, eine schnelle Anpassung im latenten Raum durchführt, effektiv für das Few-Shot Learning ist, eine aufgabenabhängige Initialisierung für die Anpassung hat und gut für multimodale Aufgabenverteilung funktioniert.", "Diese Arbeit schlägt eine latente Einbettungsoptimierungsmethode für Meta-Learning vor und behauptet, dass der Beitrag darin besteht, optimierungsbasierte Meta-Learning-Techniken vom hochdimensionalen Raum der Modellparameter zu entkoppeln."]} {"source": "We introduce an approach for augmenting model-free deep 
reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability.Our architecture encodes an image as a set of vectors, and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene.In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and surpassed human grandmaster-level on four.In a novel navigation and planning task, our agent's performance and learning efficiency far exceeded non-relational baselines, it was able to generalize to more complex scenes than it had experienced during training.Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent's intentions.The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases.Our experiments show this approach can offer advantages in efficiency, generalization, and interpretability, and can scale up to meet some of the most challenging test environments in modern artificial intelligence.", "target": ["Relationale induktive Verzerrungen verbessern die Verallgemeinerungsfähigkeit außerhalb der Verteilung in modellfreien Agenten mit Reinforcement Learning.", "Eine gemeinsame relationale Netzwerkarchitektur zur Parametrisierung des Akteurs- und Kritiknetzwerks, die sich auf verteilte vorteilhafte Akteur-Kritik Algorithmen konzentriert und modellfreie tiefe Verstärkungstechniken mit relationalem Wissen über die Umgebung erweitert, so dass Agenten interpretierbare Zustandsdarstellungen lernen können.", "Eine quantitative und qualitative Analyse und Bewertung des Selbstaufmerksamkeits Mechanismus in Kombination mit dem Beziehungsnetz im Kontext des modellfreien RL."]} {"source": "Image translation between two domains is a class of problems aiming to learn mapping from an input image in the source domain to an output image in the target domain.It has been applied to numerous applications, such as data augmentation, domain adaptation, and unsupervised training.When paired training data is not accessible, image translation becomes an ill-posed problem.We constrain the problem with the assumption that the translated image needs to be perceptually similar to the original image and also appears to be drawn from the new domain, and propose a simple yet effective image translation model consisting of a single generator trained with a self-regularization term and an adversarial term.We further notice that existing image translation techniques are agnostic to the subjects of interest and often introduce unwanted changes or artifacts to the input.Thus we propose to add an attention module to predict an attention map to guide the image translation process.The module learns to attend to key parts of the image while keeping everything else unaltered, essentially avoiding undesired artifacts or changes.The predicted attention map also opens door to applications such as unsupervised segmentation and saliency detection.Extensive experiments and evaluations show that our model while being simpler, achieves significantly better performance than existing image translation methods.", "target": ["Wir schlagen ein einfaches generatives Modell zur unbeaufsichtigten Bildübersetzung und Erkennung von Auffälligkeiten vor."]} {"source": 
"Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture.The central question of this work is, how the temporal nature of reality should be reflected in the execution of a deep neural network and its components.Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units.For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner.In contrast, all elements of a biological neural network are processed in parallel.In this paper, we define a class of networks between these two extreme cases.These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing.Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections.We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity.We layout basic properties and discuss major challenges for layerwise-parallel networks.Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.", "target": ["Wir definieren ein Konzept schichtweiser modellparalleler tiefer neuronaler Netze, bei denen die Schichten parallel arbeiten, und stellen eine Toolbox zur Verfügung, um diese Netze zu entwerfen, zu trainieren, zu bewerten und online mit ihnen zu interagieren.", "Eine GPU-beschleunigte Toolbox für die parallele Aktualisierung von Neuronen, geschrieben in Theano, die verschiedene Aktualisierungsreihenfolgen in rekurrenten Netzen und Netzen mit Verbindungen, die Schichten überspringen, unterstützt. 
", "Eine neue Toolbox für das Lernen und Bewerten von tiefen neuronalen Netzen und ein Vorschlag für einen Paradigmenwechsel von schichtweise-sequentiellen Netzen zu schichtweise-parallelen Netzen."]} {"source": "Deep neural networks are known to be vulnerable to adversarial perturbations.In this paper, we bridge adversarial robustness of neural nets with Lyapunov stability of dynamical systems.From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin's maximum principle, to train neural nets.This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust.The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently.Experiments show that our method effectively improves deep model's adversarial robustness.", "target": ["Eine adversarial Verteidigungsmethode, die die Robustheit von tiefen neuronalen Netzen mit der Lyapunov-Stabilität verbindet", "Die Autoren formulieren das Training von NNs als Suche nach einem optimalen Regler für ein diskretes dynamisches System, was es ihnen ermöglicht, die Methode der sukzessiven Annäherung zu verwenden, um ein NN so zu trainieren, dass es robuster gegen gegnerische Angriffe ist.", "In diesem Beitrag wird die theoretische Sichtweise eines neuronalen Netzes als diskretisierte ODE verwendet, um eine Theorie der robusten Steuerung zu entwickeln, die darauf abzielt, das Netz zu trainieren und gleichzeitig die Robustheit zu verstärken."]} {"source": "In this paper, we propose a method named Dimensional reweighting Graph Convolutional Networks (DrGCNs), to tackle the problem of variance between dimensional information in the node representations of GCNs.We prove that DrGCNs can reduce the variance of the node representations by connecting our problem to the theory of the mean field.However, practically, we find that the degrees DrGCNs help vary severely on different datasets.We revisit the problem and develop a new measure K to quantify the effect.This measure guides when we should use dimensional reweighting in GCNs and how much it can help.Moreover, it offers insights to explain the improvement obtained by the proposed DrGCNs.The dimensional reweighting block is light-weighted and highly flexible to be built on most of the GCN variants.Carefully designed experiments, including several fixes on duplicates, information leaks, and wrong labels of the well-known node classification benchmark datasets, demonstrate the superior performances of DrGCNs over the existing state-of-the-art approaches.Significant improvements can also be observed on a large scale industrial dataset.", "target": ["Wir schlagen ein einfaches, aber effektives Neugewichtungsschema für GCNs vor, das theoretisch durch die Theorie des Mittelwert Feldes unterstützt wird.", "Eine Methode, bekannt als DrGCN, zur Neugewichtung der verschiedenen Dimensionen der Knotendarstellungen in Graph Convolutional Networks durch Reduzierung der Varianz zwischen den Dimensionen."]} {"source": "Knowledge-grounded dialogue is a task of generating an informative response based on both discourse context and external knowledge.As we focus on better modeling the knowledge selection in the multi-turn knowledge-grounded dialogue, we propose a sequential latent variable model as the first approach to this matter.The model 
named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge; as a result, it can not only reduce the ambiguity caused from the diversity in knowledge selection of conversation but also better leverage the response information for proper choice of knowledge.Our experimental results show that the proposed model improves the knowledge selection accuracy and subsequently the performance of utterance generation.We achieve the new state-of-the-art performance on Wizard of Wikipedia (Dinan et al., 2019) as one of the most large-scale and challenging benchmarks.We further validate the effectiveness of our model over existing conversation methods in another knowledge-based dialogue Holl-E dataset (Moghe et al., 2018).", "target": ["Unser Ansatz ist der erste Versuch, ein sequentielles latentes Variablenmodell für die Wissensauswahl in einem wissensbasierten Dialog mit mehreren Runden zu nutzen. Er erreicht eine neue Spitzenleistung beim Wizard of Wikipedia Benchmark.", "Ein sequentielles latentes Variablenmodell für die Wissensselektion bei der Dialoggenerierung, das das posteriore Aufmerksamkeitsmodell auf das Problem der latenten Wissensselektion ausweitet und eine höhere Leistung als bisherige State-of-the-Art-Modelle erzielt.", "Eine neuartige Architektur für die Auswahl von wissensbasierten Multi-Turn-Dialogen, die in relevanten Benchmark-Datensätzen den Stand der Technik erreicht und bei menschlichen Bewertungen besser abschneidet."]} {"source": "Meta-learning, or learning-to-learn, has proven to be a successful strategy in attacking problems in supervised learning and reinforcement learning that involve small amounts of data.State-of-the-art solutions involve learning an initialization and/or learning algorithm using a set of training episodes so that the meta learner can generalize to an evaluation episode quickly.These methods perform well but often lack good quantification of uncertainty, which can be vital to real-world applications when data is lacking.We propose a meta-learning method which efficiently amortizes hierarchical variational inference across tasks, learning a prior distribution over neural network weights so that a few steps of Bayes by Backprop will produce a good task-specific approximate posterior.We show that our method produces good uncertainty estimates on contextual bandit and few-shot learning benchmarks.", "target": ["Wir schlagen eine Meta-Learning-Methode vor, die hierarchische Variationsinferenz über Trainingsepisoden hinweg effizient amortisiert.", "Eine Anpassung an MAML-Modelle, die die posteriore Unsicherheit in aufgabenspezifischen latenten Variablen berücksichtigt, indem sie Variationsinferenz für aufgabenspezifische Parameter in einer hierarchischen Bayes'schen Sichtweise von MAML einsetzt.", "Die Autoren ziehen Meta-Lernen in Betracht, um eine Priorität über die Gewichte des neuronalen Netzes zu erlernen, was mittels amortisierter Variationsinferenz geschieht."]} {"source": "Often we wish to transfer representational knowledge from one neural network to another.Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator.Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network.We demonstrate that this objective ignores important structural knowledge of the teacher 
network.This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data.We formulate this objective as contrastive learning.Experiments demonstrate that our resulting new objective outperforms knowledge distillation on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer.When combined with knowledge distillation, our method sets a state of the art in many transfer tasks, sometimes even outperforming the teacher network.", "target": ["Repräsentation/Wissensdestillation durch Maximierung der gegenseitigen Information zwischen Lehrer und Schüler.", "Diese Arbeit kombiniert ein kontrastives Ziel, das die gegenseitige Information zwischen den Repräsentationen misst, die von Lehrer- und Schülernetzwerken für die Modelldestillation erlernt wurden, und schlägt ein Modell vor, das Verbesserungen gegenüber bestehenden Alternativen bei Destillationsaufgaben aufweist."]} {"source": "Developing effective biologically plausible learning rules for deep neural networks is important for advancing connections between deep learning and neuroscience.To date, local synaptic learning rules like those employed by the brain have failed to match the performance of backpropagation in deep networks.In this work, we employ meta-learning to discover networks that learn using feedback connections and local, biologically motivated learning rules.Importantly, the feedback connections are not tied to the feedforward weights, avoiding any biologically implausible weight transport.It can be shown mathematically that this approach has sufficient expressivity to approximate any online learning algorithm.Our experiments show that the meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.Moreover, we demonstrate empirically that this model outperforms a state-of-the-art gradient-based meta-learning algorithm for continual learning on regression and classification benchmarks.This approach represents a step toward biologically plausible learning mechanisms that can not only match gradient descent-based learning, but also overcome its limitations.", "target": ["Netzwerke, die mit rückgekoppelten Verbindungen und lokalen Plastizitätsregeln lernen, können durch Metalernen optimiert werden."]} {"source": "In the visual system, neurons respond to a patch of the input known as their classical receptive field (RF), and can be modulated by stimuli in the surround.These interactions are often mediated by lateral connections, giving rise to extra-classical RFs.We use supervised learning via backpropagation to learn feedforward connections, combined with an unsupervised learning rule to learn lateral connections between units within a convolutional neural network.These connections allow each unit to integrate information from its surround, generating extra-classical receptive fields for the units in our new proposed model (CNNEx).We demonstrate that these connections make the network more robust and achieve better performance on noisy versions of the MNIST and CIFAR-10 datasets.Although the image statistics of MNIST and CIFAR-10 differ greatly, the same unsupervised learning rule generalized to both datasets.Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding the computations implemented by feedforward connections when the input is unreliable.", 
"target": ["CNNs mit biologisch inspirierten lateralen Verbindungen, die auf unüberwachte Weise gelernt werden, sind robuster gegenüber störhaften Eingaben. "]} {"source": "Deep learning (DL) has in recent years been widely used in naturallanguage processing (NLP) applications due to its superiorperformance.However, while natural languages are rich ingrammatical structure, DL has not been able to explicitlyrepresent and enforce such structures.This paper proposes a newarchitecture to bridge this gap by exploiting tensor productrepresentations (TPR), a structured neural-symbolic frameworkdeveloped in cognitive science over the past 20 years, with theaim of integrating DL with explicit language structures and rules.We call it the Tensor Product Generation Network(TPGN), and apply it to image captioning.The keyideas of TPGN are:1) unsupervised learning ofrole-unbinding vectors of words via a TPR-based deep neuralnetwork, and2) integration of TPR with typical DL architecturesincluding Long Short-Term Memory (LSTM) models.The novelty of ourapproach lies in its ability to generate a sentence and extractpartial grammatical structure of the sentence by usingrole-unbinding vectors, which are obtained in an unsupervisedmanner.Experimental results demonstrate the effectiveness of theproposed approach.", "target": ["In dieser Arbeit soll ein Ansatz zur Darstellung von Tensorprodukten für Deep-Learning-basierte Anwendungen zur Verarbeitung natürlicher Sprache entwickelt werden."]} {"source": "It is well-known that classifiers are vulnerable to adversarial perturbations.To defend against adversarial perturbations, various certified robustness results have been derived.However, existing certified robustnesses are limited to top-1 predictions.In many real-world applications, top-$k$ predictions are more relevant.In this work, we aim to derive certified robustness for top-$k$ predictions.In particular, our certified robustness is based on randomized smoothing, which turns any classifier to a new classifier via adding noise to an input example.We adopt randomized smoothing because it is scalable to large-scale neural networks and applicable to any classifier.We derive a tight robustness in $\\ell_2$ norm for top-$k$ predictions when using randomized smoothing with Gaussian noise.We find that generalizing the certified robustness from top-1 to top-$k$ predictions faces significant technical challenges.We also empirically evaluate our method on CIFAR10 and ImageNet.For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8\\% when the $\\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255).Our code is publicly available at: \\url{https://github.com/jjy1994/Certify_Topk}.", "target": ["Wir untersuchen die zertifizierte Robustheit für Top-k Vorhersagen durch randomisierte Glättung unter Gaußschem Störrungen und leiten eine enge Robustheitsgrenze in der L_2 Norm ab.", "Dieser Artikel erweitert die Arbeit zur Ableitung eines zertifizierten Radius durch randomisierte Glättung und zeigt den Radius, bei dem ein geglätteter Klassifikator unter Gaußschen Störungen für die besten k Vorhersagen zertifiziert ist.", "Dieser Beitrag baut auf der Technik der Zufallsglättung für die Top-1 Vorhersage auf und zielt darauf ab, eine Zertifizierung für Top-k Vorhersagen zu liefern."]} {"source": "Recent work has shown increased interest in using the Variational Autoencoder (VAE) framework to discover interpretable representations of data in an unsupervised way.These 
methods have focussed largely on modifying the variational cost function to achieve this goal.However, we show that methods like beta-VAE simplify the tendency of variational inference to underfit causing pathological over-pruning and over-orthogonalization of learned components.In this paper we take a complementary approach: to modify the probabilistic model to encourage structured latent variable representations to be discovered.Specifically, the standard VAE probabilistic model is unidentifiable: the likelihood of the parameters is invariant under rotations of the latent space.This means there is no pressure to identify each true factor of variation with a latent variable.We therefore employ a rich prior distribution, akin to the ICA model, that breaks the rotational symmetry.Extensive quantitative and qualitative experiments demonstrate that the proposed prior mitigates the trade-off introduced by modified cost functions like beta-VAE and TCVAE between reconstruction loss and disentanglement.The proposed prior allows to improve these approaches with respect to both disentanglement and reconstruction quality significantly over the state of the art.", "target": ["Wir stellen strukturierte Priors für das unüberwachte Lernen von entwirrten Repräsentationen in VAEs vor, die den Kompromiss zwischen Entwirrung und Rekonstruktionsverlust deutlich abmildern.", "Ein allgemeiner Rahmen für die Verwendung der Familie der L^p-verschachtelten Verteilungen als Prior für den Code-Vektor der VAE, der eine höhere MIG demonstriert.", "Die Autoren weisen auf Probleme in aktuellen VAE-Ansätzen hin und bieten eine neue Perspektive auf den Kompromiss zwischen Rekonstruktion und Orthogonalisierung für VAE, beta-VAE und beta-TCVAE."]} {"source": "Due to the success of residual networks (resnets) and related architectures, shortcut connections have quickly become standard tools for building convolutional neural networks.The explanations in the literature for the apparent effectiveness of shortcuts are varied and often contradictory.We hypothesize that shortcuts work primarily because they act as linear counterparts to nonlinear layers.We test this hypothesis by using several variations on the standard residual block, with different types of linear connections, to build small (100k--1.2M parameter) image classification networks.Our experiments show that other kinds of linear connections can be even more effective than the identity shortcuts.Our results also suggest that the best type of linear connection for a given application may depend on both network width and depth.", "target": ["Wir verallgemeinern Residualblöcke zu Tandemblöcken, die beliebige lineare Zuordnungen anstelle von Verknüpfungen verwenden, und verbessern die Leistung gegenüber ResNets.", "Diese Arbeit führt eine Analyse der Verknüpfungen in ResNet-ähnlichen Architekturen durch und schlägt vor, die Identitätsverknüpfungen durch eine alternative Convolutional Verknüpfung zu ersetzen, der als Tandemblock bezeichnet wird.", "Dieser Artikel untersucht die Auswirkungen des Ersetzens von Identitäts-Sprungverbindungen durch trainierbare Convolutional Sprungverbindungen in ResNet und stellt fest, dass sich die Leistung verbessert."]} {"source": "Adam-typed optimizers, as a class of adaptive moment estimation methods with the exponential moving average scheme, have been successfully used in many applications of deep learning.Such methods are appealing for capability on large-scale sparse datasets.On top of that, they are computationally efficient and 
insensitive to the hyper-parameter settings.In this paper, we present a new framework for adapting Adam-typed methods, namely AdamT.Instead of applying a simple exponential weighted average, AdamT also includes the trend information when updating the parameters with the adaptive step size and gradients.The newly added term is expected to efficiently capture the non-horizontal moving patterns on the cost surface, and thus converge more rapidly.We show empirically the importance of the trend component, where AdamT outperforms the conventional Adam method constantly in both convex and non-convex settings.", "target": ["Wir stellen einen neuen Rahmen für die Anpassung von Methoden des Typs Adam vor, nämlich AdamT, um die Trend Informationen bei der Aktualisierung der Parameter mit der adaptiven Schrittgröße und den Gradienten einzubeziehen.", "Eine neue Art von Adam-Variante, die die lineare Methode von Holt zur Berechnung des geglätteten Impulses erster und zweiter Ordnung verwendet, anstatt den exponentiell gewichteten Durchschnitt zu verwenden."]} {"source": "As machine learning methods see greater adoption and implementation in high stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical.Classical approaches that assess feature importance (e.g. saliency maps) do not explain how and why a particular region of an image is relevant to the prediction.We propose a method that explains the outcome of a classification black-box by gradually exaggerating the semantic effect of a given class.Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually change the posterior probability from its original class to its negation.These counter-factually generated samples preserve features unrelated to the classification decision, such that a user can employ our method as a ``tuning knob'' to traverse a data manifold while crossing the decision boundary. Our method is model agnostic and only requires the output value and gradient of the predictor with respect to its input.", "target": ["Eine Methode zur Erläuterung eines Klassifizierers durch Erzeugung einer visuellen Störung eines Bildes, indem die semantischen Merkmale, die der Klassifizierer mit einer Zielbezeichnung assoziiert, übertrieben oder vermindert werden.", "Ein Modell, das, wenn eine Abfrage in eine Blackbox eingegeben wird, versucht, das Ergebnis zu erklären, indem es plausible und progressive Variationen der Abfrage liefert, die zu einer Änderung der Ausgabe führen können.", "Eine Methode zur Erklärung des Ergebnisses einer Black-Box-Klassifizierung von Bildern, die eine allmähliche Störung der Ergebnisse als Reaktion auf allmählich gestörte Eingangsabfragen erzeugt."]} {"source": "We study the problem of explaining a rich class of behavioral properties of deep neural networks.Our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on the property of interest using an axiomatically justified influence measure, and then providing an interpretation for the concepts these neurons represent.We evaluate our approach by training convolutional neural networks on Pubfig, ImageNet, and Diabetic Retinopathy datasets. 
Our evaluation demonstrates that influence-directed explanations (1) localize features used by the network, (2) isolate features distinguishing related instances, (3) help extract the essence of what the network learned about the class, and (4) assist in debugging misclassifications.", "target": ["Wir stellen einen einflussgesteuerten Ansatz vor, um Erklärungen für das Verhalten von tiefen Convolutional Netzwerken zu finden, und zeigen, wie dieser Ansatz verwendet werden kann, um eine breite Palette von Fragen zu beantworten, die mit früheren Arbeiten nicht beantwortet werden konnten.", "Eine Methode zur Messung des Einflusses, die bestimmte Axiome erfüllt, und ein Begriff des Einflusses, der verwendet werden kann, um festzustellen, welcher Eingabeteil den größten Einfluss auf die Ausgabe eines Neurons in einem tiefen neuronalen Netz hat.", "In diesem Beitrag wird vorgeschlagen, den Einfluss einzelner Neuronen in Bezug auf eine interessierende Größe zu messen, die von einem anderen Neuron repräsentiert wird."]} {"source": "Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily.By contrast, humans have an incredible ability to do one-shot or few-shot learning.For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us.Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data.This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.", "target": ["Wir stellen eine Technik vor, mit der Systeme zur Verarbeitung natürlicher Sprache ein neues Wort aus dem Kontext lernen können, wodurch sie wesentlich flexibler werden.", "Eine Technik zur Nutzung von Vorwissen, um Einbettungsrepräsentationen für neue Wörter mit minimalen Daten zu lernen."]} {"source": "Recent research developing neural network architectures with external memory have often used the benchmark bAbI question and answering dataset which provides a challenging number of tasks requiring reasoning.Here we employed a classic associative inference task from the human neuroscience literature in order to more carefully probe the reasoning capacity of existing memory-augmented architectures.This task is thought to capture the essence of reasoning -- the appreciation of distant relationships among elements distributed across multiple facts or memories.Surprisingly, we found that current architectures struggle to reason over long distance associations.Similar results were obtained on a more complex task involving finding the shortest path between nodes in a path.We therefore developed a novel architecture, MEMO, endowed with the capacity to reason over longer distances.This was accomplished with the addition of two novel components.First, it introduces a separation between memories/facts stored in external memory and the items that comprise these facts in external memory.Second, it makes use of an adaptive retrieval mechanism, allowing a variable number of ‘memory hops’ before the answer is produced.MEMO is capable of solving our novel reasoning tasks, as well as all 20 tasks in bAbI.", "target": ["Eine Speicherarchitektur zur Unterstützung des schlussfolgernden Denkens.", "Dieses Papier schlägt Änderungen an der 
Ende-zu-Ende Speicher-Netzwerk Architektur vor, stellt eine neue Paired-Associative-Inference Aufgabe vor, die die meisten bestehenden Modelle nur schwer lösen können, und zeigt, dass die vorgeschlagene Architektur die Aufgabe besser löst.", "Eine neue Aufgabe (Paired Associate Inference) aus der kognitiven Psychologie und ein Vorschlag für eine neue Speicherarchitektur mit Eigenschaften, die eine bessere Leistung bei der Paired Associate-Aufgabe ermöglichen."]} {"source": "Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency.They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count (the Xception architecture) and considerably reducing the number of parameters required to perform at a given level (the MobileNets family of architectures).Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results.In this work, we study how depthwise separable convolutions can be applied to neural machine translation.We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results.In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation.We also introduce a new super-separable convolution operation that further reduces the number of parameters and computational cost of the models.", "target": ["In der Tiefe trennbare Convolutions verbessern die neuronale maschinelle Übersetzung: je trennbarer, desto besser.", "In dieser Arbeit wird vorgeschlagen, in einem vollständig Convolutional neuronalen maschinellen Übersetzungsmodell tiefenweise trennbare Convolutional Layers zu verwenden, und es wird eine neue super-trennbare Convolutional Layer eingeführt, die die Rechenkosten weiter reduziert."]} {"source": "Interpreting generative adversarial network (GAN) training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has led to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs.For both classic GANs and f-GANs, there is an original variant of training and a \"non-saturating\" variant which uses an alternative form of generator gradient.The original variant is theoretically easier to study, but for GANs the alternative variant performs better in practice.The non-saturating scheme is often regarded as a simple modification to deal with optimization issues, but we show that in fact the non-saturating scheme for GANs is effectively optimizing a reverse KL-like f-divergence.We also develop a number of theoretical tools to help compare and classify f-divergences.We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training.", "target": ["Nicht sättigendes GAN-Training minimiert effektiv eine umgekehrte KL-ähnliche f-Divergenz.", "In diesem Beitrag wird ein nützlicher Ausdruck für die Klasse der f-Divergenzen vorgeschlagen, die theoretischen Eigenschaften der 
beliebten f-Divergenzen anhand neu entwickelter Werkzeuge untersucht und GANs mit dem nicht-sättigenden Trainingsschema untersucht."]} {"source": "We introduce a novel method for converting text data into abstract image representations, which allows image-based processing techniques (e.g. image classification networks) to be applied to text-based comparison problems.We apply the technique to entity disambiguation of inventor names in US patents.The method involves converting text from each pairwise comparison between two inventor name records into a 2D RGB (stacked) image representation.We then train an image classification neural network to discriminate between such pairwise comparison images, and use the trained network to label each pair of records as either matched (same inventor) or non-matched (different inventors), obtaining highly accurate results (F1: 99.09%, precision: 99.41%, recall: 98.76%).Our new text-to-image representation method could potentially be used more broadly for other NLP comparison problems, such as disambiguation of academic publications, or for problems that require simultaneous classification of both text and images.", "target": ["Wir stellen eine neuartige Textdarstellungsmethode vor, die es ermöglicht, Bildklassifikatoren auf Textklassifizierungsprobleme anzuwenden, und wenden die Methode auf die Disambiguierung von Erfindernamen für Patente an.", "Eine Methode zur Abbildung eines Paares von Textinformationen in ein 2D-RGB-Bild, das in 2D Convolutional neuronale Netze (Bildklassifizierer) eingespeist werden kann.", "Die Autoren befassen sich mit dem Problem der Disambiguierung von Erfindernamen für Patente und schlagen vor, eine Bildseiten-Darstellung der beiden zu vergleichenden Namensstränge zu erstellen und einen Bildklassifikator anzuwenden."]} {"source": "We propose a novel algorithm, Difference-Seeking Generative Adversarial Network (DSGAN), developed from traditional GAN.DSGAN considers the scenario that the training samples of target distribution, $p_{t}$, are difficult to collect.Suppose there are two distributions $p_{\\bar{d}}$ and $p_{d}$ such that the density of the target distribution can be the differences between the densities of $p_{\\bar{d}}$ and $p_{d}$.We show how to learn the target distribution $p_{t}$ only via samples from $p_{d}$ and $p_{\\bar{d}}$ (relatively easy to obtain).DSGAN has the flexibility to produce samples from various target distributions (e.g. 
the out-of-distribution).Two key applications, semi-supervised learning and adversarial training, are taken as examples to validate the effectiveness of DSGAN.We also provide theoretical analyses about the convergence of DSGAN.", "target": ["Wir haben das Modell \"Difference-Seeking Generative Adversarial Network\" (DSGAN) vorgeschlagen, um die Zielverteilung zu erlernen, für die es schwierig ist, Trainingsdaten zu sammeln.", "In diesem Papier wird DS-GAN vorgestellt, das darauf abzielt, den Unterschied zwischen zwei beliebigen Verteilungen zu erlernen, deren Stichproben schwer oder gar nicht zu erheben sind, und das seine Effektivität bei halbüberwachten Lern- und gegnerischen Trainingsaufgaben zeigt.", "In diesem Papier wird das Problem des Lernens eines GAN zur Erfassung einer Zielverteilung mit nur sehr wenigen verfügbaren Trainingsstichproben aus dieser Verteilung betrachtet."]} {"source": "Recently, Generative Adversarial Network (GAN) and a number of its variants have been widely used to solve the image-to-image translation problem and achieved extraordinary results in both a supervised and unsupervised manner.However, most GAN-based methods suffer from the imbalance problem between the generator and discriminator in practice.Namely, the relative model capacities of the generator and discriminator do not match, leading to mode collapse and/or diminished gradients.To tackle this problem, we propose a GuideGAN based on an attention mechanism.More specifically, we arm the discriminator with an attention mechanism so that it not only estimates the probability that its input is real, but also creates an attention map that highlights the critical features for such prediction.This attention map then assists the generator to produce more plausible and realistic images.We extensively evaluate the proposed GuideGAN framework on a number of image transfer tasks.Both qualitative results and quantitative comparison demonstrate the superiority of our proposed approach.", "target": ["Eine allgemeine Methode zur Verbesserung der Bildübersetzungsleistung des GAN-Rahmens durch Verwendung eines in die Aufmerksamkeit eingebetteten Diskriminators.", "Ein Feedback-Mechanismus im GAN-Rahmen, der die Qualität der erzeugten Bilder bei der Bild-zu-Bild-Übersetzung verbessert und dessen Diskriminator eine Karte ausgibt, die angibt, worauf sich der Generator konzentrieren sollte, um seine Ergebnisse überzeugender zu machen.", "Vorschlag für ein GAN mit einem aufmerksamkeitsbasierten Diskriminator für die I2I-Übersetzung, der die Wahrscheinlichkeit von echt/falsch und eine Aufmerksamkeitszuordnung liefert, die die Auffälligkeit für die Bilderzeugung widerspiegelt."]} {"source": "The problem of verifying whether a textual hypothesis holds based on the given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation.However, existing studies are mainly restricted to dealing with unstructured evidence (e.g., natural language sentences and documents, news, etc), while verification under structured evidence, such as tables, graphs, and databases, remains unexplored.This paper specifically aims to study the fact verification given semi-structured data as evidence.To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED.TabFact is challenging since it involves both soft linguistic reasoning 
and hard symbolic reasoning.To address these reasoning challenges, we design two different models: Table-BERT and Latent Program Algorithm (LPA).Table-BERT leverages the state-of-the-art pre-trained language model to encode the linearized tables and statements into continuous vectors for verification.LPA parses statements into LISP-like programs and executes them against the tables to obtain the returned binary value for verification.Both methods achieve similar accuracy but still lag far behind human performance.We also perform a comprehensive analysis to demonstrate great future opportunities.", "target": ["Wir schlagen einen neuen Datensatz vor, um das Entailment-Problem unter halbstrukturierten Tabellen als Prämisse zu untersuchen.", "In diesem Papier wird ein neuer Datensatz für die tabellenbasierte Faktenüberprüfung vorgeschlagen und es werden Methoden für diese Aufgabe vorgestellt.", "Die Autoren stellen das Problem der Faktenüberprüfung mit halbstrukturierten Datenquellen wie Tabellen vor, erstellen einen neuen Datensatz und evaluieren Basismodelle mit Variationen."]} {"source": "This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs.First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes.Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs.We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process.Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently.We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art.", "target": ["Wir entwickeln eine Deep Graph Matching-Architektur, die anfängliche Korrespondenzen verfeinert, um einen nachbarschaftlichen Konsens zu erreichen.", "Ein Rahmen für die Beantwortung von Fragen zum Graphenabgleich, bestehend aus lokalen Knoteneinbettungen mit einem Verfeinerungsschritt durch Nachrichtenübermittlung.", "Eine zweistufige GNN-basierte Architektur zur Herstellung von Korrespondenzen zwischen zwei Graphen, die sich bei realen Aufgaben des Bildabgleichs und des Abgleichs von Wissensgraphen gut bewährt."]} {"source": "This paper extends the proof of density of neural networks in the space of continuous (or even measurable) functions on Euclidean spaces to functions on compact sets of probability measures.By doing so, the work parallels results on mean-map embedding of probability measures in reproducing kernel Hilbert spaces that are more than a decade old. 
The work has wide practical consequences for multi-instance learning, where it theoretically justifies some recently proposed constructions.The result is then extended to Cartesian products, yielding universal approximation theorem for tree-structured domains, which naturally occur in data-exchange formats like JSON, XML, YAML, AVRO, and ProtoBuffer.This has important practical implications, as it enables to automatically create an architecture of neural networks for processing structured data (AutoML paradigms), as demonstrated by an accompanied library for JSON format.", "target": ["Dieser Beitrag erweitert den Nachweis der Dichte von neuronalen Netzen im Raum der kontinuierlichen (oder sogar messbaren) Funktionen auf euklidischen Räumen auf Funktionen über kompakten Mengen von Wahrscheinlichkeitsmaßen. ", "In diesem Beitrag werden die Approximationseigenschaften einer Familie neuronaler Netze untersucht, die für die Bewältigung von Lernproblemen mit mehreren Instanzen entwickelt wurden, und es wird gezeigt, dass die Ergebnisse für standardmäßige einschichtige Architekturen auch für diese Modelle gelten.", "Diese Arbeit verallgemeinert den universellen Approximationssatz auf reelle Funktionen im Raum der Maße."]} {"source": "Interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models.We propose Mahé, a novel approach to provide Model-Agnostic Hierarchical Explanations of how powerful machine learning models, such as deep neural networks, capture these interactions as either dependent on or free of the context of data instances.Specifically, Mahé provides context-dependent explanations by a novel local interpretation algorithm that effectively captures any-order interactions, and obtains context-free explanations through generalizing context-dependent interactions to explain global behaviors.Experimental results show that Mahé obtains improved local interaction interpretations over state-of-the-art methods and successfully provides explanations of interactions that are context-free.", "target": ["Ein neuer Rahmen für kontextabhängige und kontextfreie Erklärungen von Vorhersagen", "Die Autoren erweitern die lineare lokale Attributionsmethode LIME zur Interpretation von Black-Box-Modellen und schlagen eine Methode zur Unterscheidung zwischen kontextabhängigen und kontextfreien Interaktionen vor.", "Eine Methode, die hierarchische Erklärungen für ein Modell liefern kann, einschließlich kontextabhängiger und kontextfreier Erklärungen durch einen lokalen Interpretationsalgorithmus."]} {"source": "To realize the promise of ubiquitous embedded deep network inference, it is essential to seek limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. 
Here, for the first time, we demonstrate ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3, densenet-161, and VGG-16bn networks on the ImageNet classification benchmark that, at 8-bit precision exceed the accuracy of the full-precision baseline networks after one epoch of finetuning, thereby leveraging the availability of pretrained models.We also demonstrate ResNet-18, ResNet-34, and ResNet-50 4-bit models that match the accuracy of the full-precision baseline networks -- the highest scores to date.Surprisingly, the weights of the low-precision networks are very close (in cosine similarity) to the weights of the corresponding baseline networks, making training from scratch unnecessary.We find that gradient noise due to quantization during training increases with reduced precision, and seek ways to overcome this noise.The number of iterations required by stochastic gradient descent to achieve a given training error is related to the square of (a) the distance of the initial solution from the final plus (b) the maximum variance of the gradient estimates. By drawing inspiration from this observation, we (a) reduce solution distance by starting with pretrained fp32 precision baseline networks and fine-tuning, and (b) combat noise introduced by quantizing weights and activations during training, by using larger batches along with matched learning rate annealing. Sensitivity analysis indicates that these techniques, coupled with proper activation function range calibration, offer a promising heuristic to discover low-precision networks, if they exist, close to fp32 precision baseline networks.", "target": ["Das Fine-Tuning nach der Quantisierung entspricht oder übertrifft den Stand der Technik in Bezug auf Netzwerke mit voller Genauigkeit sowohl bei 8- als auch bei 4-Bit Quantisierung.", "In diesem Beitrag wird vorgeschlagen, die Leistung von Modellen mit geringer Genauigkeit zu verbessern, indem die Quantisierung an vortrainierten Modellen durchgeführt wird, große Batchgrößen verwendet werden und eine geeignete Lernrate beim auskühlen mit längerer Trainingszeit verwendet wird.", "Eine Methode für niedrige Bit-Quantisierung, um Inferenz auf effizienter Hardware zu ermöglichen, die volle Genauigkeit auf ResNet50 mit 4-Bit-Gewichten und -Aktivierungen erreicht, basierend auf der Beobachtung, dass Fine-Tuning bei niedriger Präzision Störungen im Gradienten einführt."]} {"source": "Analysis methods which enable us to better understand the representations and functioning of neural models of language are increasingly needed as deep learning becomes the dominant approach in NLP.Here we present two methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) which allow us to directly quantify how strongly the information encoded in neural activation patterns corresponds to information represented by symbolic structures such as syntax trees.We first validate our methods on the case of a simple synthetic language for arithmetic expressions with clearly defined syntax and semantics, and show that they exhibit the expected pattern of results.We then apply our methods to correlate neural representations of English sentences with their constituency parse trees.", "target": ["Zwei Methoden, die auf der Representational Similarity Analysis (RSA) und Tree Kernels (TK) basieren und direkt quantifizieren, wie stark die in neuronalen Aktivierungsmustern kodierte Information mit der durch symbolische Strukturen repräsentierten Information übereinstimmt."]} {"source": "Supervised deep 
learning requires a large number of training samples with annotations (e.g. label class for classification task, pixel- or voxel-wise label map for segmentation tasks), which are expensive and time-consuming to obtain.During the training of a deep neural network, the annotated samples are fed into the network in a mini-batch way, where they are often regarded as of equal importance.However, some of the samples may become less informative during training, as the magnitude of the gradient starts to vanish for these samples.In the meantime, other samples of higher utility or hardness may be more demanded for the training process to proceed and require more exploitation.To address the challenges of expensive annotations and loss of sample informativeness, here we propose a novel training framework which adaptively selects informative samples that are fed to the training process.The adaptive selection or sampling is performed based on a hardness-aware strategy in the latent space constructed by a generative model.To evaluate the proposed training framework, we perform experiments on three different datasets, including MNIST and CIFAR-10 for image classification task and a medical image dataset IVUS for biophysical simulation task.On all three datasets, the proposed framework outperforms a random sampling method, which demonstrates the effectiveness of our framework.", "target": ["In diesem Artikel wird ein Rahmen für dateneffizientes Repräsentationslernen durch adaptives Sampling im latenten Raum vorgestellt.", "Eine Methode zur sequentiellen und adaptiven Auswahl von Trainingsbeispielen, die dem Trainingsalgorithmus vorgelegt werden, wobei die Auswahl im latenten Raum auf der Grundlage der Auswahl von Beispielen in Richtung des Gradienten des Verlustes erfolgt.", "Eine Methode zur effizienten Auswahl harter Proben während des Trainings neuronaler Netze, die durch einen variationalen Autoencoder erreicht wird, der Proben in einem latenten Raum kodiert."]} {"source": "Existing methods for AI-generated artworks still struggle with generating high-quality stylized content, where high-level semantics are preserved, or separating fine-grained styles from various artists.We propose a novel Generative Adversarial Disentanglement Network which can disentangle two complementary factors of variations when only one of them is labelled in general, and fully decompose complex anime illustrations into style and content in particular.Training such a model is challenging, since given a style, various content data may exist but not the other way round.Our approach is divided into two stages, one that encodes an input image into a style-independent content, and one based on a dual-conditional generator.We demonstrate the ability to generate high-fidelity anime portraits with a fixed content and a large variety of styles from over a thousand artists, and vice versa, using a single end-to-end network and with applications in style transfer.We show this unique capability as well as superior output to the current state-of-the-art.", "target": ["Eine auf adversarialem Training basierende Methode zur Unterscheidung von zwei komplementären Variationsgruppen in einem Datensatz, von denen nur eine gekennzeichnet ist, getestet an Stil und Inhalt von Anime-Illustrationen.", "Eine Methode zur Bilderzeugung, die bedingte GANs und bedingte VAEs kombiniert, um originalgetreue Anime-Bilder mit verschiedenen Stilen von verschiedenen Künstlern zu erzeugen. 
", "Vorschlag für eine Methode zum Erlernen von entkoppelten Stil- (Künstler-) und Inhaltsdarstellungen in Anime."]} {"source": "Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns.Inspired by the intuition that humans are more sensitive to the lower-frequency (larger-scale) patterns we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel.We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness.Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.", "target": ["Wir führen eine Glättungsregularisierung für Convolutional Kernels von CNN ein, die dazu beitragen kann, die adversariale Robustheit zu verbessern und zu wahrnehmungsgerechten Gradienten zu führen", "In diesem Beitrag wird ein neues Regularisierungsschema vorgeschlagen, das die Convolutional Kernel glättet. Es wird argumentiert, dass eine geringere Abhängigkeit des neuronalen Netzes von hochfrequenten Komponenten die Robustheit gegenüber feindlichen Beispielen erhöht. ", "Die Autoren schlagen eine Methode zum Erlernen glatterer Convolutional Kernels vor, insbesondere einen Regularisierer, der große Änderungen zwischen aufeinanderfolgenden Pixeln des Kernels bestraft, mit der Intuition, die Verwendung hochfrequenter Eingangskomponenten zu bestrafen."]} {"source": "Despite an ever growing literature on reinforcement learning algorithms and applications, much less is known about their statistical inference.In this paper, we investigate the large-sample behaviors of the Q-value estimates with closed-form characterizations of the asymptotic variances.This allows us to efficiently construct confidence regions for Q-value and optimal value functions, and to develop policies to minimize their estimation errors.This also leads to a policy exploration strategy that relies on estimating the relative discrepancies among the Q estimates.Numerical experiments show superior performances of our exploration strategy than other benchmark approaches.", "target": ["Wir untersuchen das Verhalten der Q-Wert-Schätzungen bei großen Stichproben und schlagen eine effiziente Erkundungsstrategie vor, die sich auf die Schätzung der relativen Diskrepanzen zwischen den Q-Schätzungen stützt. "]} {"source": "Entailment vectors are a principled way to encode in a vector what information is known and what is unknown. They are designed to model relations where one vector should include all the information in another vector, called entailment. This paper investigates the unsupervised learning of entailment vectors for the semantics of words. 
Using simple entailment-based models of the semantics of words in text (distributional semantics), we induce entailment-vector word embeddings which outperform the best previous results for predicting entailment between words, in unsupervised and semi-supervised experiments on hyponymy.", "target": ["Wir trainieren Worteinbettungen auf der Grundlage von Entailment anstelle von Ähnlichkeit und sagen erfolgreich lexikalisches Entailment voraus.", "In diesem Beitrag wird ein Worteinbettungsalgorithmus für lexikalisches Entailment vorgestellt, der sich an die Arbeit von Henderson und Popa (ACL, 2016) anlehnt."]} {"source": "We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner.Our scheme pits two versions of the same agent, Alice and Bob, against one another.Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on two kinds of environments: (nearly) reversible environments and environments that can be reset.Alice will \"propose\" the task by doing a sequence of actions and then Bob must undo or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent.When Bob is deployed on an RL task within the environment, this unsupervised training reduces the number of supervised episodes needed to learn, and in some cases converges to a higher reward.", "target": ["Unüberwachtes Lernen für Reinforcement Learning unter Verwendung eines automatischen Lehrplans für das Selbstspiel.", "Eine neue Formulierung für die unbeaufsichtigte Erkundung der Umgebung, um später bei einer bestimmten Aufgabe zu helfen, wobei ein Agent immer schwierigere Aufgaben vorschlägt und der lernende Agent versucht, sie zu erfüllen.", "Ein Selbstspielmodell, bei dem ein Agent lernt, Aufgaben vorzuschlagen, die für ihn leicht, für einen Gegner aber schwierig sind, wodurch ein sich bewegendes Ziel von Selbstspielzielen und Lernplänen entsteht. "]} {"source": "Many real-world data sets are represented as graphs, such as citation links, social media, and biological interaction.The volatile graph structure makes it non-trivial to employ convolutional neural networks (CNN's) for graph data processing.Recently, graph attention network (GAT) has proven a promising attempt by combining graph neural networks with attention mechanism, so as to achieve message passing in graphs with arbitrary structures.However, the attention in GAT is computed mainly based on the similarity between the node content, while the structure of the graph remains largely unexploited (except in masking the attention out of one-hop neighbors).In this paper, we propose an \"ADaptive Structural Fingerprint\" (ADSF) model to fully exploit both topological details of the graph and content features of the nodes.The key idea is to contextualize each node with a weighted, learnable receptive field encoding rich and diverse local graph structures.By doing this, structural interactions between the nodes can be inferred accurately, thus improving subsequent attention layer as well as the convergence of learning.Furthermore, our model provides a useful platform for different subspaces of node features and various scales of graph structures to ``cross-talk'' with each other through the learning of multi-head attention, being particularly useful in handling complex real-world data. 
Encouraging performance is observed on a number of benchmark data sets in node classification.", "target": ["Nutzung reichhaltiger struktureller Details in graphenstrukturierten Daten durch adaptive struktuelle Fingerabdrücke.", "Eine auf Graphenstrukturen basierende Methodik zur Erweiterung des Aufmerksamkeitsmechanismus von graphischen neuronalen Netzen, mit der Hauptidee, Interaktionen zwischen verschiedenen Arten von Knoten in der lokalen Nachbarschaft eines Wurzelknotens zu untersuchen.", "Diese Arbeit erweitert die Idee der Selbstaufmerksamkeit in Graphen-NNs, die typischerweise auf der Ähnlichkeit von Merkmalen zwischen Knoten basiert, um strukturelle Ähnlichkeit miteinzubeziehen."]} {"source": "Informed and robust decision making in the face of uncertainty is critical for robots that perform physical tasks alongside people.We formulate this as a Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs).While Bayes-optimality is theoretically the gold standard, existing algorithms do not scale well to continuous state and action spaces.We propose a scalable solution that builds on the following insight: in the absence of uncertainty, each latent MDP is easier to solve.We split the challenge into two simpler components.First, we obtain an ensemble of clairvoyant experts and fuse their advice to compute a baseline policy.Second, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty.Our algorithm, Bayesian Residual Policy Optimization (BRPO), imports the scalability of policy gradient methods as well as the initialization from prior models.BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.", "target": ["Wir schlagen einen skalierbaren Bayes'schen Reinforcement Learning Algorithmus vor, der eine Bayes'sche Korrektur über ein Ensemble von hellsichtigen Experten erlernt, um Probleme mit komplexen latenten Belohnungen und Dynamiken zu lösen.", "Diese Arbeit betrachtet das Bayesian Reinforcement Learning Problem über latente Markov Decision Processes (MDPs), indem Entscheidungen mit Experten getroffen werden.", "In diesem Beitrag motivieren die Autoren einen Lernalgorithmus, genannt Bayesian Residual Policy Optimization (BRPO), für Bayesian Reinforcement Learning Probleme und schlagen ihn vor."]} {"source": "One of the mysteries in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth.This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks.For an $m$ hidden node shallow neural network with ReLU activation and $n$ training data, we show as long as $m$ is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function.Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum.We believe these insights are also useful in analyzing deep models and other first order methods.", "target": ["Wir beweisen, dass der Gradientenabstieg bei überparametrisierten neuronalen Netzen 
einen Trainingsverlust von Null mit einer linearen Rate erreicht.", "Diese Arbeit befasst sich mit der Optimierung eines zweischichtigen überparametrisierten ReLU-Netzes mit quadratischem Verlust und einem Datensatz mit willkürlichen Labels.", "In diesem Papier werden neuronale Netze mit einer versteckten Schicht und quadratischem Verlust untersucht. Es wird gezeigt, dass in einer überparametrisierten Umgebung eine zufällige Initialisierung und ein Gradientenabstieg zu einem Verlust von Null führt."]} {"source": "For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements.Often, the forward process from parameter- to measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement.To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined.We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs).Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost.Due to invertibility, a model of the corresponding inverse process is learned implicitly.Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space.We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters.", "target": ["Analyse von inversen Problemen mit invertierbaren neuronalen Netzen.", "Der Autor schlägt vor, invertierbare Netzwerke zu verwenden, um mehrdeutige inverse Probleme zu lösen, und schlägt vor, nicht nur das Forward Modell, sondern auch das inverse Modell mit einem MMD-Kritiker zu trainieren.", "Die Forschungsarbeit schlägt ein invertierbares Netzwerk mit Beobachtungen für die posteriore Wahrscheinlichkeit von komplexen Eingangsverteilungen mit einem theoretisch gültigen bidirektionalen Trainingsschema vor."]} {"source": "Decisions made by machine learning systems have increasing influence on the world.Yet it is common for machine learning algorithms to assume that no such influence exists.An example is the use of the i.i.d.assumption in online learning for applications such as content recommendation, where the (choice of) content displayed can change users' perceptions and preferences, or even drive them away, causing a shift in the distribution of users.Generally speaking, it is possible for an algorithm to change the distribution of its own inputs.We introduce the term self-induced distributional shift (SIDS) to describe this phenomenon.A large body of work in reinforcement learning and causal machine learning aims to deal with distributional shift caused by deploying learning systems previously trained offline.Our goal is similar, but distinct: we point out that changes to the learning algorithm, such as the introduction of meta-learning, can reveal hidden incentives for distributional shift (HIDS), and aim to diagnose and prevent problems associated with hidden incentives.We design a simple environment as a \"unit test\" for HIDS, as well as a content recommendation 
environment which allows us to disentangle different types of SIDS.  We demonstrate the potential for HIDS to cause unexpected or undesirable behavior in these environments, and propose and test a mitigation strategy.", "target": ["Leistungskennzahlen sind unvollständige Angaben; der Zweck heiligt nicht immer die Mittel.", "Die Autoren zeigen, wie Meta-Lernen die versteckten Anreize für Verteilungsverschiebungen aufdeckt, und schlagen einen Ansatz vor, der auf dem Austausch von Lernenden zwischen Umgebungen basiert, um selbst verursachte Verteilungsverschiebungen zu reduzieren.", "Der Artikel verallgemeinert den inhärenten Anreiz für den Lernenden zu gewinnen, indem er die Aufgabe beim Meta-Lernen auf eine größere Klasse von Problemen ausweitet."]} {"source": "In one-class-learning tasks, only the normal case can be modeled with data, whereas the variation of all possible anomalies is too large to be described sufficiently by samples.Thus, due to the lack of representative data, the wide-spread discriminative approaches cannot cover such learning tasks, and rather generative models, which attempt to learn the input density of the normal cases, are used.However, generative models suffer from a large input dimensionality (as in images) and are typically inefficient learners.We propose to learn the data distribution more efficiently with a multi-hypotheses autoencoder.Moreover, the model is criticized by a discriminator, which prevents artificial data modes not supported by data, and which enforces diversity across hypotheses.This consistency-based anomaly detection (ConAD) framework allows the reliable identification of out-of-distribution samples.For anomaly detection on CIFAR-10, it yields up to 3.9% points improvement over previously reported results.On a real anomaly detection task, the approach reduces the error of the baseline models from 6.8% to 1.5%.", "target": ["Wir schlagen einen Ansatz zur Erkennung von Anomalien vor, bei dem die Modellierung der Vordergrundklasse über mehrere lokale Dichten mit einem adversarialen Training kombiniert wird.", "Der Beitrag schlägt eine Technik vor, um generative Modelle robuster zu machen, indem man sie mit der lokalen Dichte in Einklang bringt."]} {"source": "Generative Adversarial Networks (GAN) can achieve promising performance on learning complex data distributions on different types of data.In this paper, we first show that a straightforward extension of an existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data.We propose a two-fold modification to a GAN algorithm to be able to generate point clouds (PC-GAN).First, we combine ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process.A key component of our method is that we train a posterior inference network for the hidden variables.Second, PC-GAN defines a generic framework that can incorporate many existing GAN algorithms.We further propose a sandwiching objective, which results in a tighter Wasserstein distance estimate than the commonly used dual form in WGAN.We validate our claims on the ModelNet40 benchmark dataset and observe that PC-GAN trained by the sandwiching objective achieves better results on test data than existing methods.We also conduct studies on several tasks, including generalization on unseen point clouds, latent space interpolation, classification, and image to point clouds transformation, to demonstrate the 
versatility of the proposed PC-GAN algorithm.", "target": ["Wir schlagen eine GAN-Variante vor, die lernt, Punktwolken zu erzeugen. Es wurden verschiedene Studien durchgeführt, darunter eine engere Wasserstein-Abstandsschätzung, bedingte Erzeugung, Verallgemeinerung auf ungesehene Punktwolken und Bild zu Punktwolken.", "In diesem Artikel wird vorgeschlagen, GAN zur Erzeugung von 3D-Punktwolken zu verwenden und ein Sandwiching-Ziel einzuführen, das die obere und untere Grenze des Wasserstein-Abstands zwischen den Verteilungen mittelt.", "Dieser Artikel schlägt ein neues generatives Modell für ungeordnete Daten vor, mit einer besonderen Anwendung auf Punktwolken, das eine Inferenzmethode und eine neuartige Zielfunktion beinhaltet. "]} {"source": "Existing attention mechanisms, are mostly item-based in that a model is trained to attend to individual items in a collection (the memory) where each item has a predefined, fixed granularity, e.g., a character or a word.Intuitively, an area in the memory consisting of multiple items can be worth attending to as a whole.We propose area attention: a way to attend to an area of the memory, where each area contains a group of items that are either spatially adjacent when the memory has a 2-dimensional structure, such as images, or temporally adjacent for 1-dimensional memory, such as natural language sentences.Importantly, the size of an area, i.e., the number of items in an area or the level of aggregation, is dynamically determined via learning, which can vary depending on the learned coherence of the adjacent items.By giving the model the option to attend to an area of items, instead of only individual items, a model can attend to information with varying granularity.Area attention can work along multi-head attention for attending to multiple areas in the memory.We evaluate area attention on two tasks: neural machine translation (both character and token-level) and image captioning, and improve upon strong (state-of-the-art) baselines in all the cases.These improvements are obtainable with a basic form of area attention that is parameter free.In addition to proposing the novel concept of area attention, we contribute an efficient way for computing it by leveraging the technique of summed area tables.", "target": ["In diesem Beitrag wird ein neuartiger Ansatz für Aufmerksamkeitsmechanismen vorgestellt, der für eine Reihe von Aufgaben wie maschinelle Übersetzung und Bildbeschriftung von Nutzen sein kann.", "In diesem Beitrag werden die derzeitigen Aufmerksamkeitsmodelle von der Wortebene auf die Kombination benachbarter Wörter ausgedehnt, indem die Modelle auf Elemente angewendet werden, die aus zusammengefügten benachbarten Wörtern bestehen."]} {"source": "We identify a phenomenon, which we refer to as *multi-model forgetting*, that occurs when sequentially training multiple deep networks with partially-shared parameters; the performance of previously-trained models degrades as one optimizes a subsequent one, due to the overwriting of shared parameters.To overcome this, we introduce a statistically-justified weight plasticity loss that regularizes the learning of a model's shared parameters according to their importance for the previous models, and demonstrate its effectiveness when training two models sequentially and for neural architecture search.Adding weight plasticity in neural architecture search preserves the best models to the end of the search and yields improved results in both natural language processing and computer vision tasks.", 
"target": ["Wir identifizieren ein Phänomen, die neuronale Gehirnwäsche, und führen einen statistisch begründeten Gewichts Plastizitätverlust durch, um dies zu überwinden.", "In diesem Beitrag wird das Phänomen der neuralen Gehirnwäsche erörtert, das sich darauf bezieht, dass die Leistung eines Modells durch ein anderes Modell beeinflusst wird, das Parameter des Modells teilt."]} {"source": "Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery.However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations.To address this issue we introduce Morpho-MNIST, a framework that aims to answer: \"to what extent has my model learned to represent specific factors of variation in the data?\"We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity.We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation.", "target": ["In diesem Beitrag wird Morpho-MNIST vorgestellt, eine Sammlung von Formmetriken und Störungen, die einen Schritt zur quantitativen Evaluierung des Repräsentationslernens darstellt.", "In diesem Beitrag wird das Problem der Bewertung und Diagnose der mit einem generativen Modell erlernten Darstellungen erörtert.", "Die Autoren stellen eine Reihe von Kriterien zur Kategorisierung von MNISt-Digisten und eine Reihe interessanter Störungen zur Modifizierung des MNIST-Datensatzes vor."]} {"source": "Exploration in environments with sparse rewards is a key challenge for reinforcement learning.How do we design agents with generic inductive biases so that they can explore in a consistent manner instead of just using local exploration schemes like epsilon-greedy?We propose an unsupervised reinforcement learning agent which learns a discrete pixel grouping model that preserves spatial geometry of the sensors and implicitly of the environment as well.We use this representation to derive geometric intrinsic reward functions, like centroid coordinates and area, and learn policies to control each one of them with off-policy learning.These policies form a basis set of behaviors (options) which allows us explore in a consistent way and use them in a hierarchical reinforcement learning setup to solve for extrinsically defined rewards.We show that our approach can scale to a variety of domains with competitive performance, including navigation in 3D environments and Atari games with sparse rewards.", "target": ["Strukturierte Exploration im tiefen Reinforcement Learning durch unüberwachte visuelle Abstraktionsentdeckung und -kontrolle", "In dem Artikel werden visuelle Abstraktionen vorgestellt, die für das Reinforcement Learning verwendet werden, bei dem ein Algorithmus lernt, jede Abstraktion zu \"kontrollieren\" und die Optionen auszuwählen, um die Gesamtaufgabe zu erfüllen."]} {"source": "Combinatorial optimization is a common theme in computer science.While in general such problems are NP-Hard, from a practical point of view, locally optimal solutions can be useful.In some combinatorial problems however, it can be hard to define meaningful solution 
neighborhoods that connect large portions of the search space, thus hindering methods that search this space directly.We suggest to circumvent such cases by utilizing a policy gradient algorithm that transforms the problem to the continuous domain, and to optimize a new surrogate objective that renders the former as generic stochastic optimizer.This is achieved by producing a surrogate objective whose distribution is fixed and predetermined, thus removing the need to fine-tune various hyper-parameters in a case by case manner.Since we are interested in methods which can successfully recover locally optimal solutions, we use the problem of finding locally maximal cliques as a challenging experimental benchmark, and we report results on a large dataset of graphs that is designed to test clique finding algorithms.Notably, we show in this benchmark that fixing the distribution of the surrogate is key to consistently recovering locally optimal solutions, and that our surrogate objective leads to an algorithm that outperforms other methods we have tested in a number of measures.", "target": ["Ein neuer Strategie Gradient Algorithmus, der für die Lösung von kombinatorischen Black-Box-Optimierungsproblemen entwickelt wurde. Der Algorithmus stützt sich nur auf Funktionsbewertungen und liefert mit hoher Wahrscheinlichkeit lokal optimale Lösungen.", "Die Arbeit schlägt einen Ansatz zur Konstruktion von Ersatzzielen für die Anwendung von Policy-Gradienten-Methoden in der kombinatorischen Optimierung vor, um die Notwendigkeit der Abstimmung von Hyperparametern zu verringern.", "In dem Artikel wird vorgeschlagen, den Belohnungsbegriff im Policy-Gradienten-Algorithmus durch seine zentrierte empirische kumulative Verteilung zu ersetzen. "]} {"source": "Deterministic neural networks (NNs) are increasingly being deployed in safety critical domains, where calibrated, robust and efficient measures of uncertainty are crucial.While it is possible to train regression networks to output the parameters of a probability distribution by maximizing a Gaussian likelihood function, the resulting model remains oblivious to the underlying confidence of its predictions.In this paper, we propose a novel method for training deterministic NNs to not only estimate the desired target but also the associated evidence in support of that target.We accomplish this by placing evidential priors over our original Gaussian likelihood function and training our NN to infer the hyperparameters of our evidential distribution.We impose priors during training such that the model is penalized when its predicted evidence is not aligned with the correct output.Thus the model estimates not only the probabilistic mean and variance of our target but also the underlying uncertainty associated with each of those parameters.We observe that our evidential regression method learns well-calibrated measures of uncertainty on various benchmarks, scales to complex computer vision tasks, and is robust to adversarial input perturbations.", "target": ["Schnelle, kalibrierte Unsicherheitsabschätzung für neuronale Netze ohne Stichproben", "In diesem Beitrag wird ein neuartiger Ansatz zur Schätzung der Zuverlässigkeit von Vorhersagen in einer Regressionsumgebung vorgeschlagen, der die Tür zu Online-Anwendungen mit vollständig integrierten Unsicherheitsschätzungen öffnet.", "In diesem Artikel wird die tiefe evidenzbasierte Regression vorgeschlagen, eine Methode zum Trainieren neuronaler Netze, die nicht nur die Ausgabe, sondern auch die damit verbundenen Beweise 
zur Unterstützung dieser Ausgabe schätzt."]} {"source": "The Lottery Ticket Hypothesis from Frankle & Carbin (2019) conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and yield superior performance than their original counterparts.The proposed algorithm to search for such sub-networks (winning tickets), Iterative Magnitude Pruning (IMP), consistently finds sub-networks with 90-95% less parameters which indeed train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning.In this paper, we propose a new algorithm to search for winning tickets, Continuous Sparsification, which continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies.We show empirically that our method is capable of finding tickets that outperforms the ones learned by Iterative Magnitude Pruning, and at the same time providing up to 5 times faster search, when measured in number of training epochs.", "target": ["Wir schlagen einen neuen Algorithmus vor, der mittels neuronaler Netze schnell Lotterie Gewinner-Lose findet.", "In diesem Artikel wird eine neuartige Zielfunktion vorgeschlagen, die zur gemeinsamen Optimierung eines Klassifizierungsziels verwendet werden kann und gleichzeitig die Sparsifikation in einem Netz fördert, das eine hohe Genauigkeit aufweist.", "In dieser Arbeit wird eine neue iterative Pruning Methode mit dem Namen Continuous Sparsification vorgeschlagen, die das aktuelle Gewicht kontinuierlich pruned, bis das Zielverhältnis erreicht ist."]} {"source": "In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence.However, the growing complexity of machine learning datasets and models may violate such assumptions.Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints.Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training.We analyze the following problem: \"given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?\"We focus on the number of optimization iterations as the representative resource.Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget.Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing.We support our claim through extensive experiments with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation).We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget.We also revisit existing approaches for fast convergence and show that budget-aware learning schedules readily outperform such approaches under (the practical but under-explored) budgeted training setting.", "target": ["Einführung einer formalen Einstellung für das budgetierte Training und Vorschlag für einen budgetgerechten linearen Lernratenplan.", "In dieser Arbeit wird eine Technik zur Abstimmung der Lernrate für das Training neuronaler Netze bei einer festen Anzahl von 
Epochen vorgestellt.", "In diesem Beitrag wurde untersucht, welcher Lernratenplan verwendet werden sollte, wenn die Anzahl der Iterationen begrenzt ist, wobei ein neues Konzept, der BAS (Budget-Aware Schedule), verwendet wurde."]} {"source": "We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives.Our approach uses intrinsic rewards that are based on a weighted distance of nearest neighbors in the low dimensional representational space to gauge novelty.We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space.One key element of our approach is that we perform more gradient steps in-between every environment step in order to ensure the model accuracy.We test our approach on a number of maze tasks, as well as a control problem and show that our exploration approach is more sample-efficient compared to strong baselines.", "target": ["Wir führen die Erkundung mit Hilfe von intrinsischen Belohnungen durch, die auf einem gewichteten Abstand der nächsten Nachbarn im Repräsentationsraum beruhen.", "Diese Arbeit schlägt eine Methode zur effizienten Exploration in tabellarischen MDPs sowie eine einfache Kontrollumgebung vor, die deterministische Encoder zum Erlernen einer niedrigdimensionalen Darstellung der Umgebungsdynamik verwendet.", "In diesem Beitrag wird eine Methode zur stichprobeneffizienten Exploration für RL-Agenten vorgeschlagen, die eine Kombination aus modellbasierten und modellfreien Ansätzen mit einer Neuheitsmetrik verwendet."]} {"source": "Neural networks are vulnerable to small adversarial perturbations.While existing literature largely focused on the vulnerability of learned models, we demonstrate an intriguing phenomenon that adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution.Even a semantics-preserving transformations on the input data distribution can cause a significantly different robustness for the adversarially trained model that is both trained and evaluated on the new distribution.We show this by constructing semantically- identical variants for MNIST and CIFAR10 respectively, and show that standardly trained models achieve similar clean accuracies on them, but adversarially trained models achieve significantly different robustness accuracies.This counter-intuitive phenomenon indicates that input data distribution alone can affect the adversarial robustness of trained neural networks, not necessarily the tasks themselves.Lastly, we discuss the practical implications on evaluating adversarial robustness, and make initial attempts to understand this complex phenomenon.", "target": ["Die Robustheit trainierter PGD-Modelle reagiert empfindlich auf semantikerhaltende Transformationen von Bilddatensätzen, was bedeutet, dass die Bewertung robuster Lernalgorithmen in der Praxis heikel ist."]} {"source": "Sample inefficiency is a long-lasting problem in reinforcement learning (RL). The state-of-the-art uses action value function to derive policy while it usually involves an extensive search over the state-action space and unstable optimization.Towards the sample-efficient RL, we propose ranking policy gradient (RPG), a policy gradient method that learns the optimal rank of a set of discrete actions. 
To accelerate the learning of policy gradient methods, we establish the equivalence between maximizing the lower bound of return and imitating a near-optimal policy without accessing any oracles.These results lead to a general off-policy learning framework, which preserves the optimality, reduces variance, and improves the sample-efficiency.We conduct extensive experiments showing that when consolidating with the off-policy learning framework, RPG substantially reduces the sample complexity, comparing to the state-of-the-art.", "target": ["Wir schlagen einen Rangordnungsregel-Gradienten vor, der die optimale Rangordnung von Aktionen erlernt, um den Ertrag zu maximieren. Wir schlagen einen allgemeinen Off-Policy Lern Framework mit den Eigenschaften der Optimalitätserhaltung, Varianzreduktion und Stichproben-Effizienz vor.", "In diesem Artikel wird vorgeschlagen, die Politik durch eine Form der Rangfolge neu zu parametrisieren, um das RL-Problem in ein überwachtes Lernproblem umzuwandeln.", "In diesem Papier wird eine neue Sichtweise auf Regel Gradienten Methoden aus der Perspektive des Rankings vorgestellt. "]} {"source": "We introduce MultiGrain, a neural network architecture that generates compact image embedding vectors that solve multiple tasks of different granularity: class, instance, and copy recognition.MultiGrain is trained jointly for classification by optimizing the cross-entropy loss and for instance/copy recognition by optimizing a self-supervised ranking loss.The self-supervised loss only uses data augmentation and thus does not require additional labels.Remarkably, the unified embeddings are not only much more compact than using several specialized embeddings, but they also have the same or better accuracy.When fed to a linear classifier, MultiGrain using ResNet-50 achieves 79.4% top-1 accuracy on ImageNet, a +1.8% absolute improvement over the the current state-of-the-art AutoAugment method.The same embeddings perform on par with state-of-the-art instance retrieval with images of moderate resolution.An ablation study shows that our approach benefits from the self-supervision, the pooling method and the mini-batches with repeated augmentations of the same image.", "target": ["Durch die Kombination von Klassifizierung und Bildabfrage in einer neuronalen Netzwerkarchitektur erzielen wir eine Verbesserung für beide Aufgaben.", "In diesem Beitrag wird eine einheitliche Einbettung für die Bildklassifizierung und den Instanzenabruf vorgeschlagen, um die Leistung für beide Aufgaben zu verbessern.", "In der Arbeit wird vorgeschlagen, ein tiefes neuronales Netz für die Bildklassifizierung, Instanz- und Kopiererkennung gemeinsam zu trainieren."]} {"source": "In this paper, we investigate mapping the hyponymy relation of wordnet to feature vectors. We aim to model lexical knowledge in such a way that it can be used as input in generic machine-learning models, such as phrase entailment predictors. We propose two models.The first one leverages an existing mapping of words to feature vectors (fasttext), and attempts to classify such vectors as within or outside of each class.The second model is fully supervised, using solely wordnet as a ground truth.It maps each concept to an interval or a disjunction thereof. 
On the first model, we approach, but not quite attain state of the art performance.The second model can achieve near-perfect accuracy.", "target": ["Wir untersuchen die Abbildung der Hyponymie-Relation von Wortordnungen auf Merkmalsvektoren", "In dieser Arbeit wird untersucht, wie Hyponymie zwischen Wörtern auf Merkmalsrepräsentationen abgebildet werden kann.", "In diesem Beitrag wird der Begriff der Hyponymie in Wortvektordarstellungen untersucht und eine Methode beschrieben, mit der WordNet-Beziehungen in einer Baumstruktur organisiert werden, um Hyponymie zu definieren."]} {"source": "Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims.In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation.Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.", "target": ["Wir entwickeln einen leistungsfähigeren Generator für natürliche Sprache, indem wir diskriminierende Bewertungsfunktionen trainieren, die die Kandidatengenerationen in Bezug auf verschiedene Eigenschaften guten Schreibens einstufen.", "In diesem Werk wird vorgeschlagen, mehrere induktive Verzerrungen zusammenzuführen, die Inkonsistenzen bei der Sequenzdekodierung korrigieren sollen, und die Parameter einer vordefinierten Kombination verschiedener Teilziele zu optimieren. ", "Diese Arbeit kombiniert das RNN-Sprachmodell mit mehreren diskriminativ trainierten Modellen, um die Spracherzeugung zu verbessern.", "In diesem Beitrag wird vorgeschlagen, die Generierung von RNN-Sprachmodellen mit Hilfe erweiterter Zielsetzungen zu verbessern, die sich an Grice' Kommunikationsmaximen orientieren."]} {"source": "In recent years, the efficiency and even the feasibility of traditional load-balancing policies are challenged by the rapid growth of cloud infrastructure with increasing levels of server heterogeneity and increasing size of cloud services and applications.In such many software-load-balancers heterogeneous systems, traditional solutions, such as JSQ, incur an increasing communication overhead, whereas low-communication alternatives, such as JSQ(d) and the recently proposed JIQ scheme are either unstable or provide poor performance.We argue that a better low-communication load balancing scheme can be established by allowing each dispatcher to have a different view of the system and keep using JSQ, rather than greedily trying to avoid starvation on a per-decision basis. 
Accordingly, we introduce the Loosely-Shortest-Queue family of load balancing algorithms.Roughly speaking, in Loosely-Shortest-Queue, each dispatcher keeps a different approximation of the server queue lengths and routes jobs to the shortest among them.Communication is used only to update the approximations and make sure that they are not too far from the real queue lengths in expectation.We formally establish the strong stability of any Loosely-Shortest-Queue policy and provide an easy-to-verify sufficient condition for verifying that a policy is Loosely-Shortest-Queue.We further demonstrate that the Loosely-Shortest-Queue approach allows constructing throughput optimal policies with an arbitrarily low communication budget.Finally, using extensive simulations that consider homogeneous, heterogeneous and highly skewed heterogeneous systems in scenarios with a single dispatcher as well as with multiple dispatchers, we show that the examined Loosely-Shortest-Queue example policies are always stable as dictated by theory.Moreover, it exhibits an appealing performance and significantly outperforms well-known low-communication policies, such as JSQ(d) and JIQ, while using a similar communication budget.", "target": ["Skalierbare und kommunikationsarme Lastausgleichslösung für heterogene Server-Multi-Dispatcher-Systeme mit starken theoretischen Garantien und vielversprechenden empirischen Ergebnissen. "]} {"source": "We propose a novel quantitative measure to predict the performance of a deep neural network classifier, where the measure is derived exclusively from the graph structure of the network.We expect that this measure is a fundamental first step in developing a method to evaluate new network architectures and reduce the reliance on the computationally expensive trial and error or \"brute force\" optimisation processes involved in model selection.The measure is derived in the context of multi-layer perceptrons (MLPs), but the definitions are shown to be useful also in the context of deep convolutional neural networks (CNN), where it is able to estimate and compare the relative performance of different types of neural networks, such as VGG, ResNet, and DenseNet.Our measure is also used to study the effects of some important \"hidden\" hyper-parameters of the DenseNet architecture, such as number of layers, growth rate and the dimension of 1x1 convolutions in DenseNet-BC.Ultimately, our measure facilitates the optimisation of the DenseNet design, which shows improved results compared to the baseline.", "target": ["Ein quantitatives Maß zur Vorhersage der Leistung von tiefen neuronalen Netzmodellen.", "Diese Arbeit schlägt eine neuartige Größe vor, die die Anzahl der Pfade im neuronalen Netz zählt, die die Leistung neuronaler Netze mit der gleichen Anzahl von Parametern vorhersagt.", "Diese Arbeit stellt eine Methode zum Zählen von Pfaden in tiefen neuronalen Netzen vor, die wohl zur Messung der Leistung des Netzes verwendet werden kann."]} {"source": "There is a stark disparity between the learning rate schedules used in the practice of large scale machine learning and what are considered admissible learning rate schedules prescribed in the theory of stochastic approximation.Recent results, such as in the 'super-convergence' methods which use oscillating learning rates, serve to emphasize this point even more.One plausible explanation is that non-convex neural network training procedures are better suited to the use of fundamentally different learning rate schedules, such as the ``cut
the learning rate every constant number of epochs'' method (which more closely resembles an exponentially decaying learning rate schedule); note that this widely used schedule is in stark contrast to the polynomial decay schemes prescribed in the stochastic approximation literature, which are indeed shown to be (worst case) optimal for classes of convex optimization problems.The main contribution of this work shows that the picture is far more nuanced, where we do not even need to move to non-convex optimization to show other learning rate schemes can be far more effective.In fact, even for the simple case of stochastic linear regression with a fixed time horizon, the rate achieved by any polynomial decay scheme is sub-optimal compared to the statistical minimax rate (by a factor of condition number); in contrast the ``cut the learning rate every constant number of epochs'' provides an exponential improvement (depending only logarithmically on the condition number) compared to any polynomial decay scheme. Finally, it is important to ask if our theoretical insights are somehow fundamentally tied to quadratic loss minimization (where we have circumvented minimax lower bounds for more general convex optimization problems)?Here, we conjecture that recent results which make the gradient norm small at a near optimal rate, for both convex and non-convex optimization, may also provide more insights into learning rate schedules used in practice.", "target": ["In dieser Arbeit wird eine rigorose Studie darüber vorgelegt, warum praktisch verwendete Lernratenschemata (für ein gegebenes Rechenbudget) erhebliche Vorteile bieten, obwohl diese Schemata von der klassischen Theorie der stochastischen Approximation nicht befürwortet werden.", "In diesem Beitrag wird eine theoretische Untersuchung verschiedener Lernratenpläne vorgestellt, die zu statistischen Minimax-Untergrenzen sowohl für Polynom- als auch für Constant-and-Cut-Schemata führte.", "Der Beitrag untersucht die Auswirkungen der Wahl der Lernrate bei stochastischer Optimierung, wobei der Schwerpunkt auf Least-Mean-Squares mit abnehmender Schrittweite liegt."]} {"source": "We present Value Propagation (VProp), a set of parameter-efficient differentiable planning modules built on Value Iteration which can successfully be trained using reinforcement learning to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments.We show that the modules enable learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems.We evaluate on static and dynamic configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario, with more complex dynamics, and pixels as input.", "target": ["Wir stellen Planer vor, die auf Convnets basieren, die stichprobeneffizient sind und sich auf größere Instanzen von Navigations- und Wegfindungsproblemen verallgemeinern lassen.", "Es werden Methoden vorgeschlagen, die als Abwandlungen von Value Iteration Networks (VIN) betrachtet werden können, mit einigen Verbesserungen, die auf die Verbesserung der Stichprobeneffizienz und die Verallgemeinerung auf große Umgebungsgrößen abzielen.", "Das Papier stellt eine Erweiterung der originalen Werteiterations Netzwerke (VIN) durch die Berücksichtigung einer zustandsabhängigen Übergangsfunktion."]} 
{"source": "Learning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks.On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks.On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings.We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings.In this paper, we formulate the learning of word embeddings as a lifelong learning process.Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain-level.Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledges\\footnote{We will release the code after final revisions.}.Wealso demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks.", "target": ["Erlernen einer besseren Einbettung von Bereichen durch lebenslanges Lernen und Meta Lernen", "Stellt eine Methode des lebenslangen Lernens zum Erlernen von Worteinbettungen vor.", "Diese Arbeit schlägt einen Ansatz zum Erlernen von Einbettungen in neuen Domänen vor und übertrifft die Baseline in einer Aspekt-Extraktionsaufgabe deutlich. "]} {"source": "Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance loss.Despite its effectiveness, existing regularization-based parameter pruning methods usually drive weights towards zero with large and constant regularization factors, which neglects the fact that the expressiveness of CNNs is fragile and needs a more gentle way of regularization for the networks to adapt during pruning.To solve this problem, we propose a new regularization-based pruning method (named IncReg) to incrementally assign different regularization factors to different weight groups based on their relative importance, whose effectiveness is proved on popular CNNs compared with state-of-the-art methods.", "target": ["Wir schlagen eine neue regularisierungsbasierte Pruning Methode (IncReg genannt) vor, um verschiedene Regularisierungsfaktoren schrittweise verschiedenen Gewichtsgruppen auf der Grundlage ihrer relativen Bedeutung zuzuordnen.", "In diesem Artikel wird eine auf Regularisierung basierende Pruning Methode vorgeschlagen, um verschiedene Regularisierungsfaktoren schrittweise verschiedenen Gewichtsgruppen auf der Grundlage ihrer relativen Bedeutung zuzuordnen."]} {"source": "Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov's accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD).Rigorously speaking, fast gradient methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact.In the stochastic case, the popular explanations for their wide applicability is that when these fast 
gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain.This work provides a counterpoint to this belief by proving that there exist simple problem instances where these methods cannot outperform SGD despite the best setting of its parameters.These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances.These results suggest (along with empirical evidence) that HB or NAG's practical performance gains are a by-product of minibatching.Furthermore, this work provides a viable (and provable) alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance.This algorithm, referred to as Accelerated Stochastic Gradient Descent (ASGD), is a simple to implement stochastic algorithm, based on a relatively less popular variant of Nesterov's Acceleration.Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD.The code for implementing the ASGD Algorithm can be found at https://github.com/rahulkidambi/AccSGD.", "target": ["Bestehende Momentum-/Beschleunigungsverfahren wie die Heavy-Ball-Methode und die Nesterov-Beschleunigung, die mit stochastischen Gradienten eingesetzt werden, bringen keine Verbesserung gegenüber dem einfachen stochastischen Gradientenabstieg, insbesondere wenn sie mit kleinen Batchgrößen eingesetzt werden."]} {"source": "Oversubscription planning (OSP) is the problem of finding plans that maximize the utility value of their end state while staying within a specified cost bound.Recently, it has been shown that OSP problems can be reformulated as classical planning problems with multiple cost functions but no utilities. Here we take advantage of this reformulation to show that OSP problems can be solved optimally using the A* search algorithm, in contrast to previous approaches that have used variations on branch-and-bound search.This allows many powerful techniques developed for classical planning to be applied to OSP problems.We also introduce novel bound-sensitive heuristics, which are able to reason about the primary cost of a solution while taking into account secondary cost functions and bounds, to provide superior guidance compared to heuristics that do not take these bounds into account.We implement two such bound-sensitive variants of existing classical planning heuristics, and show experimentally that the resulting search is significantly more informed than comparable heuristics that do not consider bounds.", "target": ["Wir zeigen, dass Überbelegungs-Planungsaufgaben mit A* gelöst werden können, und stellen neuartige beschränkungssensitive Heuristiken für Überbelegungs-Planungsaufgaben vor.", "Es wird ein Ansatz zur optimalen Lösung von Überbelegungsplanungsaufgaben (OSP) vorgestellt, der eine Übersetzung der klassischen Planung mit mehreren Kostenfunktionen verwendet.", "In der Arbeit werden Änderungen an zulässigen Heuristiken vorgeschlagen, um sie in einem Multi-Kriterien Umfeld besser zu informieren."]} {"source": "Previous work on adversarially robust neural networks requires large training sets and computationally expensive training procedures. On the other hand, few-shot learning methods are highly vulnerable to adversarial examples. The goal of our work is to produce networks which both perform well at few-shot tasks and are simultaneously robust to adversarial examples. 
We adapt adversarial training for meta-learning, we adapt robust architectural features to small networks for meta-learning, we test pre-processing defenses as an alternative to adversarial training for meta-learning, and we investigate the advantages of robust meta-learning over robust transfer-learning for few-shot tasks. This work provides a thorough analysis of adversarially robust methods in the context of meta-learning, and we lay the foundation for future work on defenses for few-shot tasks.", "target": ["Wir entwickeln Meta-Lernmethoden für adversarial robustes Few-Shot Learning.", "In diesem Beitrag wird eine Methode vorgestellt, die die Robustheit des Few-Shot Lernens durch die Einführung eines Angriffs auf die Abfragedaten in der Fine-Tuning Phase eines Meta-Lernalgorithmus verbessert.", "Die Autoren dieser Arbeit schlagen einen neuen Ansatz für das Training eines robusten Few-Shot Modells vor. "]} {"source": "Many of our core assumptions about how neural networks operate remain empirically untested.One common assumption is that convolutional neural networks need to be stable to small translations and deformations to solve image recognition tasks.For many years, this stability was baked into CNN architectures by incorporating interleaved pooling layers.Recently, however, interleaved pooling has largely been abandoned.This raises a number of questions: Are our intuitions about deformation stability right at all?Is it important?Is pooling necessary for deformation invariance?If not, how is deformation invariance achieved in its absence?In this work, we rigorously test these questions, and find that deformation stability in convolutional networks is more nuanced than it first appears: (1) Deformation invariance is not a binary property, but rather that different tasks require different degrees of deformation stability at different layers.(2) Deformation stability is not a fixed property of a network and is heavily adjusted over the course of training, largely through the smoothness of the convolutional filters.(3) Interleaved pooling layers are neither necessary nor sufficient for achieving the optimal form of deformation stability for natural image classification.(4) Pooling confers \\emph{too much} deformation stability for image classification at initialization, and during training, networks have to learn to \\emph{counteract} this inductive bias.Together, these findings provide new insights into the role of interleaved pooling and deformation invariance in CNNs, and demonstrate the importance of rigorous empirical testing of even our most basic assumptions about the working of neural networks.", "target": ["Wir stellen fest, dass das Pooling allein nicht für die Deformationsstabilität von CNNs ausschlaggebend ist und dass die Filter Glätte eine wichtige Rolle für die Stabilität spielt."]} {"source": "Deep neural networks (DNNs) have been shown to over-fit a dataset when being trained with noisy labels for a long enough time.To overcome this problem, we present a simple and effective method self-ensemble label filtering (SELF) to progressively filter out the wrong labels during training.Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stops learning on the filtered noisy labels.For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs.We show that these ensemble estimates yield more accurate identification of 
inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch.While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss.We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios.It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.", "target": ["Wir schlagen ein Self-Ensemble Framework vor, um robustere Deep Learning Modelle unter störhaften, gelabelten Datensätzen zu trainieren.", "In dieser Arbeit wird eine \"Self-Ensemble-Label-Filterung\" für das Lernen mit störhaften Labels vorgeschlagen, bei der das Label stören instanzunabhängig ist, was eine genauere Identifizierung inkonsistenter Vorhersagen ermöglicht. ", "In diesem Artikel wird ein Algorithmus für das Lernen aus Daten mit verrauschten Etiketten vorgeschlagen, der abwechselnd das Modell aktualisiert und Beispiele entfernt, die aussehen, als hätten sie störende Labels."]} {"source": "Long training times of deep neural networks are a bottleneck in machine learning research.The major impediment to fast training is the quadratic growth of both memory and compute requirements of dense and convolutional layers with respect to their information bandwidth.Recently, training `a priori' sparse networks has been proposed as a method for allowing layers to retain high information bandwidth, while keeping memory and compute low.However, the choice of which sparse topology should be used in these networks is unclear.In this work, we provide a theoretical foundation for the choice of intra-layer topology.First, we derive a new sparse neural network initialization scheme that allows us to explore the space of very deep sparse networks.Next, we evaluate several topologies and show that seemingly similar topologies can often have a large difference in attainable accuracy.To explain these differences, we develop a data-free heuristic that can evaluate a topology independently from the dataset the network will be trained on.We then derive a set of requirements that make a good topology, and arrive at a single topology that satisfies all of them.", "target": ["Wir untersuchen das Pruning von DNNs vor dem Training und geben eine Antwort auf die Frage, welche Topologie für das Training von a priori spärlichen Netzwerken verwendet werden sollte.", "Die Autoren schlagen vor, dichte Schichten durch spärlich verbundene lineare Schichten zu ersetzen und einen Ansatz zu finden, um die beste Topologie zu finden, indem gemessen wird, wie gut die spärlichen Schichten die zufälligen Gewichte ihrer dichten Gegenstücke approximieren.", "Die Arbeit schlägt eine dünn besetzte Kaskadenarchitektur vor, die eine Multiplikation mehrerer dünn besetzter Matrizen und ein spezielles Konnektivitätsmuster ist, das andere Überlegungen übertrifft."]} {"source": "Deep learning models require extensive architecture design exploration and hyperparameter optimization to perform well on a given task.The exploration of the model design space is often made by a human expert, and optimized using a combination of grid search and search heuristics over a large space of possible choices.Neural Architecture Search (NAS) is a Reinforcement Learning approach that has been proposed to automate architecture design.NAS has 
been successfully applied to generate Neural Networks that rival the best human-designed architectures.However, NAS requires sampling, constructing, and training hundreds to thousands of models to achieve well-performing architectures.This procedure needs to be executed from scratch for each new task.The application of NAS to a wide set of tasks currently lacks a way to transfer generalizable knowledge across tasks.In this paper, we present the Multitask Neural Model Search (MNMS) controller.Our goal is to learn a generalizable framework that can condition model construction on successful model searches for previously seen tasks, thus significantly speeding up the search for new tasks.We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task.We then show that pre-trained MNMS controllers can transfer learning to new tasks.By leveraging knowledge from previous searches, we find that pre-trained MNMS models start from a better location in the search space and reduce search time on unseen tasks, while still discovering models that outperform published human-designed models.", "target": ["Wir präsentieren Multitask Neural Model Search, einen Meta-Learner, der Modelle für mehrere Aufgaben gleichzeitig entwerfen und das Lernen auf unbekannte Aufgaben übertragen kann.", "In dieser Arbeit wird die neuronale Architektursuche auf das Problem des Multitasking-Lernens ausgedehnt, bei dem ein aufgabenabhängiger Controller für die Modellsuche gelernt wird, um mehrere Aufgaben gleichzeitig zu bewältigen.", "In diesem Beitrag fassen die Autoren ihre Arbeit an ein Framework zusammen, das als Multitask Neural Model Search Controller bezeichnet wird und die automatische Konstruktion neuronaler Netze für mehrere Aufgaben gleichzeitig ermöglicht."]} {"source": "This work studies the problem of modeling non-linear visual processes by leveraging deep generative architectures for learning linear, Gaussian models of observed sequences.We propose a joint learning framework, combining a multivariate autoregressive model and deep convolutional generative networks.After justification of theoretical assumptions of linearization, we propose an architecture that allows Variational Autoencoders and Generative Adversarial Networks to simultaneously learn the non-linear observation as well as the linear state-transition model from a sequence of observed frames.Finally, we demonstrate our approach on conceptual toy examples and dynamic textures.", "target": ["Wir modellieren nicht-lineare visuelle Prozesse als autoregressive Störungen mittels generativem Deep Learning.", "Vorschlagen einer neuen Methode, die nichtlineare visuelle Prozesse mit einer tiefen Version eines linearen Prozesses (Markov-Prozess) modelliert.", "In diesem Artikel wird ein neues tiefes generatives Modell für Sequenzen, insbesondere Bildsequenzen und Videos, vorgeschlagen, das in einem Teil des Modells eine lineare Struktur verwendet."]} {"source": "Partial differential equations (PDEs) play a prominent role in many disciplines such as applied mathematics, physics, chemistry, material science, computer science, etc.PDEs are commonly derived based on physical laws or empirical observations.However, the governing equations for many complex systems in modern applications are still not fully known.With the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and
efficiently stored.Such vast quantity of data offers new opportunities for data-driven discovery of hidden physical laws.Inspired by the latest development of neural network designs in deep learning, we propose a new feed-forward deep network, called PDE-Net, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models.The basic idea of the proposed PDE-Net is to learn differential operators by learning convolution kernels (filters), and apply neural networks or other machine learning methods to approximate the unknown nonlinear responses.Comparing with existing approaches, which either assume the form of the nonlinear response is known or fix certain finite difference approximations of differential operators, our approach has the most flexibility by learning both differential operators and the nonlinear responses.A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network.These constraints are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originated from wavelet theory).We also discuss relations of the PDE-Net with some existing networks in computer vision such as Network-In-Network (NIN) and Residual Neural Network (ResNet).Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.", "target": ["In diesem Artikel wird ein neues Feed-Forward-Netzwerk, das so genannte PDE-Net, vorgeschlagen, um PDEs aus Daten zu lernen. ", "Diese Arbeit erläutert die Verwendung von Deep Learning Maschinen zum Zweck der Identifizierung dynamischer Systeme, die durch PDEs spezifiziert sind.", "Der Artikel schlägt einen auf einem neuronalen Netz basierenden Algorithmus für das Lernen aus Daten vor, die sich aus dynamischen Systemen mit Gleichungen ergeben, die als partielle Differentialgleichungen geschrieben werden können.", "Diese Arbeit befasst sich mit der Modellierung komplexer dynamischer Systeme durch nichtparametrische partielle Differentialgleichungen unter Verwendung neuronaler Architekturen, wobei die wichtigste Idee des Papiers (PDE-Netz) darin besteht, sowohl Differentialoperatoren als auch die Funktion, die die PDE regelt, zu lernen."]} {"source": "Each training step for a variational autoencoder (VAE) requires us to sample from the approximate posterior, so we usually choose simple (e.g. factorised) approximate posteriors in which sampling is an efficient computation that fully exploits GPU parallelism. However, such simple approximate posteriors are often insufficient, as they eliminate statistical dependencies in the posterior. While it is possible to use normalizing flow approximate posteriors for continuous latents, there is nothing analogous for discrete latents.The most natural approach to model discrete dependencies is an autoregressive distribution, but sampling from such distributions is inherently sequential and thus slow. We develop a fast, parallel sampling procedure for autoregressive distributions based on fixed-point iterations which enables efficient and accurate variational inference in discrete state-space models. 
To optimize the variational bound, we considered two ways to evaluate probabilities: inserting the relaxed samples directly into the pmf for the discrete distribution, or converting to continuous logistic latent variables and interpreting the K-step fixed-point iterations as a normalizing flow. We found that converting to continuous latent variables gave considerable additional scope for mismatch between the true and approximate posteriors, which resulted in biased inferences, we thus used the former approach. We tested our approach on the neuroscience problem of inferring discrete spiking activity from noisy calcium-imaging data, and found that it gave accurate connectivity estimates in an order of magnitude less time.", "target": ["Wir geben ein schnelles normalisierungsflussähnliches Stichprobenverfahren für diskrete latente Variablenmodelle an.", "In dieser Arbeit wird eine autoregressive Filter-Variationsapproximation zur Parameterschätzung in diskreten dynamischen Systemen unter Verwendung von Fixpunkt-Iterationen verwendet.", "Die Autoren setzen eine allgemeine autoregressive Posterior-Familie voraus für diskrete Variablen oder deren kontinuierliche Entlastungen. ", "Dieser Artikel hat zwei Hauptbeiträge: Es erweitert Normalisierungsflüsse auf diskrete Einstellungen und stellt eine ungefähre Festpunkt-Aktualisierungsregel für autoregressive Zeitreihen vor, die die GPU-Parallelität nutzen kann. "]} {"source": "Deep neural networks (DNNs) had great success on NLP tasks such as language modeling, machine translation and certain question answering (QA) tasks.However, the success is limited at more knowledge intensive tasks such as QA from a big corpus.Existing end-to-end deep QA models (Miller et al., 2016; Weston et al., 2014) need to read the entire text after observing the question, and therefore their complexity in responding a question is linear in the text size.This is prohibitive for practical tasks such as QA from Wikipedia, a novel, or the Web.We propose to solve this scalability issue by using symbolic meaning representations, which can be indexed and retrieved efficiently with complexity that is independent of the text size.More specifically, we use sequence-to-sequence models to encode knowledge symbolically and generate programs to answer questions from the encoded knowledge.We apply our approach, called the N-Gram Machine (NGM), to the bAbI tasks (Weston et al., 2015) and a special version of them (“life-long bAbI”) which has stories of up to 10 million sentences.Our experiments show that NGM can successfully solve both of these tasks accurately and efficiently.Unlike fully differentiable memory models, NGM’s time complexity and answering quality are not affected by the story length.The whole system of NGM is trained end-to-end with REINFORCE (Williams, 1992).To avoid high variance in gradient estimation, which is typical in discrete latent variable models, we use beam search instead of sampling.To tackle the exponentially large search space, we use a stabilized auto-encoding objective and a structure tweak procedure to iteratively reduce and refine the search space.", "target": ["Wir schlagen einen Rahmen vor, der lernt, Wissen symbolisch zu kodieren und Programme zu generieren, um über das kodierte Wissen nachzudenken.", "Die Autoren schlagen die N-Gram-Maschine vor, um Fragen über lange Dokumente zu beantworten.", "In dieser Arbeit wird die n-Gramm-Maschine vorgestellt, ein Modell, das Sätze in einfache symbolische Darstellungen kodiert, die effizient abgefragt werden 
können."]} {"source": "We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge.In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions.We explain when this can work, using the assumption that the changes in distributions are localized (e.g. to one of the marginals, for example due to an intervention on one of the variables).We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e. that need to be adapted (those of the modified variables).We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs.Finally, motivated by the AI agent point of view (e.g. of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning.Experiments in the two-variable case validate the proposed ideas and theoretical results.", "target": ["In diesem Artikel wird ein Meta-Lernziel vorgeschlagen, das auf der Geschwindigkeit der Anpassung an Transferverteilungen basiert, um eine modulare Dekomposition und kausale Variablen zu entdecken.", "Diese Arbeit zeigt, dass sich ein Modell mit der richtigen Grundstruktur schneller an eine kausale Intervention anpasst als ein Modell mit der falschen Struktur.", "In dieser Arbeit schlugen die Autoren einen allgemeinen und systematischen Rahmen für ein Meta-Transfer Ziel vor, das das Lernen der kausalen Struktur bei unbekannten Interventionen einschließt."]} {"source": "Continual learning is a longstanding goal of artificial intelligence, but is often counfounded by catastrophic forgetting that prevents neural networks from learning tasks sequentially.Previous methods in continual learning have demonstrated how to mitigate catastrophic forgetting, and learn new tasks while retaining performance on the previous tasks.We analyze catastrophic forgetting from the perspective of change in classifier likelihood and propose a simple L1 minimization criterion which can be adapted to different use cases.We further investigate two ways to minimize forgetting as quantified by this criterion and propose strategies to achieve finer control over forgetting.Finally, we evaluate our strategies on 3 datasets of varying difficulty and demonstrate improvements over previously known L2 strategies for mitigating catastrophic forgetting.", "target": ["Eine andere Perspektive des katastrophalen Vergessens.", "In diesem Beitrag wird ein Framework zur Bekämpfung des katastrophalen Vergessens vorgestellt, der auf der Änderung des Verlustterms beruht, um die Änderungen der Klassifizierungswahrscheinlichkeit zu minimieren, die durch eine Taylor-Reihen-Approximation erzielt wird.", "Diese Arbeit versucht, das Problem des kontinuierlichen Lernens zu lösen, indem es sich auf Regularisierungsansätze konzentriert und eine L_1-Strategie zur Entschärfung des Problems vorschlägt."]} {"source": "We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attributeediting workflow.Current face modeling methods using 3DMM suffer from the lack of local control.We 
thus create a 3DMM by combining local part-based 3DMM for the eyes, nose, mouth, ears, and facial mask regions.Our local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMM to ensure that the combined 3DMM is expressive while allowing accurate reconstruction.The editing controls we provide to the user are intuitive as they are extracted from anthropometric measurements found in the literature.Out of a large set of possible anthropometric measurements, we filter the ones that have meaningful generative power given the face data set.We bind the measurements to the part-based 3DMM through mapping matrices derived from our data set of facial scans.Our part-based 3DMM is compact yet accurate, and compared to other 3DMM methods, it provides a new trade-off between local and global control.We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation.The results show that our part-based 3DMM approach has excellent generative properties and allows intuitive local control to the user.", "target": ["Wir schlagen einen Ansatz zur Konstruktion realistischer veränderbarer 3D-Gesichtsmodelle (3DMM) vor, der einen intuitiven Arbeitsablauf zur Bearbeitung von Gesichtsattributen durch die Auswahl der besten Sätze von Eigenvektoren und anthropometrischen Messungen ermöglicht.", "Schlägt ein stückweise veränderbares Modell für menschliche Gesichtsnetze vor und schlägt auch eine Zuordnung zwischen anthropometrischen Messungen des Gesichts und den Parametern des Modells vor, um Gesichter mit gewünschten Attributen zu synthetisieren und zu bearbeiten. ", "In diesem Beitrag wird eine Methode für ein teilbasiertes veränderbares Gesichtsmodell beschrieben, das eine lokalisierte Benutzersteuerung ermöglicht."]} {"source": "We review eight machine learning classification algorithms to analyze Electroencephalographic (EEG) signals in order to distinguish EEG patterns associated with five basic educational tasks.There is a large variety of classifiers being used in this EEG-based Brain-Computer Interface (BCI) field.While previous EEG experiments used several classifiers in the same experiments or reviewed different algorithms on datasets from different experiments, our approach focuses on reviewing eight classifier categories on the same dataset, including linear classifiers, non-linear Bayesian classifiers, nearest neighbour classifiers, ensemble methods, adaptive classifiers, tensor classifiers, transfer learning and deep learning.Besides, we intend to find an approach which can run smoothly on the current mainstream personal computers and smartphones. 
The empirical evaluation demonstrated that Random Forest and LSTM (Long Short-Term Memory) outperform other approaches.We used a data set in which users were conducting five frequently-conducted learning-related tasks, including reading, writing, and typing.Results showed that these best two algorithms could correctly classify different users with an accuracy increase of 5% to 9%, using each task independently.Within each subject, the tasks could be recognized with an accuracy increase of 4% to 7%, compared with other approaches.This work suggests that Random Forest could be a recommended approach (fast and accurate) for current mainstream hardware, while LSTM has the potential to be the first-choice approach when the mainstream computers and smartphones can process more data in a shorter time.", "target": ["Zwei Algorithmen schnitten bei einem EEG-basierten BCI-Experiment besser ab als acht andere."]} {"source": "Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems.In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction.We introduce two communication protocols - one grounded in the semantics of the game, and one which is a priori ungrounded. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded, cheap talk channel to do the same. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge.We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.", "target": ["Wir bringen Agenten bei, nur mit Hilfe von Reinforcement Learning zu verhandeln; egoistische Agenten können dies tun, aber nur über einen vertrauenswürdigen Kommunikationskanal, und prosoziale Agenten können mit billigem Gerede verhandeln.", "Die Autoren beschreiben eine Variante des Verhandlungsspiels mit der Berücksichtigung eines zweiten Kommunikationskanals für billiges Sprechen und stellen fest, dass der zweite Kanal die Verhandlungsergebnisse verbessert.", "In diesem Beitrag wird untersucht, wie Agenten lernen können, zu kommunizieren, um eine Verhandlungsaufgabe zu lösen, und es wird festgestellt, dass prosoziale Agenten in der Lage sind, mit Hilfe von RL zu lernen, Symbole zu erden, eigennützige Agenten jedoch nicht.", "Untersucht die Frage, wie Agenten die Kommunikation nutzen können, um ihre Gewinne in einem einfachen Verhandlungsspiel zu maximieren."]} {"source": "The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class.The recently introduced meta-learning approaches tackle this problem by learning a generic classifier across a large number of multiclass classification tasks and generalizing the model to a new task.Yet, even with such meta-learning, the low-data problem in the novel classification task still remains.In this paper, we propose Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.Specifically, we propose to learn to propagate labels from labeled instances to unlabeled test instances, by learning a graph 
construction module that exploits the manifold structure in the data.TPN jointly learns both the parameters of feature embedding and the graph construction in an end-to-end manner. We validate TPN on multiple benchmark datasets, on which it largely outperforms existing few-shot learning approaches and achieves the state-of-the-art results.", "target": ["Wir schlagen ein neuartiges Meta-Lern Framework für transduktive Inferenz vor, das die gesamte Testmenge auf einmal klassifiziert, um das Problem der geringen Datenmenge zu lindern.", "Diese Arbeit schlägt vor, das Few-Shot Lernen auf eine transduktive Art und Weise anzugehen, indem es ein Label Propagations Modell in einer Ende-zu-Ende Weise lernt. Es ist das erste System, das Label Propagation für transduktives Few-Shot Lernen lernt und effektive empirische Ergebnisse produziert. ", "In dieser Arbeit wird ein Meta-Lernsystem vorgeschlagen, das unbeschriftete Daten durch das Erlernen der graphbasierten Label-Propogation in einer Ende-zu-Ende Weise nutzt.", "Studien zum Few-Shot Lernen in einer transduktiven Umgebung: Verwendung von Meta-Lernen, um zu lernen, wie man Labels von Trainingsbeispielen auf Testbeispiele überträgt. "]} {"source": "We describe the use of an automated scheduling system for observation policy design and to schedule operations of the NASA (National Aeronautics and Space Administration) ECOSystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS).We describe the adaptation of the Compressed Large-scale Activity Scheduler and Planner (CLASP) scheduling system to the ECOSTRESS scheduling problem, highlighting multiple use cases for automated scheduling and several challenges for the scheduling technology: handling long-term campaigns with changing information, Mass Storage Unit Ring Buffer operations challenges, and orbit uncertainty.The described scheduling system has been used for operations of the ECOSTRESS instrument since its nominal operations start July 2018 and is expected to operate until mission end in Summer 2019.", "target": ["Wir beschreiben den Einsatz eines automatisierten Planungssystems für die Entwicklung von Beobachtungsstrategien und die Planung des Betriebs der ECOSTRESS-Mission der NASA.", "In diesem Beitrag wird eine Anpassung eines automatischen Planungssystems, CLASP, vorgestellt, um ein EO-Experiment (ECOSTRESS) auf der ISS zu planen. 
"]} {"source": "Adversarial examples are modified samples that preserve original image structures but deviate classifiers.Researchers have put efforts into developing methods for generating adversarial examples and finding out origins.Past research put much attention on decision boundary changes caused by these methods.This paper, in contrast, discusses the origin of adversarial examples from a more underlying knowledge representation point of view.Human beings can learn and classify prototypes as well as transformations of objects.While neural networks store learned knowledge in a more hybrid way of combining all prototypes and transformations as a whole distribution.Hybrid storage may lead to lower distances between different classes so that small modifications can mislead the classifier.A one-step distribution imitation method is designed to imitate distribution of the nearest different class neighbor.Experiments show that simply by imitating distributions from a training set without any knowledge of the classifier can still lead to obvious impacts on classification results from deep networks.It also implies that adversarial examples can be in more forms than small perturbations.Potential ways of alleviating adversarial examples are discussed from the representation point of view.The first path is to change the encoding of data sent to the training step.Training data that are more prototypical can help seize more robust and accurate structural knowledge.The second path requires constructing learning frameworks with improved representations.", "target": ["Die hybride Speicherung und Darstellung von gelerntem Wissen kann ein Grund für adversarial Beispiele sein."]} {"source": "Differently from the popular Deep Q-Network (DQN) learning, Alternating Q-learning (AltQ) does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient.Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance.Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before.In this paper, we first provide a solid exploration on how well AltQ performs with Adam.We then take a further step to improve the implementation by adopting the technique of parameter restart.More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance than the DQN learning method.The convergence rate of the slightly modified version of the proposed algorithms is characterized under the linear function approximation.To the best of our knowledge, this is the first theoretical study on the Adam-type algorithms in Q-learning.", "target": ["Neue Experimente und Theorie zum Adam basierten Q-Learning.", "Diese Arbeit liefert ein Konvergenzergebnis für traditionelles Q-Lernen mit linearer Funktionsannäherung bei Verwendung einer Adam-ähnlichen Aktualisierung. 
", "In diesem Beitrag wird eine Methode zur Verbesserung des AltQ-Algorithmus beschrieben, bei der eine Kombination aus einem Adam-Optimierer und einem regelmäßigen Neustart der internen Parameter des Adam-Optimierers verwendet wird."]} {"source": "In search for more accurate predictive models, we customize capsule networks for the learning to diagnose problem.We also propose Spectral Capsule Networks, a novel variation of capsule networks, that converge faster than capsule network with EM routing.Spectral capsule networks consist of spatial coincidence filters that detect entities based on the alignment of extracted features on a one-dimensional linear subspace.Experiments on a public benchmark learning to diagnose dataset not only shows the success of capsule networks on this task, but also confirm the faster convergence of the spectral capsule networks.", "target": ["Ein neues Kapselnetz, das bei unseren Benchmark-Experimenten im Gesundheitswesen schneller konvergiert.", "Stellt eine Variante von Kapselnetzwerken vor, die anstelle von EM-Routing einen linearen Unterraum verwendet, der durch den dominanten Eigenvektor der gewichteten Abstimmungsmatrix der vorherigen Kapsel aufgespannt wird.", "Der Artikel schlägt eine verbesserte Routing-Methode, die Werkzeuge der Eigendekomposition verwendet, um Kapsel Aktivierung und Stellung zu finden."]} {"source": "One of the big challenges in machine learning applications is that training data can be different from the real-world data faced by the algorithm.In language modeling, users’ language (e.g. in private messaging) could change in a year and be completely different from what we observe in publicly available data.At the same time, public data can be used for obtaining general knowledge (i.e. general model of English).We study approaches to distributed fine-tuning of a general model on user private data with the additional requirements of maintaining the quality on the general data and minimization of communication costs.We propose a novel technique that significantly improves prediction quality on users’ language compared to a general model and outperforms gradient compression methods in terms of communication efficiency.The proposed procedure is fast and leads to an almost 70% perplexity reduction and 8.7 percentage point improvement in keystroke saving rate on informal English texts.Finally, we propose an experimental framework for evaluating differential privacy of distributed training of language models and show that our approach has good privacy guarantees.", "target": ["Wir schlagen eine Methode zum verteilten Fine-Tuning von Sprachmodellen auf Benutzergeräten ohne Erhebung privater Daten vor.", "Diese Arbeit befasst sich mit der Verbesserung von Sprachmodellen auf mobilen Geräten, die auf kleinen Textabschnitten basieren, die der Benutzer eingegeben hat, indem eine linear interpolierte Zielsetzung zwischen benutzerspezifischem Text und allgemeinem Englisch verwendet wird. 
"]} {"source": "We propose that approximate Bayesian algorithms should optimize a new criterion, directly derived from the loss, to calculate their approximate posterior which we refer to as pseudo-posterior.Unlike standard variational inference which optimizes a lower bound on the log marginal likelihood, the new algorithms can be analyzed to provide loss guarantees on the predictions with the pseudo-posterior.Our criterion can be used to derive new sparse Gaussian process algorithms that have error guarantees applicable to various likelihoods.", "target": ["Dieses Papier nutzt die Analyse von Lipschitz-Verlusten auf einem begrenzten Hypothesenraum, um neue ERM-Algorithmen mit starken Leistungsgarantien abzuleiten, die auf das nicht-konjugierte Sparse GP Modell angewendet werden können."]} {"source": "In this paper, we propose a novel regularization method, RotationOut, for neural networks. Different from Dropout that handles each neuron/channel independently, RotationOut regards its input layer as an entire vector and introduces regularization by randomly rotating the vector. RotationOut can also be used in convolutional layers and recurrent layers with a small modification.We further use a noise analysis method to interpret the difference between RotationOut and Dropout in co-adaptation reduction. Using this method, we also show how to use RotationOut/Dropout together with Batch Normalization. Extensive experiments in vision and language tasks are conducted to show the effectiveness of the proposed method. Codes will be available.", "target": ["Wir schlagen eine Regularisierungsmethode für neuronale Netze und eine Methode zur Störungsanalyse vor", "In diesem Beitrag wird eine neue Regularisierungsmethode vorgeschlagen, um das Problem der Überanpassung von tiefen neuronalen Netzen durch die Rotation von Merkmalen mit einer zufälligen Rotationsmatrix zu entschärfen, um die Co-Adaptation zu reduzieren.", "In diesem Beitrag wird eine neuartige Regularisierungsmethode für das Training neuronaler Netze vorgeschlagen, bei der Neuronen mit Rauschen in einer unabhängigen Weise hinzugefügt werden."]} {"source": "Formulating the reinforcement learning (RL) problem in the framework of probabilistic inference not only offers a new perspective about RL, but also yields practical algorithms that are more robust and easier to train.While this connection between RL and probabilistic inference has been extensively studied in the single-agent setting, it has not yet been fully understood in the multi-agent setup.In this paper, we pose the problem of multi-agent reinforcement learning as the problem of performing inference in a particular graphical model.We model the environment, as seen by each of the agents, using separate but related Markov decision processes.We derive a practical off-policy maximum-entropy actor-critic algorithm that we call Multi-agent Soft Actor-Critic (MA-SAC) for performing approximate inference in the proposed model using variational inference.MA-SAC can be employed in both cooperative and competitive settings.Through experiments, we demonstrate that MA-SAC outperforms a strong baseline on several multi-agent scenarios.While MA-SAC is one resultant multi-agent RL algorithm that can be derived from the proposed probabilistic framework, our work provides a unified view of maximum-entropy algorithms in the multi-agent setting.", "target": ["Ein probabilistischer Rahmen für Multi-Agenten Reinforcement Learning.", "Diese Arbeit schlägt einen neuen Algorithmus namens Multi-Agent Soft 
Actor-Critic (MA-SAC) vor, der auf dem Off-Policy Maximum-Entropy Actor-Critic Algorithmus Soft Actor-Critic (SAC) basiert."]} {"source": "Sorting input objects is an important step in many machine learning pipelines.However, the sorting operator is non-differentiable with respect to its inputs, which prohibits end-to-end gradient-based optimization.In this work, we propose NeuralSort, a general-purpose continuous relaxation of the output of the sorting operator from permutation matrices to the set of unimodal row-stochastic matrices, where every row sums to one and has a distinct argmax.This relaxation permits straight-through optimization of any computational graph involving a sorting operation.Further, we use this relaxation to enable gradient-based stochastic optimization over the combinatorially large space of permutations by deriving a reparameterized gradient estimator for the Plackett-Luce family of distributions over permutations.We demonstrate the usefulness of our framework on three tasks that require learning semantic orderings of high-dimensional objects, including a fully differentiable, parameterized extension of the k-nearest neighbors algorithm", "target": ["Wir bieten eine kontinuierliche Entspannung des Sortieroperators, die eine durchgängige, gradientenbasierte stochastische Optimierung ermöglicht.", "In diesem Beitrag wird untersucht, wie eine Reihe von Elementen sortiert werden kann, ohne dass deren tatsächliche Bedeutungen oder Werte explizit bekannt sind, und es wird eine Methode zur Durchführung der Optimierung mittels einer kontinuierlichen Entlastung vorgeschlagen.", "Diese Arbeit baut auf einer sum(top k)-Identität auf, um einen pfadweise differenzierbaren Prüfer von 'unimodalen zeilenstochastischen' Matrizen abzuleiten.", "Es wird eine kontinuierliche Entlastung des Sortieroperators eingeführt, um eine durchgängige gradientenbasierte Optimierung zu konstruieren, und es wird eine stochastische Erweiterung der Methode unter Verwendung von Plackett-Luce-Verteilungen und Monte Carlo eingeführt."]} {"source": "Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the area of global optimization algorithms.Readily available algorithms are typically designed to be universal optimizers and, thus, often suboptimal for specific tasks.We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian processes.Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency.We present experiments on a sim-to-real transfer task as well as on several simulated functions and two hyperparameter search problems.The results show that our algorithm (1) automatically identifies structural properties of objective functions from available source tasks or simulations, (2) performs favourably in settings with both scarce and abundant source data, and (3) falls back to the performance level of general AFs if no structure is present.", "target": ["Wir führen effizientes und flexibles Transferlernen im Framework der Bayes'schen Optimierung durch meta-gelernte neuronale Erfassungsfunktionen durch.", "Die Autoren stellen MetaBO vor, das Reinforcement Learning zum Meta-Lernen der Erfassungsfunktion für Bayes'sche Optimierung einsetzt und eine zunehmende
Effizienz der Stichprobe bei neuen Aufgaben zeigt.", "Die Autoren schlagen eine auf Meta-Learning basierende Alternative zu Standard-Erfassungsfunktionen (AFs) vor, bei der ein vortrainiertes neuronales Netz Erfassungswerte in Abhängigkeit von handverlesenen Merkmalen ausgibt."]} {"source": "We study the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory.The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information I(X;T) between the input X and internal representations T decreases.Several papers observe compression of estimated mutual information on different DNN models, but the true I(X;T) over these networks is provably either constant (discrete X) or infinite (continuous X).This work explains the discrepancy between theory and experiments, and clarifies what was actually measured by these past works.To this end, we introduce an auxiliary (noisy) DNN framework for which I(X;T) is a meaningful quantity that depends on the network's parameters.This noisy framework is shown to be a good proxy for the original (deterministic) DNN both in terms of performance and the learned representations.We then develop a rigorous estimator for I(X;T) in noisy DNNs and observe compression in various models.By relating I(X;T) in the noisy DNN to an information-theoretic communication problem, we show that compression is driven by the progressive clustering of hidden representations of inputs from the same class.Several methods to directly monitor clustering of hidden representations, both in noisy and deterministic DNNs, are used to show that meaningful clusters form in the T space.Finally, we return to the estimator of I(X;T) employed in past works, and demonstrate that while it fails to capture the true (vacuous) mutual information, it does serve as a measure for clustering.This clarifies the past observations of compression and isolates the geometric clustering of hidden representations as the true phenomenon of interest.", "target": ["Deterministische tiefe neuronale Netze verwerfen keine Informationen, aber sie clustern ihre Eingaben.", "Dieses Papier bietet einen prinzipiellen Weg, um die Kompressionsphrase in tiefen neuronalen Netzen zu untersuchen, indem es einen theoretisch fundierten Entropieschätzer zur Schätzung der gegenseitigen Information bereitstellt. 
"]} {"source": "A central challenge in multi-agent reinforcement learning is the induction of coordination between agents of a team.In this work, we investigate how to promote inter-agent coordination using policy regularization and discuss two possible avenues respectively based on inter-agent modelling and synchronized sub-policy selection.We test each approach in four challenging continuous control tasks with sparse rewards and compare them against three baselines including MADDPG, a state-of-the-art multi-agent reinforcement learning algorithm.To ensure a fair comparison, we rely on a thorough hyper-parameter selection and training methodology that allows a fixed hyper-parameter search budget for each algorithm and environment.We consequently assess both the hyper-parameter sensitivity, sample-efficiency and asymptotic performance of each learning method.Our experiments show that the proposed methods lead to significant improvements on cooperative problems.We further analyse the effects of the proposed regularizations on the behaviors learned by the agents.", "target": ["Wir schlagen Regularisierungsziele für Multi-Agenten-RL-Algorithmen vor, die die Koordination bei kooperativen Aufgaben fördern.", "In dieser Arebit werden zwei Methoden vorgeschlagen, um Agenten dazu zu bringen, koordinierte Verhaltensweisen zu erlernen, und beide werden in Multi-Agenten-Domänen von angemessener Komplexität rigoros evaluiert.", "In diesem Artikel werden zwei Methoden vorgeschlagen, die auf MADDPG aufbauen, um die Zusammenarbeit zwischen dezentralen MARL-Agenten zu fördern."]} {"source": "Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed from the language, visual, and acoustic modalities.The central challenge in multimodal learning involves inferring joint representations that can process and relate information from these modalities.However, existing work learns joint representations using multiple modalities as input and may be sensitive to noisy or missing modalities at test time.With the recent success of sequence to sequence models in machine translation, there is an opportunity to explore new ways of learning joint representations that may not require all input modalities at test time.In this paper, we propose a method to learn robust joint representations by translating between modalities.Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input.We augment modality translations with a cycle consistency loss to ensure that our joint representations retain maximal information from all modalities.Once our translation model is trained with paired multimodal data, we only need data from the source modality at test-time for prediction.This ensures that our model remains robust from perturbations or missing target modalities.We train our model with a coupled translation-prediction objective and it achieves new state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI, ICT-MMMO, and YouTube.Additional experiments show that our model learns increasingly discriminative joint representations with more input modalities while maintaining robustness to perturbations of all other modalities.", "target": ["Wir stellen ein Modell vor, das robuste gemeinsame Repräsentationen erlernt, indem es hierarchische zyklische Übersetzungen zwischen mehreren Modalitäten durchführt.", "In diesem Beitrag wird das Multimodal Cyclic 
Translation Network (MCTN) vorgestellt und für die multimodale Sentiment Analyse evaluiert."]} {"source": "The geometric properties of loss surfaces, such as the local flatness of a solution, are associated with generalization in deep learning.The Hessian is often used to understand these geometric properties.We investigate the differences between the eigenvalues of the neural network Hessian evaluated over the empirical dataset, the Empirical Hessian, and the eigenvalues of the Hessian under the data generating distribution, which we term the True Hessian.Under mild assumptions, we use random matrix theory to show that the True Hessian has eigenvalues of smaller absolute value than the Empirical Hessian.We support these results for different SGD schedules on both a 110-Layer ResNet and VGG-16.To perform these experiments we propose a framework for spectral visualization, based on GPU accelerated stochastic Lanczos quadrature.This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.", "target": ["Das Verständnis der Hessian Eigenwerte des neuronalen Netzes unter der datenerzeugenden Verteilung.", "Diese Arbeit analysiert das Spektrum der Hessian Matrix großer neuronaler Netze, mit einer Analyse der Max/Min-Eigenwerte und einer Visualisierung der Spektren unter Verwendung eines Lanczos-Quadratur-Ansatzes.", "In dieser Arbeit wird die Theorie der Zufallsmatrix verwendet, um die Spektrumverteilung der empirischen Hessian und der wahren Hessian für Deep Learning zu untersuchen, und es wird eine effiziente Methode zur Visualisierung des Spektrums vorgeschlagen."]} {"source": "Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input.Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text.In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models as well as pure graph models on a range of summarization tasks.", "target": ["Ein einfacher Trick zur Verbesserung von Sequenzmodellen: Kombinieren Sie sie mit einem Graphenmodell", "In diesem Beitrag wird ein strukturelles Zusammenfassungsmodell mit einem graphenbasierten Encoder vorgestellt, der auf RNN basiert.", "Diese Arbeit kombiniert Graph Neuronale Netze mit einem sequentiellen Ansatz zur abstrakten Zusammenfassung, der im Vergleich zu externen Baselines über alle Datensätze hinweg effektiv ist."]} {"source": "In probabilistic classification, a discriminative model based on Gaussian mixture exhibits flexible fitting capability.Nevertheless, it is difficult to determine the number of components.We propose a sparse classifier based on a discriminative Gaussian mixture model (GMM), which is named sparse discriminative Gaussian mixture (SDGM).In the SDGM, a GMM-based discriminative model is trained by sparse Bayesian learning.This learning algorithm improves the generalization capability by obtaining a sparse solution and automatically determines the number of components by removing redundant components.The SDGM can be embedded into neural networks (NNs) such as convolutional NNs and can be trained in an end-to-end manner.Experimental results indicated that 
the proposed method prevented overfitting by obtaining sparsity.Furthermore, we demonstrated that the proposed method outperformed a fully connected layer with the softmax function in certain cases when it was used as the last layer of a deep NN.", "target": ["Ein spärlicher Klassifikator auf der Grundlage eines diskriminativen Gaußschen Mischmodells, das auch in ein neuronales Netz eingebettet werden kann.", "Diese Arbeit stellt ein Gaußsches Mischungsmodell vor, das mit Hilfe von Gradientenabstiegsargumenten trainiert wird, die es ermöglichen, Spärlichkeit zu induzieren und die trainierbaren Modellschichtparameter zu reduzieren.", "In dieser Arbeit wird ein Klassifikator, genannt SDGM, vorgeschlagen, der auf einer diskriminativen Gaußschen Mischung und ihrer spärlichen Parameterschätzung basiert."]} {"source": "We recently observed that convolutional filters initialized farthest apart from each other using off-the-shelf pre-computed Grassmannian subspace packing codebooks performed surprisingly well across many datasets.Through this short paper, we’d like to disseminate some initial results in this regard in the hope that we stimulate the curiosity of the deep-learning community towards considering classical Grassmannian subspace packing results as a source of new ideas for more efficient initialization strategies.", "target": ["Initialisierung von Gewichten mit handelsüblichen Grassmann'schen Codebüchern, schnelleres Training und bessere Genauigkeit"]} {"source": "Domain adaptation is critical for success in new, unseen environments.Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts.Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs.We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model.CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings.We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.", "target": ["Ein unüberwachter Ansatz zur Bereichsanpassung, der sich sowohl auf der Pixel- als auch auf der Merkmalsebene anpasst", "Dieser Artikel schlägt einen Ansatz zur Domänenanpassung vor, indem es den CycleGAN um aufgabenspezifische Verlustfunktionen erweitert und den Verlust sowohl über Pixel als auch über Merkmale auferlegt. ", "In dieser Arbeit wird die Verwendung von CycleGANs für die Bereichsanpassung vorgeschlagen", "Diese Arbeit macht eine neuartige Erweiterung der bisherigen Arbeit auf CycleGAN durch die Kopplung mit adversarial Anpassungsansätzen, einschließlich einer neuen Funktion und semantischem Verlust in das übergeordnete Ziel der CycleGAN, mit klaren Vorteilen."]} {"source": "Stemming is the process of removing affixes (i.e.
prefixes, infixes and suffixes), which improves the accuracy and performance of information retrieval systems.This paper presents the reduction of Amharic words to their corresponding stems with the intention of preserving semantic information.The proposed approach efficiently removes affixes from an Amharic word.The process of removing such affixes (prefixes, infixes and suffixes) from a word to its base form is called stemming.While many stemmers exist for dominant languages such as English, under-resourced languages such as Amharic lack such powerful tool support.In this paper, we design a rule-based light Amharic stemmer that receives an Amharic word, matches the beginning of the word against the possible prefixes and its ending against the possible suffixes, and finally checks whether it contains an infix.The final result is the stem if there is any prefix, infix and/or suffix, otherwise it remains in one of the earlier states.The technique does not rely on any additional resource (e.g. dictionary) to verify the generated stem.The performance of the generated stemmer is evaluated using manually annotated Amharic words.The result is compared with the current state-of-the-art stemmer for Amharic, showing an increase of 7% in stemmer correctness.", "target": ["Amharic Light Stemmer wurde entwickelt, um die Leistung der Amharic Sentiment Classification zu verbessern.", "In diesem Beitrag wird das Stemming für morphologisch reichhaltige Sprachen mit einem leichten Stemmer untersucht, der Affixe nur so weit entfernt, dass die ursprüngliche semantische Information im Wort erhalten bleibt.", "In diesem Beitrag wird eine Technik zur Amharic Light Stemming vorgeschlagen, die eine Kaskade von Transformationen verwendet, die die Form standardisieren und Suffixe, Präfixe und Infixe entfernen."]} {"source": "Place and grid-cells are known to aid navigation in animals and humans.Together with concept cells, they allow humans to form an internal representation of the external world, namely the concept space.We investigate the presence of such a space in deep neural networks by plotting the activation profile of their hidden layer neurons.Although place cell and concept-cell like properties are found, grid-cell like firing patterns are absent, thereby indicating a lack of path integration or feature transformation functionality in trained networks.Overall, we present a plausible inadequacy in current deep learning practices that restrict deep networks from performing analogical reasoning and memory retrieval tasks.", "target": ["Wir untersuchten, ob einfache tiefe Netze über gitterzellenartige künstliche Neuronen verfügen, während der Gedächtnisabruf im erlernten Konzeptraum erfolgt."]} {"source": "We develop a comprehensive description of the active inference framework, as proposed by Friston (2010), under a machine-learning compliant perspective.Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies.
Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data.The pros and cons of the control policy are analyzed in detail, showing interesting promises in terms of processing compression.Though optimizing future posterior entropy over the action set is shown to be sufficient to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown to be better, as it saves processing costs through pre-processing of saccade pathways, with a negligible effect on the recognition/compression rates.", "target": ["Vor- und Nachteile der Sakkaden-basierten Computer Vision unter der Perspektive des voraussagenden Codens.", "Stellt einen rechnerischen Rahmen für das Problem des aktiven Sehens vor und erklärt, wie die Kontrollpolitik erlernt werden kann, um die Entropie der nachträglichen Überzeugung zu reduzieren."]} {"source": "Graphs possess exotic features like variable size and absence of natural ordering of the nodes that make them difficult to analyze and compare.To circumvent this problem and learn on graphs, graph feature representation is required.Main difficulties with feature extraction lie in the trade-off between expressiveness, consistency and efficiency, i.e. the capacity to extract features that represent the structural information of the graph while being deformation-consistent and isomorphism-invariant.While state-of-the-art methods enhance expressiveness with powerful graph neural-networks, we propose to leverage natural spectral properties of graphs to study a simple graph feature: the graph Laplacian spectrum (GLS).We analyze the representational power of this object, which satisfies isomorphism-invariance, expressiveness and deformation-consistency.In particular, we propose a theoretical analysis based on graph perturbation to understand what kind of comparison between graphs we perform when comparing GLS.To do so, we derive bounds for the distance between GLS that are related to the divergence to isomorphism, a standard computationally expensive graph divergence.Finally, we evaluate GLS as a graph representation through consistency tests and classification tasks, and show that it is a strong graph feature representation baseline.", "target": ["Wir untersuchen theoretisch die Konsistenz des Laplacian-Spektrums und verwenden es als Ganzgrapheneinbettung", "In diesem Beitrag wird das Laplacian-Spektrum eines Graphen als Mittel zur Erstellung einer Darstellung verwendet, die zum Vergleich und zur Klassifizierung von Graphen eingesetzt werden kann.", "In dieser Arbeit wird vorgeschlagen, das Graph-Laplacian-Spektrum zum Erlernen der Graphdarstellung zu verwenden."]} {"source": "Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient descent (PGD). In this paper, we make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach that was previously believed to be ineffective, rendering the method no more costly than standard training in practice. Specifically, we show that adversarial training with the fast gradient sign method (FGSM), when combined with random initialization, is as effective as PGD-based training but has significantly lower cost.
Furthermore, we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy at epsilon=8/255 in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at epsilon=2/255 in 12 hours, in comparison to past work based on ``free'' adversarial training which took 10 and 50 hours to reach the same respective thresholds.", "target": ["Das FGSM-basierte adversarial Training mit Randomisierung funktioniert genauso gut wie das PGD-basierte adversarial Training: Wir können damit einen robusten Klassifikator auf einem einzigen Rechner in 6 Minuten auf CIFAR10 und in 12 Stunden auf ImageNet trainieren.", "In diesem Artikel wird die Random+FGSM Methode überarbeitet, um robuste Modelle gegen starke PGD-Umgehungsangriffe schneller als frühere Methoden zu trainieren.", "Die Hauptbehauptung dieser Arbeit ist, dass eine einfache Strategie der Randomisierung plus Fast-Gradient-Sign-Methode (FGSM) adversariales Training zu robusten neuronalen Netzen führt."]} {"source": "In seeking sparse and efficient neural network models, many previous works investigated enforcing L1 or L0 regularizers to encourage weight sparsity during training.The L0 regularizer measures the parameter sparsity directly and is invariant to the scaling of parameter values.But it cannot provide useful gradients and therefore requires complex optimization techniques.The L1 regularizer is almost everywhere differentiable and can be easily optimized with gradient descent.Yet it is not scale-invariant and applies the same shrinking rate to all parameters, which is inefficient in increasing sparsity.Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant.Our experiments show that enforcing DeepHoyer regularizers can produce even sparser neural network models than previous works, under the same accuracy level.We also show that DeepHoyer can be applied to both element-wise and structural pruning.", "target": ["Wir schlagen fast überall differenzierbare und skaleninvariante Regularisierer für das DNN Pruning vor, die zu Supremum Sparsity durch Standard SGD Training führen können.", "Diese Arbeit schlägt einen skaleninvarianten Regularisierer (DeepHoyer) vor, der durch das Hoyer-Maß inspiriert ist, um Spärlichkeit in neuronalen Netzen zu erzwingen.
"]} {"source": "Self-supervision, in which a target task is improved without external supervision, has primarily been explored in settings that assume the availability of additional data.However, in many cases, particularly in healthcare, one may not have access to additional data (labeled or otherwise).In such settings, we hypothesize that self-supervision based solely on the structure of the data at-hand can help.We explore a novel self-supervision framework for time-series data, in which multiple auxiliary tasks (e.g., forecasting) are included to improve overall performance on a sequence-level target task without additional training data.We call this approach limited self-supervision, as we limit ourselves to only the data at-hand.We demonstrate the utility of limited self-supervision on three sequence-level classification tasks, two pertaining to real clinical data and one using synthetic data.Within this framework, we introduce novel forms of self-supervision and demonstrate their utility in improving performance on the target task.Our results indicate that limited self-supervision leads to a consistent improvement over a supervised baseline, across a range of domains.In particular, for the task of identifying atrial fibrillation from small amounts of electrocardiogram data, we observe a nearly 13% improvement in the area under the receiver operating characteristics curve (AUC-ROC) relative to the baseline (AUC-ROC=0.55 vs. AUC-ROC=0.62).Limited self-supervision applied to sequential data can aid in learning intermediate representations, making it particularly applicable in settings where data collection is difficult.", "target": ["Wir zeigen, dass zusätzliche unbeschriftete Daten für selbstüberwachte Hilfsaufgaben nicht erforderlich sind, um für die Klassifizierung von Zeitreihen nützlich zu sein, und stellen neue und effektive Hilfsaufgaben vor.", "In diesem Beitrag wird eine selbstüberwachte Methode für das Lernen aus Zeitreihendaten im Gesundheitswesen vorgeschlagen, bei der Hilfsaufgaben auf der Grundlage der internen Struktur der Daten entworfen werden, um mehr beschriftete Hilfsaufgaben für das Training zu erstellen.", "In diesem Artikel wird ein Ansatz für selbstüberwachtes Lernen auf Zeitreihen vorgeschlagen."]} {"source": "Are neural networks biased toward simple functions?Does depth always help learn more complex features?Is training the last layer of a network as good as training all layers?These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective.We will study the spectra of the *Conjugate Kernel, CK,* (also called the *Neural Network-Gaussian Process Kernel*), and the *Neural Tangent Kernel, NTK*.Roughly, the CK and the NTK tell us respectively ``\"what a network looks like at initialization\" and \"``what a network looks like during and after training.\"Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks.By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments of neural networks.We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks.We have open-sourced the code for it and for generating the plots in this paper at github.com/jxVmnLgedVwv6mNcGCBy/NNspectra.", "target": ["Eigenwerte von Conjugate (auch 
bekannt als NNGP) und Neural Tangent Kernel können in geschlossener Form über den Booleschen Würfel berechnet werden und zeigen die Auswirkungen von Hyperparametern auf die induktive Verzerrung, das Training und die Generalisierung neuronaler Netze.", "In diesem Beitrag wird eine Spektralanalyse des konjugierten Kernels neuronaler Netze und des neuronalen Tangentenkernels auf booleschen Würfeln durchgeführt, um zu klären, warum tiefe Netze auf einfache Funktionen ausgerichtet sind."]} {"source": "To communicate, to ground hypotheses, to analyse data, neuroscientists often refer to divisions of the brain.Here we consider atlases used to parcellate the brain when studying brain function.We discuss the meaning and the validity of these parcellations, from a conceptual point of view as well as by running various analytical tasks on popular functional brain parcellations.", "target": ["Alle funktionellen Gehirnparzellierungen sind falsch, aber einige sind nützlich."]} {"source": "High-dimensional sparse reward tasks present major challenges for reinforcement learning agents. In this work we use imitation learning to address two of these challenges: how to learn a useful representation of the world e.g. from pixels, and how to explore efficiently given the rarity of a reward signal?We show that adversarial imitation can work well even in this high dimensional observation space.Surprisingly the adversary itself, acting as the learned reward function, can be tiny, comprising as few as 128 parameters, and can be easily trained using the most basic GAN formulation.Our approach removes limitations present in most contemporary imitation approaches: requiring no demonstrator actions (only video), no special initial conditions or warm starts, and no explicit tracking of any single demo.The proposed agent can solve a challenging robot manipulation task of block stacking from only video demonstrations and sparse reward, in which the non-imitating agents fail to learn completely. 
Furthermore, our agent learns much faster than competing approaches that depend on hand-crafted, staged dense reward functions, and also better compared to standard GAIL baselines.Finally, we develop a new adversarial goal recognizer that in some cases allows the agent to learn stacking without any task reward, purely from imitation.", "target": ["Nachahmung aus Pixeln, mit spärlicher oder keiner Belohnung, unter Verwendung von Off-Policy-RL und einer winzigen, adversarisch erlernten Belohnungsfunktion.", "Das Papier schlägt die Verwendung eines \"minimalen Gegners\" beim generativen adversarial Imitationslernen in hochdimensionalen visuellen Räumen vor.", "Dieses Papier zielt darauf ab, das Problem der Schätzung von spärlichen Belohnungen in einer hochdimensionalen Eingabeumgebung zu lösen."]} {"source": "In this paper we show strategies to easily identify fake samples generated with the Generative Adversarial Network framework.One strategy is based on the statistical analysis and comparison of raw pixel values and features extracted from them.The other strategy learns formal specifications from the real data and shows that fake samples violate the specifications of the real data.We show that fake samples produced with GANs have a universal signature that can be used to identify fake samples.We provide results on MNIST, CIFAR10, music and speech data.", "target": ["Wir zeigen Strategien zur einfachen Identifizierung gefälschter Proben, die mit dem Generative Adversarial Network Framework erzeugt wurden.", "Es wird gezeigt, dass gefälschte Beispiele, die mit gängigen GAN-Implementierungen (Generative Adversarial Network) erstellt wurden, mit verschiedenen statistischen Verfahren leicht identifiziert werden können. ", "Diese Arbeit schlägt Statistiken vor, um gefälschte Daten, die mit GANs generiert wurden, auf der Grundlage einfacher marginaler Statistiken oder formaler Spezifikationen, die automatisch aus realen Daten generiert wurden, zu identifizieren."]} {"source": "Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators for partial sums in inner-product operations to preserve the quality of convergence.The absence of any framework to analyze the precision requirements of partial sum accumulations results in conservative design choices.This imposes an upper-bound on the reduction of complexity of multiply-accumulate units.We present a statistical approach to analyze the impact of reduced accumulation precision on deep learning training.Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation.We apply our analysis to three benchmark networks: CIFAR-10 ResNet 32, ImageNet ResNet 18 and ImageNet AlexNet.In each case, with accumulation precision set in accordance with our proposed equations, the networks successfully converge to the single precision floating-point baseline.We also show that reducing accumulation precision further degrades the quality of the trained network, proving that our equations produce tight bounds.Overall this analysis enables precise tailoring of computation hardware to the application, yielding area- and power-optimal systems.", "target": ["Wir stellen einen analytischen 
Rahmen vor, um die Anforderungen an die Akkumulations-Bitbreite in allen drei Deep Learning Trainings GEMMs zu bestimmen, und verifizieren die Gültigkeit und Dichtigkeit unserer Methode durch Benchmark Experimente.", "Die Autoren schlagen eine analytische Methode zur Vorhersage der Anzahl der Mantissenbits vor, die für partielle Summierungen für Convolutional und Fully-Connected Layers benötigt werden.", "Die Autoren führen eine gründliche Analyse der numerischen Präzision durch, die für die Akkumulationsoperationen beim Training neuronaler Netze erforderlich ist, und zeigen die theoretischen Auswirkungen einer Verringerung der Anzahl der Bits im Fließkomma-Akkumulator."]} {"source": "Unsupervised domain adaptation is a promising avenue to enhance the performance of deep neural networks on a target domain, using labels only from a source domain.However, the two predominant methods, domain discrepancy reduction learning and semi-supervised learning, are not readily applicable when source and target domains do not share a common label space.This paper addresses the above scenario by learning a representation space that retains discriminative power on both the (labeled) source and (unlabeled) target domains while keeping representations for the two domains well-separated.Inspired by a theoretical analysis, we first reformulate the disjoint classification task, where the source and target domains correspond to non-overlapping class labels, to a verification one.To handle both within and cross domain verifications, we propose a Feature Transfer Network (FTN) to separate the target feature space from the original source space while aligned with a transformed source space.Moreover, we present a non-parametric multi-class entropy minimization loss to further boost the discriminative power of FTNs on the target domain.In experiments, we first illustrate how FTN works in a controlled setting of adapting from MNIST-M to MNIST with disjoint digit classes between the two domains and then demonstrate the effectiveness of FTNs through state-of-the-art performances on a cross-ethnicity face recognition problem.", "target": ["Eine neue Theorie der unüberwachten Domänenanpassung für metrisches Distanzlernen und ihre Anwendung auf die Gesichtserkennung bei verschiedenen ethnischen Variationen.", "Schlägt ein neuartiges Merkmalstransfernetzwerk vor, das den Verlust der gegnerischen Domäne und den Verlust der Domänentrennung optimiert."]} {"source": "In this paper, we consider the problem of training neural networks (NN).To promote a NN with specific structures, we explicitly take into consideration the nonsmooth regularization (such as L1-norm) and constraints (such as interval constraint).This is formulated as a constrained nonsmooth nonconvex optimization problem, and we propose a convergent proximal-type stochastic gradient descent (Prox-SGD) algorithm.We show that under properly selected learning rates, momentum eventually resembles the unknown real gradient and thus is crucial in analyzing the convergence.We establish that with probability 1, every limit point of the sequence generated by the proposed Prox-SGD is a stationary point.Then the Prox-SGD is tailored to train a sparse neural network and a binary neural network, and the theoretical analysis is also supported by extensive numerical tests.", "target": ["Wir schlagen einen konvergenten stochastischen Gradientenabstiegsalgorithmus vom proximalen Typ für eingeschränkte, nicht-glatte, nicht-konvexe Optimierungsprobleme vor.", "Dieses Papier schlägt 
Prox-SGD vor, einen theoretischen Rahmen für stochastische Optimierungsalgorithmen, die asymptotisch zur Stationarität konvergieren für glatte nicht-konvexe Verluste + konvexe Constraints/Regulierer.", "Diese Arbeit schlägt einen neuen gradientenbasierten stochastischen Optimierungsalgorithmus mit Gradientenmittelung vor, indem die Theorie für proximale Algorithmen an die nicht-konvexe Umgebung angepasst wird."]} {"source": "The loss of a few neurons in a brain rarely results in any visible loss of function.However, the insight into what “few” means in this context is unclear.How many random neuron failures will it take to lead to a visible loss of function?In this paper, we address the fundamental question of the impact of the crash of a random subset of neurons on the overall computation of a neural network and the error in the output it produces.We study fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting.We give provable guarantees on the robustness of the network to these crashes.Our main contribution is a bound on the error in the output of a network under small random Bernoulli crashes proved by using a Taylor expansion in the continuous limit, where close-by neurons at a layer are similar.The failure mode we adopt in our model is characteristic of neuromorphic hardware, a promising technology to speed up artificial neural networks, as well as of biological networks.We show that our theoretical bounds can be used to compare the fault tolerance of different architectures and to design a regularizer improving the fault tolerance of a given architecture.We design an algorithm achieving fault tolerance using a reasonable number of neurons.In addition to the theoretical proof, we also provide experimental validation of our results and suggest a connection to the generalization capacity problem.", "target": ["Wir geben eine Schranke für NNs für den Ausgabefehler bei zufälligen Gewichtsausfällen an, indem wir eine Taylor-Erweiterung für den kontinuierlichen Grenzwert verwenden, bei dem benachbarte Neuronen ähnlich sind", "In diesem Beitrag wird das Problem des Herausfallens von Neuronen aus einem neuronalen Netz betrachtet. 
Es wird gezeigt, dass es ausreicht, mit Dropout zu trainieren, wenn das Ziel darin besteht, gegenüber zufällig herausfallenden Neuronen während der Auswertung robust zu werden.", "Dieser Beitrag untersucht die Auswirkung von Löschungen zufälliger Neuronen auf die Vorhersagegenauigkeit einer trainierten Architektur, mit der Anwendung auf die Fehleranalyse und den spezifischen Kontext neuromorpher Hardware."]} {"source": "Truly intelligent agents need to capture the interplay of all their senses to build a rich physical understanding of their world.In robotics, we have seen tremendous progress in using visual and tactile perception; however, we have often ignored a key sense: sound.This is primarily due to a lack of data that captures the interplay of action and sound.In this work, we perform the first large-scale study of the interactions between sound and robotic action.To do this, we create the largest available sound-action-vision dataset with 15,000 interactions on 60 objects using our robotic platform Tilt-Bot.By tilting objects and allowing them to crash into the walls of a robotic tray, we collect rich four-channel audio information.Using this data, we explore the synergies between sound and action, and present three key insights.First, sound is indicative of fine-grained object class information, e.g., sound can differentiate a metal screwdriver from a metal wrench.Second, sound also contains information about the causal effects of an action, i.e. given the sound produced, we can predict what action was applied to the object.Finally, object representations derived from audio embeddings are indicative of implicit physical properties.We demonstrate that on previously unseen objects, audio embeddings generated through interactions can predict forward models 24% better than passive visual embeddings.", "target": ["Wir erforschen und untersuchen die Synergien zwischen Klang und Aktion.", "In diesem Beitrag werden die Verbindungen zwischen Aktion und Ton durch die Erstellung eines Ton-Aktions-Sicht-Datensatzes mit einem Neigungsroboter untersucht.", "In diesem Beitrag wird die Rolle von Audio bei der Wahrnehmung von Objekten und Handlungen untersucht, und es wird gezeigt, wie auditive Informationen beim Erlernen von Modellen der Vorwärts- und Rückwärtsdynamik helfen können."]} {"source": "Hierarchical label structures widely exist in many machine learning tasks, ranging from those with explicit label hierarchies such as image classification to the ones that have latent label hierarchies such as semantic segmentation.Unfortunately, state-of-the-art methods often utilize cross-entropy loss, which implicitly assumes independence among class labels.Motivated by the fact that class members from the same hierarchy need to be similar to each other, we design a new training paradigm called Hierarchical Complement Objective Training (HCOT).In HCOT, in addition to maximizing the probability of the ground truth class, we also neutralize the probabilities of the rest of the classes in a hierarchical fashion, making the model take advantage of the label hierarchy explicitly.We evaluate our method on both image classification and semantic segmentation.Results show that HCOT outperforms state-of-the-art models on CIFAR100, Imagenet, and PASCAL-context.Our experiments also demonstrate that HCOT can be applied to tasks with latent label hierarchies, which is a common characteristic in many machine learning tasks.", "target": ["Wir schlagen Hierarchical Complement Objective Training vor, ein neuartiges
Trainingsparadigma zur effektiven Nutzung der Kategorienhierarchie im Beschriftungsraum sowohl bei der Bildklassifikation als auch bei der semantischen Segmentierung.", "Eine Methode, die die Entropie der posterioren Verteilung über die Klassen reguliert, was für Bildklassifizierungs- und Segmentierungsaufgaben nützlich sein kann"]} {"source": "There is a growing interest in automated neural architecture search (NAS).To improve the efficiency of NAS, previous approaches adopt a weight sharing method to force all models to share the same set of weights. However, it has been observed that a model performing better with shared weights does not necessarily perform better when trained alone.In this paper, we analyse existing weight sharing one-shot NAS approaches from a Bayesian point of view and identify the posterior fading problem, which compromises the effectiveness of shared weights.To alleviate this problem, we present a practical approach to guide the parameter posterior towards its true distribution.Moreover, a hard latency constraint is introduced during the search so that the desired latency can be achieved.The resulting method, namely Posterior Convergent NAS (PC-NAS), achieves state-of-the-art performance under a standard GPU latency constraint on ImageNet.In our small search space, our model PC-NAS-S attains 76.8% top-1 accuracy, 2.1% higher than MobileNetV2 (1.4x) with the same latency.When applied to our large search space, PC-NAS-L achieves 78.1% top-1 accuracy within 11ms.The discovered architecture also transfers well to other computer vision applications such as object detection and person re-identification.", "target": ["Diese Arbeit identifiziert das Problem der bestehenden Gewichtsteilung bei der Suche nach neuronalen Architekturen und schlägt eine praktische Methode vor, die gute Ergebnisse erzielt.", "Der Autor identifiziert ein Problem mit NAS, das als Posterior Fading bezeichnet wird, und führt Posterior Convergent NAS ein, um diesen Effekt abzuschwächen."]} {"source": "Noisy labels are very common in real-world training data, which lead to poor generalization on test data because of overfitting to the noisy labels.In this paper, we claim that such overfitting can be avoided by \"early stopping\" training a deep neural network before the noisy labels are severely memorized.Then, we resume training the early stopped network using a \"maximal safe set,\" which maintains a collection of almost certainly true-labeled samples at each epoch since the early stop point.Putting them all together, our novel two-phase training method, called Prestopping, realizes noise-free training under any type of label noise for practical use.Extensive experiments using four image benchmark data sets verify that our method significantly outperforms four state-of-the-art methods in test error by 0.4–8.2 percentage points in the presence of real-world noise.", "target": ["Wir schlagen einen neuartigen zweistufigen Trainingsansatz vor, der auf einem \"frühen Stoppen\" für robustes Training auf störhaften Etiketten basiert.", "In dem Papier wird untersucht, wie frühes Stoppen bei der Optimierung hilft, sichere Beispiele zu finden.", "In diesem Papier wird eine zweistufige Trainingsmethode für das Lernen mit störhaften Labels vorgeschlagen."]} {"source": "Learning when to communicate and doing that effectively is essential in multi-agent tasks.Recent works show that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but have been restricted to fully-cooperative
tasks.In this paper, we present Individualized Controlled Continuous Communication Model (IC3Net) which has better training efficiency than simple continuous communication model, and can be applied to semi-cooperative and competitive settings along with the cooperative settings.IC3Net controls continuous communication with a gating mechanism and uses individualized rewards for each agent to gain better performance and scalability while fixing credit assignment issues.Using variety of tasks including StarCraft BroodWars explore and combat scenarios, we show that our network yields improved performance and convergence rates than the baselines as the scale increases.Our results convey that IC3Net agents learn when to communicate based on the scenario and profitability.", "target": ["Wir stellen IC3Net vor, ein einzelnes Netzwerk, das zum Trainieren von Agenten in kooperativen, kompetitiven und gemischten Szenarien verwendet werden kann. Wir zeigen auch, dass Agenten mit unserem Modell lernen können, wann sie kommunizieren sollen.", "Der Autor schlägt eine neue Architektur für Multi-Agenten Reinforcement Learning vor, die mehrere LSTM Controller mit verbundenen Gewichten verwendet, die sich gegenseitig einen kontinuierlichen Vektor übermitteln.", "Die Autoren schlagen ein interessantes Gating-Schema vor, das es den Agenten erlaubt, in einer Multi-Agenten RL Umgebung zu kommunizieren."]} {"source": "Neural sequence-to-sequence models are a recently proposed family of approaches used in abstractive summarization of text documents, useful for producing condensed versions of source text narratives without being restricted to using only words from the original text.Despite the advances in abstractive summarization, custom generation of summaries (e.g. towards a user's preference) remains unexplored.In this paper, we present CATS, an abstractive neural summarization model, that summarizes content in a sequence-to-sequence fashion but also introduces a new mechanism to control the underlying latent topic distribution of the produced summaries.Our experimental results on the well-known CNN/DailyMail dataset show that our model achieves state-of-the-art performance.", "target": ["Wir stellen das erste neuronale abstrakte Zusammenfassungsmodell vor, das in der Lage ist, die generierten Zusammenfassungen individuell anzupassen."]} {"source": "We propose a software framework based on ideas of the Learning-Compression algorithm, that allows one to compress any neural network by different compression mechanisms (pruning, quantization, low-rank, etc.).By design, the learning of the neural net (handled by SGD) is decoupled from the compression of its parameters (handled by a signal compression function), so that the framework can be easily extended to handle different combinations of neural net and compression type.In addition, it has other advantages, such as easy integration with deep learning frameworks, efficient training time, competitive practical performance in the loss-compression tradeoff, and reasonable convergence guarantees.Our toolkit is written in Python and Pytorch and we plan to make it available by the workshop time, and eventually open it for contributions from the community.", "target": ["Wir schlagen ein Software-Framework vor, das auf den Ideen des Lern-Kompressions-Algorithmus basiert, der es erlaubt, jedes neuronale Netzwerk durch verschiedene Kompressionsmechanismen (Pruning, Quantisierung, Low-Rank, etc.)
zu komprimieren.", "In diesem Beitrag wird der Entwurf einer Softwarebibliothek vorgestellt, die es dem Benutzer erleichtert, seine Netzwerke zu komprimieren, indem sie die Details der Kompressionsmethoden verbirgt."]} {"source": "This work seeks the possibility of generating the human face from voice solely based on the audio-visual data without any human-labeled annotations.To this end, we propose a multi-modal learning framework that links the inference stage and generation stage.First, the inference networks are trained to match the speaker identity between the two different modalities.Then the pre-trained inference networks cooperate with the generation network by giving conditional information about the voice.", "target": ["In diesem Beitrag wird eine Methode zur durchgängigen multimodalen Generierung von menschlichen Gesichtern aus Sprache vorgeschlagen, die auf einem selbstüberwachten Lernsystem basiert.", "In diesem Artikel wird ein multimodales Learning Framework vorgestellt, das die Inferenzphase und die Generierungsphase miteinander verbindet, auf der Suche nach der Möglichkeit, das menschliche Gesicht ausschließlich aus der Stimme zu generieren.", "Diese Arbeit zielt darauf ab, einen bedingten Rahmen für die Erzeugung von Gesichtsbildern aus Audiosignalen zu schaffen. "]} {"source": "We present a simple neural model that given a formula and a property tries to answer the question whether the formula has the given property, for example whether a propositional formula is always true.The structure of the formula is captured by a feedforward neural network recursively built for the given formula in a top-down manner.The results of this network are then processed by two recurrent neural networks.One of the interesting aspects of our model is how propositional atoms are treated.For example, the model is insensitive to their names, it only matters whether they are the same or distinct.", "target": ["Es wird ein Top-Down-Ansatz zur rekursiven Darstellung von Satzformeln durch neuronale Netze vorgestellt.", "Diese Arbeit bietet ein neues neuronales Netzmodell für logische Formeln, das Informationen über eine gegebene Formel sammelt, indem es ihren Parse-Baum von oben nach unten durchläuft.", "Der Beitrag verfolgt den Weg eines baumstrukturierten Netzwerks, das isomorph zum Parse-Baum einer propositionalen Kalkülformel ist, aber Informationen von oben nach unten und nicht von unten nach oben weitergibt."]} {"source": "Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games.We identify three key challenges that any algorithm needs to master in order to perform well on all games: processing diverse reward distributions, reasoning over long time horizons, and exploring efficiently. 
In this paper, we propose an algorithm that addresses each of these challenges and is able to learn human-level policies on nearly all Atari games.A new transformed Bellman operator allows our algorithm to process rewards of varying densities and scales; an auxiliary temporal consistency loss allows us to train stably using a discount factor of 0.999 (instead of 0.99) extending the effective planning horizon by an order of magnitude; and we ease the exploration problem by using human demonstrations that guide the agent towards rewarding states.When tested on a set of 42 Atari games, our algorithm exceeds the performance of an average human on 40 games using a common set of hyper parameters.", "target": ["Ape-X DQfD = Distributed (many actors + one learner + prioritized replay) DQN mit Demonstrationen zur Optimierung der unclipped 0,999-discounted return auf Atari.", "Diese Arbeit schlägt drei Erweiterungen (Bellman-Update, temporaler Konsistenzverlust und Expertendemonstration) für DQN vor, um die Lernleistung bei Atari-Spielen zu verbessern und übertrifft damit die State-of-the-Art-Ergebnisse für Atari-Spiele.", "In dieser Arbeit wird ein transformierter Bellman-Operator vorgeschlagen, der darauf abzielt, die Sensitivität gegenüber nicht abgegrenzter Belohnung, die Robustheit gegenüber dem Wert des Diskontierungsfaktors und das Explorationsproblem zu lösen."]} {"source": "The knowledge that humans hold about a problem often extends far beyond a set of training data and output labels.While the success of deep learning mostly relies on supervised training, important properties cannot be inferred efficiently from end-to-end annotations alone, for example causal relations or domain-specific invariances.We present a general technique to supplement supervised training with prior knowledge expressed as relations between training instances.We illustrate the method on the task of visual question answering to exploit various auxiliary annotations, including relations of equivalence and of logical entailment between questions.Existing methods to use these annotations, including auxiliary losses and data augmentation, cannot guarantee the strict inclusion of these relations into the model since they require a careful balancing against the end-to-end objective.Our method uses these relations to shape the embedding space of the model, and treats them as strict constraints on its learned representations. The resulting model encodes relations that better generalize across instances.In the context of VQA, this approach brings significant improvements in accuracy and robustness, in particular over the common practice of incorporating the constraints as a soft regularizer.We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently from the amount of supervised data used.It demonstrates the value of an additional training signal that is otherwise difficult to extract from end-to-end annotations alone.", "target": ["Trainingsmethode zur Erzwingung strenger Beschränkungen für gelernte Einbettungen während des überwachten Trainings.
Angewandt auf visuelles Fragen beantworten.", "Die Autoren schlagen ein Framework vor, um zusätzliches semantisches Vorwissen in das traditionelle Training von Deep-Learning-Modellen einzubeziehen, um den Einbettungsraum anstelle des Parameterraums zu regularisieren.", "Der Beitrag plädiert für die Kodierung von externem Wissen in der linguistischen Einbettungsschicht eines multimodalen neuronalen Netzes in Form einer Reihe von harten Einschränkungen."]} {"source": "Artificial neural networks revolutionized many areas of computer science in recent years since they provide solutions to a number of previously unsolved problems.On the other hand, for many problems, classic algorithms exist, which typically exceed the accuracy and stability of neural networks.To combine these two concepts, we present a new kind of neural networks—algorithmic neural networks (AlgoNets).These networks integrate smooth versions of classic algorithms into the topology of neural networks.Our novel reconstructive adversarial network (RAN) enables solving inverse problems without or with only weak supervision.", "target": ["Lösung inverser Probleme durch Verwendung glatter Approximationen der Forward Algorithmen zum Trainieren der inversen Modelle."]} {"source": "Pointwise localization allows more precise localization and accurate interpretability, compared to bounding box, in applications where objects are highly unstructured such as in medical domain.In this work, we focus on weakly supervised localization (WSL) where a model is trained to classify an image and localize regions of interest at pixel-level using only global image annotation.Typical convolutional attentions maps are prune to high false positive regions.To alleviate this issue, we propose a new deep learning method for WSL, composed of a localizer and a classifier, where the localizer is constrained to determine relevant and irrelevant regions using conditional entropy (CE) with the aim to reduce false positive regions.Experimental results on a public medical dataset and two natural datasets, using Dice index, show that, compared to state of the art WSL methods, our proposal can provide significant improvements in terms of image-level classification and pixel-level localization (low false positive) with robustness to overfitting.A public reproducible PyTorch implementation is provided.", "target": ["Eine Deep Learning Methode für schwach überwachte punktuelle Lokalisierung, die nur auf der Bildebene lernt. 
Es stützt sich auf die bedingte Entropie zur Lokalisierung relevanter und irrelevanter Regionen mit dem Ziel, falsch positive Regionen zu minimieren.", "In dieser Arbeit wird das Problem der WSL mit Hilfe eines neuartigen Designs von Regularisierungsbegriffen und eines rekursiven Löschalgorithmus untersucht.", "In diesem Beitrag wird ein neuer, schwach überwachter Ansatz zum Erlernen der Objektsegmentierung mit Klassenbezeichnungen auf Bildebene vorgestellt."]} {"source": "Model-based reinforcement learning has been empirically demonstrated as a successful strategy to improve sample efficiency.Particularly, Dyna architecture, as an elegant model-based architecture integrating learning and planning, provides huge flexibility of using a model.One of the most important components in Dyna is called search-control, which refers to the process of generating state or state-action pairs from which we query the model to acquire simulated experiences.Search-control is critical to improve learning efficiency.In this work, we propose a simple and novel search-control strategy by searching high frequency region on value function.Our main intuition is built on Shannon sampling theorem from signal processing, which indicates that a high frequency signal requires more samples to reconstruct.We empirically show that a high frequency function is more difficult to approximate.This suggests a search-control strategy: we should use states in high frequency region of the value function to query the model to acquire more samples.We develop a simple strategy to locally measure the frequency of a function by gradient norm, and provide theoretical justification for this approach.We then apply our strategy to search-control in Dyna, and conduct experiments to show its property and effectiveness on benchmark domains.", "target": ["Erfassen von Zuständen aus dem Hochfrequenzbereich für die Suchkontrolle in Dyna.", "Die Autoren schlagen vor, die Probenentnahme im Hochfrequenzbereich durchzuführen, um die Effizienz der Probenentnahme zu erhöhen.", "In dieser Arbeit wird ein neuer Weg vorgeschlagen, um Zustände auszuwählen, von denen aus Übergänge im Dyna-Algorithmus durchgeführt werden."]} {"source": "We propose a new architecture for distributed image compression from a group of distributed data sources.The work is motivated by practical needs of data-driven codec design, low power consumption, robustness, and data privacy.The proposed architecture, which we refer to as Distributed Recurrent Autoencoder for Scalable Image Compression (DRASIC), is able to train distributed encoders and one joint decoder on correlated data sources.Its compression capability is much better than the method of training codecs separately.Meanwhile, for 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio (PSNR) of that of a single codec trained with all data sources.We experiment distributed sources with different correlations and show how our methodology well matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC).Our method is also shown to be robust to the lack of presence of encoded data from a number of distributed sources.Moreover, it is scalable in the sense that codes can be decoded simultaneously at more than one compression quality level.To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with deep learning.", "target": ["Wir stellen ein datengesteuertes Distributed Source Coding Framework vor, das auf 
Distributed Recurrent Autoencoder for Scalable Image Compression (DRASIC) basiert.", "In dem Artikel wird ein verteilter rekurrenter Autoencoder für die Bildkomprimierung vorgeschlagen, der ein ConvLSTM verwendet, um binäre Codes zu lernen, die nach und nach aus den Residuals der zuvor kodierten Informationen aufgebaut werden", "Die Autoren schlagen eine Methode zum Trainieren von Bildkompressionsmodellen auf mehreren Quellen vor, mit einem separaten Encoder für jede Quelle und einem gemeinsamen Decoder."]} {"source": "Long short-term memory networks (LSTMs) were introduced to combat vanishing gradients in simple recurrent neural networks (S-RNNs) by augmenting them with additive recurrent connections controlled by gates.We present an alternate view to explain the success of LSTMs: the gates themselves are powerful recurrent models that provide more representational power than previously appreciated.We do this by showing that the LSTM's gates can be decoupled from the embedded S-RNN, producing a restricted class of RNNs where the main recurrence computes an element-wise weighted sum of context-independent functions of the inputs.Experiments on a range of challenging NLP problems demonstrate that the simplified gate-based models work substantially better than S-RNNs, and often just as well as the original LSTMs, strongly suggesting that the gates are doing much more in practice than just alleviating vanishing gradients.", "target": ["Gates übernehmen in LSTMs die ganze schwere Arbeit, indem sie elementweise gewichtete Summen berechnen, und das Entfernen des internen einfachen RNN verschlechtert die Leistung des Modells nicht.", "In diesem Beitrag wird eine vereinfachte LSTM Variante vorgeschlagen, bei der die Nichtlinearität von Inhaltselement und Ausgangs Gate entfernt wird.", "Diese Arbeit präsentiert eine Analyse von LSTMs, die zeigt, dass der Inhalt der Speicherzelle bei jedem Schritt eine gewichtete Kombination der zu jedem Zeitschritt berechneten Inhaltswerte ist, und bietet eine Vereinfachung von LSTMs, bei der dieser Wert als deterministische Funktion der Eingabe anstatt als Funktion der Eingabe und des aktuellen Kontexts berechnet wird.", "Diese Arbeit schlägt eine neue Einsicht in LSTM vor, bei der der Kern eine elementweise gewichtete Summe ist, und argumentiert, dass LSTM redundant ist, indem nur Eingabe- und Vergessensgatter zur Berechnung der Gewichte beibehalten werden."]} {"source": "Machine learning algorithms designed to characterize, monitor, and intervene on human health (ML4H) are expected to perform safely and reliably when operating at scale, potentially outside strict human supervision.This requirement warrants a stricter attention to issues of reproducibility than other fields of machine learning.In this work, we conduct a systematic evaluation of over 100 recently published ML4H research papers along several dimensions related to reproducibility we identified.We find that the field of ML4H compares poorly to more established machine learning fields, particularly concerning data accessibility and code accessibility.
Finally, drawing from success in other fields of science, we propose recommendations to data providers, academic publishers, and the ML4H research community in order to promote reproducible research moving forward.", "target": ["Bei der Analyse von mehr als 300 Beiträgen auf aktuellen Konferenzen zum maschinellen Lernen haben wir festgestellt, dass Anwendungen des maschinellen Lernens für die Gesundheit (ML4H) in Bezug auf die Reproduzierbarkeit hinter anderen Bereichen des maschinellen Lernens zurückbleiben.", "In diesem Beitrag wird der Stand der Reproduzierbarkeit für ML-Anwendungen im Gesundheitswesen quantitativ und qualitativ untersucht und es werden Empfehlungen für eine bessere Reproduzierbarkeit der Forschung gegeben."]} {"source": "We propose a solution for evaluation of mathematical expression.However, instead of designing a single end-to-end model we propose a Lego bricks style architecture.In this architecture instead of training a complex end-to-end neural network, many small networks can be trained independently each accomplishing one specific operation and acting a single lego brick.More difficult or complex task can then be solved using a combination of these smaller network.In this work we first identify 8 fundamental operations that are commonly used to solve arithmetic operations (such as 1 digit multiplication, addition, subtraction, sign calculator etc).These fundamental operations are then learned using simple feed forward neural networks.We then shows that different operations can be designed simply by reusing these smaller networks.As an example we reuse these smaller networks to develop larger and a more complex network to solve n-digit multiplication, n-digit division, and cross product.This bottom-up strategy not only introduces reusability, we also show that it allows to generalize for computations involving n-digits and we show results for up to 7 digit numbers.Unlike existing methods, our solution also generalizes for both positive as well as negative numbers.", "target": ["Wir trainieren viele kleine Netze, jedes für eine bestimmte Operation, die dann kombiniert werden, um komplexe Operationen durchzuführen.", "In diesem Artikel wird vorgeschlagen, neuronale Netze zur Auswertung mathematischer Ausdrücke zu verwenden, indem 8 kleine Bausteine für 8 grundlegende Operationen, z. B. Addition, Subtraktion usw., entworfen werden und dann mehrstellige Multiplikationen und Divisionen mit diesen kleinen Bausteinen entworfen werden.", "In dem Papier wird eine Methode zur Entwicklung eines NN-basierten Bewertungssystems für mathematische Ausdrücke vorgeschlagen."]} {"source": "In standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real.The generator is trained to increase the probability that fake data is real.We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs.
We show that this property can be induced by using a relativistic discriminator which estimate the probability that the given real data is more realistic than a randomly sampled fake data.We also present a variant in which the discriminator estimate the probability that the given real data is more realistic than fake data, on average.We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs).We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) Standard RaGAN with gradient penalty generate data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state-of-the-art by 400%), and 3) RaGANs are able to generate plausible high resolutions images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.The code is freely available on https://github.com/AlexiaJM/RelativisticGAN.", "target": ["Verbesserung der Qualität und Stabilität von GANs unter Verwendung eines relativistischen Diskriminators; IPM-GANs (wie WGAN-GP) sind ein Spezialfall.", "Diese Arbeit schlägt einen relativistischen Diskriminator vor, der in einigen Situationen hilft, obwohl er etwas empfindlich auf Hyperparameter, Architekturen und Datensätze reagiert.", "In dieser Arbeit betrachten die Autoren eine Variante des GAN, indem sie gleichzeitig die Wahrscheinlichkeit verringern, dass die Daten für den Generator real sind."]} {"source": "Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting.However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse.As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function.We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters.On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported.V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.", "target": ["Eine auf Zustandswertfunktionen basierende Version von MPO, die gute Ergebnisse bei einer Vielzahl von Aufgaben in der diskreten und kontinuierlichen Steuerung erzielt.", "In diesem Beitrag wird ein Algorithmus für On-Policy Reinforcement Learning vorgestellt, der sowohl kontinuierliche/diskrete Steuerung als auch
Single-/Multi-Task-Lernen ermöglicht und sowohl niedrigdimensionale Zustände als auch Pixel verwendet.", "Das Papier schlägt eine Online-Variante von MPO, V-MPO, vor, die die V-Funktion erlernt und die nicht-parametrische Verteilung in Richtung der Vorteile aktualisiert."]} {"source": "Turing complete computation and reasoning are often regarded as necessary precursors to general intelligence.There has been a significant body of work studying neural networks that mimic general computation, but these networks fail to generalize to data distributions that are outside of their training set.We study this problem through the lens of fundamental computer science problems: sorting and graph processing.We modify the masking mechanism of a transformer in order to allow them to implement rudimentary functions with strong generalization.We call this model the Neural Execution Engine, and show that it learns, through supervision, to numerically compute the basic subroutines comprising these algorithms with near perfect accuracy.Moreover, it retains this level of accuracy while generalizing to unseen data and long sequences outside of the training distribution.", "target": ["Wir schlagen neuronale Ausführungs Engines (NEEs) vor, die eine gelernte Maske und überwachte Ausführungsspuren nutzen, um die Funktionalität von Unterprogrammen zu imitieren und eine starke Generalisierung zu zeigen.", "In diesem Beitrag wird das Problem des Aufbaus einer Programmausführungs Engine mit neuronalen Netzen untersucht und ein Transformer-basiertes Modell zum Erlernen grundlegender Unterprogramme vorgeschlagen, das in mehreren Standardalgorithmen angewendet wird.", "Diese Arbeit befasst sich mit dem Problem des Entwurfs neuronaler Netzwerkarchitekturen, die allgemeine Programme lernen und implementieren können."]} {"source": "Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks.However, the meta-learning literature thus far has focused on the task segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task.In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task.We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme.The framework allows both training and testing directly on time series data without segmenting it into discrete tasks.We demonstrate the utility of this approach on a nonlinear meta-regression benchmark as well as two meta-image-classification benchmarks.", "target": ["Die Bayes'sche Changepoint-Erkennung ermöglicht Meta-Learning direkt aus Zeitreihendaten.", "Diese Arbeit betrachtet Meta-Lernen in einer nicht in Aufgaben segmentierten Umgebung und wendet Bayesian Online Änderungspunkt Erkennung mit Meta-Lernen an.", "In diesem Beitrag wird das Meta-Lernen auf nicht segmentierte Aufgaben ausgedehnt, wobei das MOCA Framework ein Bayessches Changepoint Schätzschema zur Erkennung von Aufgabenänderungen verwendet."]} {"source": "People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms.These algorithms shift some of the sounds from the high frequency band to the lower frequency band where the sounds become more
perceptible for the people with the condition.Fricative phonemes have an important part of their content concentrated in high frequency bands.It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme, and kept off at all other times.Therefore, timely (with zero delay) and accurate fricative phoneme detection is a key problem for high quality hearing aids.In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus.All reported results are reproducible and come with easy to use code that could serve as a baseline for future research.", "target": ["Ein auf tiefem Lernen basierender Ansatz zur Erkennung von fricative Phoneme ohne Verzögerung.", "In diesem Beitrag werden Methoden des überwachten tiefen Lernens angewandt, um die exakte Dauer eines fricative Phoneme zu erkennen und so den Algorithmus zur praktischen Frequenzabsenkung zu verbessern."]} {"source": "Sequence-to-sequence models with soft attention have been successfully applied to a wide variety of problems, but their decoding process incurs a quadratic time and space cost and is inapplicable to real-time sequence transduction.To address these issues, we propose Monotonic Chunkwise Attention (MoChA), which adaptively splits the input sequence into small chunks over which soft attention is computed.We show that models utilizing MoChA can be trained efficiently with standard backpropagation while allowing online and linear-time decoding at test time.When applied to online speech recognition, we obtain state-of-the-art results and match the performance of a model using an offline soft attention mechanism.In document summarization experiments where we do not expect monotonic alignments, we show significantly improved performance compared to a baseline monotonic attention-based model.", "target": ["Ein Online- und Linearzeit-Aufmerksamkeitsmechanismus, der schwache Aufmerksamkeit über adaptiv platzierte Teile der Eingabesequenz ausübt.", "In diesem Beitrag wird eine kleine Modifikation der monotonen Aufmerksamkeit in [1] vorgeschlagen, indem dem durch die monotone Aufmerksamkeit vorhergesagten Segment eine schwache Aufmerksamkeit hinzugefügt wird.", "Die Arbeit schlägt eine Erweiterung eines früheren monotonen Aufmerksamkeitsmodells (Raffel et al. 
2017) vor, um ein Fenster fester Größe bis zur Ausrichtungsposition zu beachten."]} {"source": "We present a framework for automatically ordering image patches that enables in-depth analysis of dataset relationship to learnability of a classification task using convolutional neural network.An image patch is a group of pixels residing in a continuous area contained in the sample.Our preliminary experimental results show that an informed smart shuffling of patches at a sample level can expedite training by exposing important features at early stages of training.In addition, we conduct systematic experiments and provide evidence that CNN’s generalization capabilities do not correlate with human recognizable features present in training samples.We utilized the framework not only to show that spatial locality of features within samples do not correlate with generalization, but also to expedite convergence while achieving similar generalization performance.Using multiple network architectures and datasets, we show that ordering image regions using mutual information measure between adjacent patches, enables CNNs to converge in a third of the total steps required to train the same network without patch ordering.", "target": ["Entwicklung neuer Techniken, die sich auf die Neuordnung von Bereichen stützen, um eine detaillierte Analyse der Beziehung zwischen Datensatz und Trainings- und Generalisierungsleistung zu ermöglichen."]} {"source": "Producing agents that can generalize to a wide range of environments is a significant challenge in reinforcement learning.One method for overcoming this issue is domain randomization, whereby at the start of each training episode some parameters of the environment are randomized so that the agent is exposed to many possible variations.However, domain randomization is highly inefficient and may lead to policies with high variance across domains.In this work, we formalize the domain randomization problem, and show that minimizing the policy's Lipschitz constant with respect to the randomization parameters leads to low variance in the learned policies.We propose a method where the agent only needs to be trained on one variation of the environment, and its learned state representations are regularized during training to minimize this constant.We conduct experiments that demonstrate that our technique leads to more efficient and robust learning than standard domain randomization, while achieving equal generalization scores.", "target": ["Wir produzieren Reinforcement Learning Agenten, die sich mit Hilfe einer neuartigen Regularisierungstechnik gut auf eine Vielzahl von Umgebungen verallgemeinern lassen.", "Diese Arbeit stellt die Herausforderung der hohen Varianzregelungen bei der Domänenrandomisierung für Reinforcement Learning vor und konzentriert sich hauptsächlich auf das Problem der visuellen Randomisierung, bei der sich die verschiedenen randomisierten Domänen nur im Zustandsraum unterscheiden und die zugrunde liegenden Belohnungen und Dynamiken gleich sind.", "Um die Generalisierungsfähigkeit von Deep-RL-Agenten bei Aufgaben mit unterschiedlichen visuellen Mustern zu verbessern, wurde in diesem Beitrag eine einfache Regularisierungstechnik für die Domänenrandomisierung vorgeschlagen."]} {"source": "Claims from the fields of network neuroscience and connectomics suggest that topological models of the brain involving complex networks are of particular use and interest.The field of deep neural networks has mostly left inspiration from these claims out.In this 
paper, we propose three architectures and use each of them to explore the intersection of network neuroscience and deep learning in an attempt to bridge the gap between the two fields.Using the teachings from network neuroscience and connectomics, we show improvements over the ResNet architecture, we show a possible connection between early training and the spectral properties of the network, and we show the trainability of a DNN based on the neuronal network of C.Elegans.", "target": ["Wir erforschen den Schnittpunkt von Netzwerk Neurowissenschaften und Deep Learning. "]} {"source": "Creating a knowledge base that is accurate, up-to-date and complete remains a significant challenge despite substantial efforts in automated knowledge base construction. In this paper, we present Alexandria -- a system for unsupervised, high-precision knowledge base construction.Alexandria uses a probabilistic program to define a process of converting knowledge base facts into unstructured text. Using probabilistic inference, we can invert this program and so retrieve facts, schemas and entities from web text.The use of a probabilistic program allows uncertainty in the text to be propagated through to the retrieved facts, which increases accuracy and helps merge facts from multiple sources.Because Alexandria does not require labelled training data, knowledge bases can be constructed with the minimum of manual input.We demonstrate this by constructing a high precision (typically 97\\%+) knowledge base for people from a single seed fact.", "target": ["In diesem Beitrag wird ein System für die unüberwachte, hochpräzise Konstruktion von Knowledge Bases vorgestellt, das ein probabilistisches Programm verwendet, um einen Prozess der Umwandlung von Fakten aus der Knowledge Base in unstrukturierten Text zu definieren.", "Überblick über die bestehende Knowledge Base, die mit einem probabilistischen Modell erstellt wird, wobei der Ansatz zur Erstellung der Knowledge Base im Vergleich zu anderen Knowledge Base Ansätzen wie YAGO2, NELL, Knowledge Vault und DeepDive bewertet wird.", "Diese Arbeit verwendet ein probabilistisches Programm, das den Prozess beschreibt, durch den Fakten, die Entitäten beschreiben, in Texten und einer großen Anzahl von Webseiten realisiert werden können, um zu lernen, wie man Fakten über Personen anhand eines einzigen Seed-Fakts extrahiert."]} {"source": "Recent advances have made it possible to create deep complex-valued neural networks.Despite this progress, many challenging learning tasks have yet to leverage the power of complex representations.Building on recent advances, we propose a new deep complex-valued method for signal retrieval and extraction in the frequency domain.As a case study, we perform audio source separation in the Fourier domain.Our new method takes advantage of the convolution theorem which states that the Fourier transform of two convolved signals is the elementwise product of their Fourier transforms.Our novel method is based on a complex-valued version of Feature-Wise Linear Modulation (FiLM) and serves as the keystone of our proposed signal extraction method.We also introduce a new and explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram.Using the Wall Street Journal Dataset, we compared our phase-aware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and proposed loss.", 
"target": ["Neue Methode zur Signalextraktion im Fourier-Bereich"]} {"source": "We propose an implementation of GNN that predicts and imitates the motion be- haviors from observed swarm trajectory data.The network’s ability to capture interaction dynamics in swarms is demonstrated through transfer learning.We finally discuss the inherent availability and challenges in the scalability of GNN, and proposed a method to improve it with layer-wise tuning and mixing of data enabled by padding.", "target": ["Verbesserung der Skalierbarkeit graphischer neuronaler Netze beim Imitation Learning und der Vorhersage von Schwarmbewegungen", "In der Arbeit wird ein neues Zeitreihenmodell für das Lernen einer Folge von Graphen vorgeschlagen.", "Diese Arbeit befasst sich mit Problemen der Sequenzvorhersage in einem Multiagentensystem."]} {"source": "Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings.Despite their effectiveness, the number of parameters in an embedding layer increases linearly with the number of symbols and poses a critical challenge on memory and storage constraints.In this work, we propose a generic and end-to-end learnable compression framework termed differentiable product quantization (DPQ).We present two instantiations of DPQ that leverage different approximation techniques to enable differentiability in end-to-end learning.Our method can readily serve as a drop-in alternative for any existing embedding layer.Empirically, DPQ offers significant compression ratios (14-238x) at negligible or no performance cost on 10 datasets across three different language tasks.", "target": ["Wir schlagen ein differenzierbares Produktquantisierungsverfahren vor, das die Größe der Einbettungsschicht in einem Ende-zu-Ende Training ohne Leistungseinbußen reduzieren kann.", "Diese Arbeit befasst sich mit Methoden zur Komprimierung von Einbettungsschichten für Inferenzen mit geringem Speicherbedarf, wobei komprimierte Einbettungen zusammen mit den aufgabenspezifischen Modellen in einer differenzierbaren Ende-zu-Ende Methode gelernt werden."]} {"source": "For multi-valued functions---such as when the conditional distribution on targets given the inputs is multi-modal---standard regression approaches are not always desirable because they provide the conditional mean.Modal regression approaches aim to instead find the conditional mode, but are restricted to nonparametric approaches.Such approaches can be difficult to scale, and make it difficult to benefit from parametric function approximation, like neural networks, which can learn complex relationships between inputs and targets.In this work, we propose a parametric modal regression algorithm, by using the implicit function theorem to develop an objective for learning a joint parameterized function over inputs and targets.We empirically demonstrate on several synthetic problems that our method(i) can learn multi-valued functions and produce the conditional modes,(ii) scales well to high-dimensional inputs and(iii) is even more effective for certain unimodal problems, particularly for high frequency data where the joint function over inputs and targets can better capture the complex relationship between them.We conclude by showing that our method provides small improvements on two regression datasets that have asymmetric distributions over the targets.", "target": ["Wir stellen einen einfachen und neuartigen modalen Regressionsalgorithmus vor, der sich leicht auf große Probleme 
übertragen lässt. ", "Diese Arbeit schlägt einen impliziten Funktionsansatz zum Erlernen der Modi der multimodalen Regression vor.", "In der vorliegenden Arbeit wird ein parametrischer Ansatz zur Schätzung des bedingten Modus unter Verwendung des Impliziten Funktionssatzes für multimodale Verteilungen vorgeschlagen. "]} {"source": "Deep reinforcement learning algorithms require large amounts of experience to learn an individual task.While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality.Current methods rely heavily on on-policy experience, limiting their sample efficiency.They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems.In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control.In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience.This probabilistic interpretation enables posterior sampling for structured and efficient exploration.We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency.Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.", "target": ["Effiziente Meta-RL durch die Kombination von Variationsschlussfolgerung von probabilistischen Aufgabenvariablen mit Off-Policy RL.", "In diesem Beitrag wird vorgeschlagen, während der Meta-Trainingszeit Off-Policy RL einzusetzen, um die Stichprobeneffizienz von Meta-RL Methoden erheblich zu verbessern."]} {"source": "Knowledge bases, massive collections of facts (RDF triples) on diverse topics, support vital modern applications.However, existing knowledge bases contain very little data compared to the wealth of information on the Web.This is because the industry standard in knowledge base creation and augmentation suffers from a serious bottleneck: they rely on domain experts to identify appropriate web sources to extract data from.Efforts to fully automate knowledge extraction have failed to improve this standard: these automated systems are able to retrieve much more data and from a broader range of sources, but they suffer from very low precision and recall.As a result, these large-scale extractions remain unexploited.In this paper, we present MIDAS, a system that harnesses the results of automated knowledge extraction pipelines to repair the bottleneck in industrial knowledge creation and augmentation processes.MIDAS automates the suggestion of good-quality web sources and describes what to extract with respect to augmenting an existing knowledge base.We make three major contributions.First, we introduce a novel concept, web source slices, to describe the contents of a web source.Second, we define a profit function to quantify the value of a web source slice with respect to augmenting an existing knowledge base.Third, we develop effective and highly-scalable algorithms to derive high-profit web source slices.We demonstrate that MIDAS produces high-profit results and outperforms the baselines significantly on both real-word and synthetic datasets.", "target": ["Dieser Artikel konzentriert sich auf die Identifizierung von qualitativ hochwertigen Webquellen für industrielle Knowledge 
Base Augmentation Pipelines."]} {"source": "We explore the match prediction problem where one seeks to estimate the likelihood of a group of M items preferred over another, based on partial group comparison data.Challenges arise in practice.As existing state-of-the-art algorithms are tailored to certain statistical models, we have different best algorithms across distinct scenarios.Worse yet, we have no prior knowledge on the underlying model for a given scenario.These call for a unified approach that can be universally applied to a wide range of scenarios and achieve consistently high performances.To this end, we incorporate deep learning architectures so as to reflect the key structural features that most state-of-the-art algorithms, some of which are optimal in certain settings, share in common.This enables us to infer hidden models underlying a given dataset, which govern in-group interactions and statistical patterns of comparisons, and hence to devise the best algorithm tailored to the dataset at hand.Through extensive experiments on synthetic and real-world datasets, we evaluate our framework in comparison to state-of-the-art algorithms.It turns out that our framework consistently leads to the best performance across all datasets in terms of cross entropy loss and prediction accuracy, while the state-of-the-art algorithms suffer from inconsistent performances across different datasets.Furthermore, we show that it can be easily extended to attain satisfactory performances in rank aggregation tasks, suggesting that it can be adaptable for other tasks as well.", "target": ["Wir untersuchen die Vorzüge des Einsatzes neuronaler Netze bei dem Problem der Vorhersage von Übereinstimmungen, bei dem es darum geht, die Wahrscheinlichkeit abzuschätzen, dass eine Gruppe von M Gegenständen gegenüber einer anderen bevorzugt wird, und zwar auf der Grundlage partieller Gruppenvergleichsdaten.", "In diesem Beitrag wird eine Lösung für das Problem der Rangfolge von Mengen durch ein tiefes neuronales Netz vorgeschlagen und eine Architektur für diese Aufgabe entworfen, die sich an früheren, manuell entwickelten Algorithmen orientiert.", "In diesem Beitrag wird eine Technik zur Lösung des Problems der Treffervorhersage mithilfe einer Deep-Learning-Architektur vorgestellt."]} {"source": "Recurrent Neural Networks (RNNs) are designed to handle sequential data but suffer from vanishing or exploding gradients. Recent work on Unitary Recurrent Neural Networks (uRNNs) have been used to address this issue and in some cases, exceed the capabilities of Long Short-Term Memory networks (LSTMs). We propose a simpler and novel update scheme to maintain orthogonal recurrent weight matrices without using complex valued matrices.This is done by parametrizing with a skew-symmetric matrix using the Cayley transform.Such a parametrization is unable to represent matrices with negative one eigenvalues, but this limitation is overcome by scaling the recurrent weight matrix by a diagonal matrix consisting of ones and negative ones. 
The proposed training scheme involves a straightforward gradient calculation and update step.In several experiments, the proposed scaled Cayley orthogonal recurrent neural network (scoRNN) achieves superior results with fewer trainable parameters than other unitary RNNs.", "target": ["Ein neuartiger Ansatz zur Erhaltung orthogonaler rekurrenter Gewichtsmatrizen in einem RNN.", "Stellt ein Schema zum Lernen der rekurrenten Parametermatrix in einem neuronalen Netz vor, das die Cayley-Transformation und eine skalierende Gewichtsmatrix verwendet. ", "Diese Arbeit schlägt eine RNN-Reparametrisierung der rekurrenten Gewichte mit einer schiefsymmetrischen Matrix unter Verwendung der Cayley-Transformation vor, um die rekurrente Gewichtsmatrix orthogonal zu halten.", "Die neuartige Parametrisierung von RNNs ermöglicht die relativ einfache Darstellung orthogonaler Gewichtsmatrizen."]} {"source": "A large number of natural language processing tasks exist to analyze syntax, semantics, and information content of human language.These seemingly very different tasks are usually solved by specially designed architectures.In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, thus a single task-independent model can be used across different tasks.We perform extensive experiments to test this insight on 10 disparate tasks as broad as dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect based sentiment analysis (sentiment), and many others, achieving comparable performance as state-of-the-art specialized models.We further demonstrate benefits in multi-task learning.We convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.", "target": ["Wir verwenden ein einziges Modell, um eine Vielzahl von Aufgaben zur Analyse natürlicher Sprache zu lösen, indem wir sie in einem einheitlichen Span-Relation-Format formulieren.", "Diese Arbeit verallgemeinert eine breite Palette von Aufgaben zur Verarbeitung natürlicher Sprache in einem einzigen, auf Spans basierenden Rahmen und schlägt eine allgemeine Architektur zur Lösung all dieser Probleme vor.", "In dieser Arbeit wird eine einheitliche Formulierung für verschiedene NLP-Aufgaben auf Phrasen- und Token-Ebene vorgestellt."]} {"source": "Large matrix inversions have often been cited as a major impediment to scaling Gaussian process (GP) models.With the use of GPs as building blocks for ever more sophisticated Bayesian deep learning models, removing these impediments is a necessary step for achieving large scale results.We present a variational approximation for a wide range of GP models that does not require a matrix inverse to be performed at each optimisation step.Our bound instead directly parameterises a free matrix, which is an additional variational parameter.At the local maxima of the bound, this matrix is equal to the matrix inverse.We prove that our bound gives the same guarantees as earlier variational approximations.We demonstrate some beneficial properties of the bound experimentally, although significant wall clock time speed improvements will require future improvements in optimisation and implementation.", "target": ["Wir stellen eine untere Variationsschranke für GP-Modelle vor, die ohne die Berechnung teurer Matrixoperationen wie Inversen optimiert werden kann 
und dabei die gleichen Garantien wie bestehende Variationsannäherungen bietet."]} {"source": "It has been shown that using geometric spaces with non-zero curvature instead of plain Euclidean spaces with zero curvature improves performance on a range of Machine Learning tasks for learning representations.Recent work has leveraged these geometries to learn latent variable models like Variational Autoencoders (VAEs) in spherical and hyperbolic spaces with constant curvature.While these approaches work well on particular kinds of data that they were designed for, e.g., tree-like data for a hyperbolic VAE, there exists no generic approach unifying all three models.We develop a Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant curvature Riemannian manifolds, where the per-component curvature can be learned.This generalizes the Euclidean VAE to curved latent spaces, as the model essentially reduces to the Euclidean VAE if curvatures of all latent space components go to 0.", "target": ["Variationale Autoencoder mit latenten Räumen, die als Produkte von Riemannschen Mannigfaltigkeiten mit konstanter Krümmung modelliert sind, verbessern die Bildrekonstruktion gegenüber Varianten mit nur einer Mannigfaltigkeit.", "In diesem Artikel wird eine allgemeine Formulierung des Begriffs einer VAE mit einem latenten Raum, der aus einer gekrümmten Mannigfaltigkeit besteht, vorgestellt.", "In dieser Arbeit geht es um die Entwicklung von VAEs in nicht-euklidischen Räumen."]} {"source": "Machine learning algorithms for generating molecular structures offer a promising new approach to drug discovery.We cast molecular optimization as a translation problem, where the goal is to map an input compound to a target compound with improved biochemical properties.Remarkably, we observe that when generated molecules are iteratively fed back into the translator, molecular compound attributes improve with each step.We show that this finding is invariant to the choice of translation model, making this a \"black box\" algorithm.We call this method Black Box Recursive Translation (BBRT), a new inference method for molecular property optimization.This simple, powerful technique operates strictly on the inputs and outputs of any translation model.We obtain new state-of-the-art results for molecular property optimization tasks using our simple drop-in replacement with well-known sequence and graph-based models.Our method provides a significant boost in performance relative to its non-recursive peers with just a simple \"for\" loop.Further, BBRT is highly interpretable, allowing users to map the evolution of newly discovered compounds from known starting points.", "target": ["Wir stellen einen Black-Box-Algorithmus für die wiederholte Optimierung von Verbindungen unter Verwendung eines Übersetzungs Frameworks vor.", "Die Autoren stellen die Moleküloptimierung als ein Sequence-to-sequence Problem dar und erweitern bestehende Methoden zur Verbesserung von Molekülen.
Sie zeigen, dass dies für die Optimierung von logP, nicht aber von QED von Vorteil ist.", "Die Arbeit baut auf bestehenden Übersetzungsmodellen auf, die für die molekulare Optimierung entwickelt wurden, und verwendet iterative sequence-to-sequence oder Graph-zu-Graph Übersetzungsmodelle."]} {"source": "Deep Neural Networks (DNNs) are increasingly deployed in cloud servers and autonomous agents due to their superior performance.The deployed DNN is either leveraged in a white-box setting (model internals are publicly known) or a black-box setting (only model outputs are known) depending on the application.A practical concern in the rush to adopt DNNs is protecting the models against Intellectual Property (IP) infringement.We propose BlackMarks, the first end-to-end multi-bit watermarking framework that is applicable in the black-box scenario.BlackMarks takes the pre-trained unmarked model and the owner’s binary signature as inputs.The output is the corresponding marked model with specific keys that can be later used to trigger the embedded watermark.To do so, BlackMarks first designs a model-dependent encoding scheme that maps all possible classes in the task to bit ‘0’ and bit ‘1’.Given the owner’s watermark signature (a binary string), a set of key image and label pairs is designed using targeted adversarial attacks.The watermark (WM) is then encoded in the distribution of output activations of the DNN by fine-tuning the model with a WM-specific regularized loss.To extract the WM, BlackMarks queries the model with the WM key images and decodes the owner’s signature from the corresponding predictions using the designed encoding scheme.We perform a comprehensive evaluation of BlackMarks’ performance on MNIST, CIFAR-10, ImageNet datasets and corroborate its effectiveness and robustness.BlackMarks preserves the functionality of the original DNN and incurs negligible WM embedding overhead as low as 2.054%.", "target": ["Vorschlag des ersten Wasserzeichen Frameworks für die Einbettung und Extraktion von Multibit-Signaturen unter Verwendung der DNN-Ausgänge. 
", "Es wird eine Methode für das Multi-Bit-Wasserzeichen von neuronalen Netzen in einer Black-Box Umgebung vorgeschlagen und gezeigt, dass die Vorhersagen bestehender Modelle eine Multi-Bit-Zeichenkette tragen können, die später zur Überprüfung des Eigentums verwendet werden kann.", "Die Arbeit schlägt einen Ansatz für Modell-Wasserzeichen vor, bei dem das Wasserzeichen eine in das Modell eingebettete Bitfolge ist, die Teil eines Feinabstimmungsprozesses ist."]} {"source": "Adversarial training provides a principled approach for training robust neural networks.From an optimization perspective, the adversarial training is essentially solving a minmax robust optimization problem.The outer minimization is trying to learn a robust classifier, while the inner maximization is trying to generate adversarial samples.Unfortunately, such a minmax problem is very difficult to solve due to the lack of convex-concave structure.This work proposes a new adversarial training method based on a general learning-to-learn framework.Specifically, instead of applying the existing hand-design algorithms for the inner problem, we learn an optimizer, which is parametrized as a convolutional neural network.At the same time, a robust classifier is learned to defense the adversarial attack generated by the learned optimizer.From the perspective of generative learning, our proposed method can be viewed as learning a deep generative model for generating adversarial samples, which is adaptive to the robust classification.Our experiments demonstrate that our proposed method significantly outperforms existing adversarial training methods on CIFAR-10 and CIFAR-100 datasets.", "target": ["Sie wissen nicht, wie man optimiert? Dann lernen Sie einfach zu optimieren!", "In diesem Artikel wird ein Weg vorgeschlagen, Bildklassifizierungsmodelle so zu trainieren, dass sie resistent gegen L-infinity perturbation Attacken sind.", "In diesem Beitrag wird vorgeschlagen, einen Angreifer mit Hilfe des Learning-to-Learn-Konzepts zu erlernen."]} {"source": "In this work we introduce a new framework for performing temporal predictionsin the presence of uncertainty.It is based on a simple idea of disentangling com-ponents of the future state which are predictable from those which are inherentlyunpredictable, and encoding the unpredictable components into a low-dimensionallatent variable which is fed into the forward model.Our method uses a simple su-pervised training objective which is fast and easy to train.We evaluate it in thecontext of video prediction on multiple datasets and show that it is able to consi-tently generate diverse predictions without the need for alternating minimizationover a latent space or adversarial training.", "target": ["Eine einfache und leicht zu trainierende Methode für multimodale Vorhersagen in Zeitreihen. 
", "In diesem Beitrag wird ein Modell zur Vorhersage von Zeitserien vorgestellt, das eine deterministische Zuordnung erlernt und ein weiteres Netz trainiert, um künftige Bilder anhand der Eingabe und des residual Fehlers des ersten Netzes vorherzusagen.", "In dem Beitrag wird ein Modell für die Vorhersage unter Unsicherheit vorgeschlagen, bei dem zwischen deterministischer Komponentenvorhersage und unsicherer Komponentenvorhersage unterschieden wird."]} {"source": "Conducting reinforcement-learning experiments can be a complex and timely process.A full experimental pipeline will typically consist of a simulation of an environment, an implementation of one or many learning algorithms, a variety of additional components designed to facilitate the agent-environment interplay, and any requisite analysis, plotting, and logging thereof.In light of this complexity, this paper introduces simple rl, a new open source library for carrying out reinforcement learning experiments in Python 2 and 3 with a focus on simplicity.The goal of simple_rl is to support seamless, reproducible methods for running reinforcement learning experiments.This paper gives an overview of the core design philosophy of the package, how it differs from existing libraries, and showcases its central features.", "target": ["Dieser Artikel stellt simple_rl vor, eine neue Open-Source-Bibliothek zur Durchführung von Reinforcement Learning Experimenten in Python 2 und 3 mit dem Schwerpunkt auf Einfachheit."]} {"source": "Wasserstein GAN(WGAN) is a model that minimizes the Wasserstein distance between a data distribution and sample distribution.Recent studies have proposed stabilizing the training process for the WGAN and implementing the Lipschitz constraint.In this study, we prove the local stability of optimizing the simple gradient penalty $\\mu$-WGAN(SGP $\\mu$-WGAN) under suitable assumptions regarding the equilibrium and penalty measure $\\mu$.The measure valued differentiation concept is employed to deal with the derivative of the penalty terms, which is helpful for handling abstract singular measures with lower dimensional support.Based on this analysis, we claim that penalizing the data manifold or sample manifold is the key to regularizing the original WGAN with a gradient penalty.Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support our theoretical results.", "target": ["Diese Arbeit beschäftigt sich mit der Stabilität einer einfachen Gradientenbestrafung, $\\mu$-WGAN-Optimierung genannt, durch Einführung eines Konzepts der maßstäblichen Differenzierung.", "WGAN mit einem quadratischen, nullpunktzentrierten Gradientenbestrafungs Term für ein allgemeines Maß wird untersucht.", "Charakterisiert die Konvergenz des gradientenbestraften Wasserstein-GAN."]} {"source": "We present Random Partition Relaxation (RPR), a method for strong quantization of the parameters of convolutional neural networks to binary (+1/-1) and ternary (+1/0/-1) values.Starting from a pretrained model, we first quantize the weights and then relax random partitions of them to their continuous values for retraining before quantizing them again and switching to another weight partition for further adaptation. 
We empirically evaluate the performance of RPR with ResNet-18, ResNet-50 and GoogLeNet on the ImageNet classification task for binary and ternary weight networks.We show accuracies beyond the state-of-the-art for binary- and ternary-weight GoogLeNet and competitive performance for ResNet-18 and ResNet-50 using a SGD-based training method that can easily be integrated into existing frameworks.", "target": ["Modernste Trainingsmethode für binäre und ternäre Gewichtsnetze auf der Grundlage der alternierenden Optimierung von zufällig entspannten Gewichtspartitionen.", "In der Arbeit wird ein neues Trainingsschema zur Optimierung eines ternären neuronalen Netzes vorgeschlagen.", "Die Autoren schlagen RPR vor, eine Methode zur zufälligen Partitionierung und Quantisierung von Gewichten und zum Trainieren der verbleibenden Parameter, gefolgt von Entspannung in abwechselnden Zyklen, um quantisierte Modelle zu trainieren."]} {"source": "Learning long-term dependencies is a key long-standing challenge of recurrent neural networks (RNNs).Hierarchical recurrent neural networks (HRNNs) have been considered a promising approach as long-term dependencies are resolved through shortcuts up and down the hierarchy.Yet, the memory requirements of Truncated Backpropagation Through Time (TBPTT) still prevent training them on very long sequences.In this paper, we empirically show that in (deep) HRNNs, propagating gradients back from higher to lower levels can be replaced by locally computable losses, without harming the learning capability of the network, over a wide range of tasks.This decoupling by local losses reduces the memory requirements of training by a factor exponential in the depth of the hierarchy in comparison to standard TBPTT.", "target": ["Wir ersetzen einige Gradientenpfade in hierarchischen RNNs durch einen Hilfsverlust. 
Wir zeigen, dass dies die Speicherkosten reduzieren kann, während die Leistung erhalten bleibt.", "In dem Artikel wird eine hierarchische RNN-Architektur vorgestellt, die speichereffizienter trainiert werden kann.", "Die vorgeschlagene Arbeit schlägt vor, die verschiedenen Schichten der Hierarchie in RNN mit Hilfe von Hilfsverlusten zu entkoppeln."]} {"source": "In a typical deep learning approach to a computer vision task, Convolutional Neural Networks (CNNs) are used to extract features at varying levels of abstraction from an image and compress a high dimensional input into a lower dimensional decision space through a series of transformations.In this paper, we investigate how a class of input images is eventually compressed over the course of these transformations.In particular, we use singular value decomposition to analyze the relevant variations in feature space.These variations are formalized as the effective dimension of the embedding.We consider how the effective dimension varies across layers within class.We show that across datasets and architectures, the effective dimension of a class increases before decreasing further into the network, suggesting some sort of initial whitening transformation.Further, the decrease rate of the effective dimension deeper in the network corresponds with training performance of the model.", "target": ["Neuronale Netze, die eine gute Klassifizierung vornehmen, projizieren Punkte in sphärische Formen, bevor sie in geringere Dimensionen komprimiert werden."]} {"source": "Deep learning methods have achieved high performance in sound recognition tasks.Deciding how to feed the training data is important for further performance improvement.We propose a novel learning method for deep sound recognition: Between-Class learning (BC learning).Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds.We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio.We then input the mixed sound to the model and train the model to output the mixing ratio.The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes.The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial.Furthermore, we construct a new deep sound recognition network (EnvNet-v2) and train it with BC learning.As a result, we achieved a performance surpasses the human level.", "target": ["Wir schlagen eine neuartige Lernmethode für tiefe Tonerkennung vor, das sogenannte BC-Lernen.", "Die Autoren definierten eine neue Lernaufgabe, bei der ein DNN das Mischungsverhältnis zwischen Geräuschen aus zwei verschiedenen Klassen vorhersagen muss, um die Unterscheidungskraft des schließlich gelernten Netzwerks zu erhöhen.", "Es wird eine Methode zur Verbesserung der Leistung einer generischen Lernmethode durch die Erzeugung von Trainingsmustern \"zwischen den Klassen\" vorgeschlagen und die grundlegende Intuition und Notwendigkeit der vorgeschlagenen Technik vorgestellt."]} {"source": "Spatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate 
modeling, traffic prediction, video caching predictions, and so on.While numerous studies have been conducted, most existing works assume that the data from different sources or across different locations are equally reliable.Due to cost, accessibility, or other factors, it is inevitable that the data quality could vary, which introduces significant biases into the model and leads to unreliable prediction results.The problem could be exacerbated in black-box prediction models, such as deep neural networks.In this paper, we propose a novel solution that can automatically infer data quality levels of different sources through local variations of spatiotemporal signals without explicit labels.Furthermore, we integrate the estimate of data quality level with graph convolutional networks to exploit their efficient structures.We evaluate our proposed method on forecasting temperatures in Los Angeles.", "target": ["Wir schlagen eine Methode vor, die das zeitlich variierende Qualitätsniveau der Daten für räumlich-zeitliche Prognosen ohne explizit zugewiesene Labels ableitet.", "Einführung einer neuen Definition von Datenqualität, die sich auf den Begriff der lokalen Variation stützt, der in (Zhou und Scholkopf) definiert wurde, und ihn auf mehrere heterogene Datenquellen ausweitet.", "In dieser Arbeit wurde eine neue Methode zur Bewertung der Qualität verschiedener Datenquellen mit dem zeitvariablen Graphenmodell vorgeschlagen, wobei das Qualitätsniveau als Regularisierungsterm in der Zielfunktion verwendet wird"]} {"source": "Human perception of 3D shapes goes beyond reconstructing them as a set of points or a composition of geometric primitives: we also effortlessly understand higher-level shape structure such as the repetition and reflective symmetry of object parts.In contrast, recent advances in 3D shape sensing focus more on low-level geometry but less on these higher-level relationships.In this paper, we propose 3D shape programs, integrating bottom-up recognition systems with top-down, symbolic program structure to capture both low-level geometry and high-level structural priors for 3D shapes.Because there are no annotations of shape programs for real shapes, we develop neural modules that not only learn to infer 3D shape programs from raw, unannotated shapes, but also to execute these programs for shape reconstruction.After initial bootstrapping, our end-to-end differentiable model learns 3D shape programs by reconstructing shapes in a self-supervised manner.Experiments demonstrate that our model accurately infers and executes 3D shape programs for highly complex shapes from various categories.It can also be integrated with an image-to-shape module to infer 3D shape programs directly from an RGB image, leading to 3D shape reconstructions that are both more accurate and more physically plausible.", "target": ["Wir schlagen 3D-Form Programme vor, eine strukturierte, kompositorische Formdarstellung. Unser Modell lernt, Form Programme abzuleiten und auszuführen, um 3D Formen zu erklären.", "Ein Ansatz zur Ableitung von Formprogrammen aus 3D Modellen. 
Die Architektur besteht aus einem rekurrenten Netzwerk, das eine 3D Form kodiert und Anweisungen ausgibt, und einem zweiten Modul, das das Programm in 3D rendert.", "In diesem Beitrag wird eine semantische Beschreibung auf hoher Ebene für 3D Formen vorgestellt, die durch das ShapeProgram gegeben ist."]} {"source": "Deep Reinforcement Learning (Deep RL) has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks.Yet, conventional regularization techniques in training neural networks (e.g., $L_2$ regularization, dropout) have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment.In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks.Interestingly, we find conventional regularization techniques on the policy networks can often bring large improvement on the task performance, and the improvement is typically more significant when the task is more difficult.We also compare with the widely used entropy regularization and find $L_2$ regularization is generally better.Our findings are further confirmed to be robust against the choice of training hyperparameters.We also study the effects of regularizing different components and find that only regularizing the policy network is typically enough.We hope our study provides guidance for future practices in regularizing policy optimization algorithms.", "target": ["Wir zeigen, dass konventionelle Regularisierungsmethoden (z.B. $L_2$, Dropout), die in RL Methoden weitgehend ignoriert wurden, bei der Optimierung von Richtlinien sehr effektiv sein können.", "Die Autoren untersuchen eine Reihe bestehender Methoden zur direkten Optimierung von Strategien im Bereich des Reinforcement Learnings und bieten eine detaillierte Untersuchung der Auswirkungen von Vorschriften auf die Leistung und das Verhalten von Agenten, die diesen Methoden folgen.", "Diese Arbeit bietet eine Studie über die Auswirkung der Regularisierung auf die Leistung in Trainingsumgebungen in Regel-Optimierungsmethoden in mehreren kontinuierlichen Steuerungsaufgaben."]} {"source": "We introduce FigureQA, a visual reasoning corpus of over one million question-answer pairs grounded in over 100,000 images.The images are synthetic, scientific-style figures from five classes: line plots, dot-line plots, vertical and horizontal bar graphs, and pie charts.We formulate our reasoning task by generating questions from 15 templates; questions concern various relationships between plot elements and examine characteristics like the maximum, the minimum, area-under-the-curve, smoothness, and intersection.To resolve, such questions often require reference to multiple plot elements and synthesis of information distributed spatially throughout a figure.To facilitate the training of machine learning systems, the corpus also includes side data that can be used to formulate auxiliary objectives.In particular, we provide the numerical data used to generate each figure as well as bounding-box annotations for all plot elements.We study the proposed visual reasoning task by training several models, including the recently proposed Relation Network as strong baseline.Preliminary results indicate that the task poses a significant machine learning challenge.We envision FigureQA as a first step towards developing models that can intuitively recognize patterns from visual representations of data.", 
"target": ["Wir präsentieren einen Frage-Antwort-Datensatz, FigureQA, als einen ersten Schritt zur Entwicklung von Modellen, die intuitiv Muster aus visuellen Darstellungen von Daten erkennen können.", "In diesem Beitrag wird ein Datensatz zur Beantwortung von Beispiel Fragen zu Abbildungen vorgestellt, der Schlussfolgerungen zu Abbildungselementen enthält.", "In diesem Beitrag wird ein neuer Datensatz für visuelles Reasoning namens Figure-QA vorgestellt, der aus 140.000 Abbildungen und 1,55 Mio. QA-Paaren besteht und bei der Entwicklung von Modellen helfen kann, die nützliche Informationen aus visuellen Datendarstellungen extrahieren können."]} {"source": "In this paper, I discuss some varieties of explanation that can arisein intelligent agents.I distinguish between process accounts, whichaddress the detailed decisions made during heuristic search, andpreference accounts, which clarify the ordering of alternativesindependent of how they were generated.I also hypothesize which types of users will appreciate which types of explanation.In addition, I discuss three facets of multi-step decision making-- conceptual inference, plan generation, and plan execution --in which explanations can arise.I also consider alternative waysto present questions to agents and for them provide their answers.", "target": ["In diesem Positionspapier werden verschiedene Arten von Selbsterklärungen analysiert, die in Planungs- und verwandten Systemen auftreten können. ", "Erörtert verschiedene Aspekte von Erklärungen, insbesondere im Zusammenhang mit sequenzieller Entscheidungsfindung. "]} {"source": "Generative deep learning has sparked a new wave of Super-Resolution (SR) algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details.Multi-frame Super-Resolution (MFSR) offers a more grounded approach to the ill-posed problem, by conditioning on multiple low-resolution views.This is important for satellite monitoring of human impact on the planet -- from deforestation, to human rights violations -- that depend on reliable imagery.To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion:(i) co-registration,(ii) fusion,(iii) up-sampling, and(iv) registration-at-the-loss.Co-registration of low-res views is learned implicitly through a reference-frame channel, with no explicit registration mechanism.We learn a global fusion operator that is applied recursively on an arbitrary number of low-res pairs.We introduce a registered loss, by learning to align the SR output to a ground-truth through ShiftNet.We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth observation data at scale.Our approach recently topped the European Space Agency's MFSR competition on real-world satellite imagery.", "target": ["Der erste Deep-Learning-Ansatz für MFSR, der Registrierung, Fusion und Up-Sampling durchgängig löst.", "In diesem Beitrag wird ein durchgängiger Algorithmus für die Superauflösung mehrerer Bilder vorgeschlagen, der auf paarweisen Co-Registrierungen und Fusionsblöcken (Convolutional Residual Blocks) beruht, die in einem Encoder-Decoder Netzwerk \"HighRes-Net\" eingebettet sind, das das Superauflösungsbild schätzt.", "Diese Arbeit schlägt einen Rahmen vor, der eine rekursive Fusion mit Co-Registrierungs Verlusten beinhaltet, um das Problem zu lösen, dass die Ergebnisse der Superauflösung und die hochauflösenden Labels nicht pixelgenau 
ausgerichtet sind."]} {"source": "Large mini-batch parallel SGD is commonly used for distributed training of deep networks.Approaches that use tightly-coupled exact distributed averaging based on AllReduce are sensitive to slow nodes and high-latency communication.In this work we show the applicability of Stochastic Gradient Push (SGP) for distributed training.SGP uses a gossip algorithm called PushSum for approximate distributed averaging, allowing for much more loosely coupled communications which can be beneficial in high-latency or high-variability scenarios.The tradeoff is that approximate distributed averaging injects additional noise in the gradient which can affect the train and test accuracies.We prove that SGP converges to a stationary point of smooth, non-convex objective functions.Furthermore, we validate empirically the potential of SGP.For example, using 32 nodes with 8 GPUs per node to train ResNet-50 on ImageNet, where nodes communicate over 10Gbps Ethernet, SGP completes 90 epochs in around 1.5 hours while AllReduce SGD takes over 5 hours, and the top-1 validation accuracy of SGP remains within 1.2% of that obtained using AllReduce SGD.", "target": ["Verwenden von Gossip-basierten approximativen verteilten Durchschnittsberechnungen für verteiltes Training über Netze mit hoher Latenz, anstelle von exakten verteilten Durchschnittsberechnungen wie AllReduce.", "Die Autoren schlagen vor, Gossip-Algorithmen als allgemeine Methode zur Berechnung des ungefähren Durchschnitts über eine Gruppe von Arbeitnehmern zu verwenden.", "Die Arbeit beweist die Konvergenz der SGP für nicht-konvexe glatte Funktionen und zeigt, dass die SGP eine signifikante Beschleunigung in der Low-Latency-Umgebung erreichen kann, ohne zu viel Vorhersageleistung zu opfern. 
"]} {"source": "In this paper, we extend the persona-based sequence-to-sequence (Seq2Seq) neural network conversation model to a multi-turn dialogue scenario by modifying the state-of-the-art hredGAN architecture to simultaneously capture utterance attributes such as speaker identity, dialogue topic, speaker sentiments and so on.The proposed system, phredGAN has a persona-based HRED generator (PHRED) and a conditional discriminator.We also explore two approaches to accomplish the conditional discriminator: (1) $phredGAN_a$, a system that passes the attribute representation as an additional input into a traditional adversarial discriminator, and (2) $phredGAN_d$, a dual discriminator system which in addition to the adversarial discriminator, collaboratively predicts the attribute(s) that generated the input utterance.To demonstrate the superior performance of phredGAN over the persona SeqSeq model, we experiment with two conversational datasets, the Ubuntu Dialogue Corpus (UDC) and TV series transcripts from the Big Bang Theory and Friends.Performance comparison is made with respect to a variety of quantitative measures as well as crowd-sourced human evaluation.We also explore the trade-offs from using either variant of $phredGAN$ on datasets with many but weak attribute modalities (such as with Big Bang Theory and Friends) and ones with few but strong attribute modalities (customer-agent interactions in Ubuntu dataset).", "target": ["In diesem Beitrag wird ein adversarial Learning Framework für neuronale Konversationsmodelle mit Personas entwickelt.", "Diese Arbeit schlägt eine Erweiterung von hredGAN vor, um gleichzeitig eine Reihe von Attributeinbettungen zu lernen, die die Persona jedes Sprechers repräsentieren und Persona-basierte Antworten generieren kann."]} {"source": "We introduce bio-inspired artificial neural networks consisting of neurons that are additionally characterized by spatial positions.To simulate properties of biological systems we add the costs penalizing long connections and the proximity of neurons in a two-dimensional space.Our experiments show that in the case where the network performs two different tasks, the neurons naturally split into clusters, where each cluster is responsible for processing a different task.This behavior not only corresponds to the biological systems, but also allows for further insight into interpretability or continual learning.", "target": ["Biologisch inspirierte künstliche neuronale Netze, die aus in einem zweidimensionalen Raum angeordneten Neuronen bestehen, sind in der Lage, unabhängige Gruppen für die Ausführung verschiedener Aufgaben zu bilden."]} {"source": "The transformer has become a central model for many NLP tasks from translation to language modeling to representation learning.Its success demonstrates the effectiveness of stacked attention as a replacement for recurrence for many tasks.In theory attention also offers more insights into the model’s internal decisions; however, in practice when stacked it quickly becomes nearly as fully-connected as recurrent models.In this work, we propose an alternative transformer architecture, discrete transformer, with the goal of better separating out internal model decisions.The model uses hard attention to ensure that each step only depends on a fixed context.Additionally, the model uses a separate “syntactic” controller to separate out network structure from decision making.Finally we show that this approach can be further sparsified with direct regularization.Empirically, this 
approach is able to maintain the same level of performance on several datasets, while discretizing reasoning decisions over the data.", "target": ["Diskreter Transformer, der mit harter Aufmerksamkeit sicherstellt, dass jeder Schritt nur von einem festen Kontext abhängt.", "In diesem Beitrag werden Modifikationen an der Standard Transformer Architektur vorgestellt, mit dem Ziel, die Interpretierbarkeit zu verbessern und gleichzeitig die Leistung bei NLP-Aufgaben zu erhalten.", "In diesem Artikel werden drei diskrete Transformer vorgeschlagen: ein diskretes und stochastisches Gumbel-Softmax-basiertes Aufmerksamkeitsmodul, ein syntaktischer und semantischer Two-Stream Transformer und eine Sparsity-Regularisierung."]} {"source": "Deep predictive coding networks are neuroscience-inspired unsupervised learning models that learn to predict future sensory states.We build upon the PredNet implementation by Lotter, Kreiman, and Cox (2016) to investigate if predictive coding representations are useful to predict brain activity in the visual cortex.We use representational similarity analysis (RSA) to compare PredNet representations to functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) data from the Algonauts Project (Cichy et al., 2019).In contrast to previous findings in the literature (Khaligh-Razavi & Kriegeskorte, 2014), we report empirical data suggesting that unsupervised models trained to predict frames of videos without further fine-tuning may outperform supervised image classification baselines in terms of correlation to spatial (fMRI) and temporal (MEG) data.", "target": ["Wir zeigen empirische Belege dafür, dass Vorhersehende Coding Modelle Repräsentationen liefern, die stärker mit Gehirndaten korrelieren als überwachte Bilderkennungsmodelle."]} {"source": "The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples.Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest.Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion.We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task.Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations.The method deals with meta-learning (such as domain adaptation, transfer and multi-task learning) in a unified fashion, and can easily deal with data arising from different types of sources.Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network.", "target": ["Ein allgemeines Framework für die Handhabung von Transfer- und Multi-Task Lernen unter Verwendung von Paaren von Autoencodern mit aufgabenspezifischen und gemeinsamen Gewichten.", "Vorschlagen eines generischen Frameworks für Ende-zu-Ende Transfer Lernen / Domänenanpassung 
mit tiefen neuronalen Netzen. ", "Diese Arbeit schlägt ein Modell vor, das es Architekturen von tiefen neuronalen Netzen ermöglicht, Parameter über verschiedene Datensätze hinweg gemeinsam zu nutzen, und wendet es auf das Transferlernen an.", "Der Artikel konzentriert sich auf das Erlernen gemeinsamer Merkmale aus Daten aus verschiedenen Bereichen und endet mit einer allgemeinen Architektur für Multi-Task-, Semi-Supervised- und Transfer-Lernen."]} {"source": "Deep neural networks and decision trees operate on largely separate paradigms; typically, the former performs representation learning with pre-specified architectures, while the latter is characterised by learning hierarchies over pre-specified features with data-driven architectures.We unite the two via adaptive neural trees (ANTs), a model that incorporates representation learning into edges, routing functions and leaf nodes of a decision tree, along with a backpropagation-based training algorithm that adaptively grows the architecture from primitive modules (e.g., convolutional layers).ANTs allow increased interpretability via hierarchical clustering, e.g., learning meaningful class associations, such as separating natural vs. man-made objects.We demonstrate this on classification and regression tasks, achieving over 99% and 90% accuracy on the MNIST and CIFAR-10 datasets, and outperforming standard neural networks, random forests and gradient boosted trees on the SARCOS dataset.Furthermore, ANT optimisation naturally adapts the architecture to the size and complexity of the training data.", "target": ["Wir schlagen ein Framework vor, um Entscheidungsbäume und neuronale Netze zu kombinieren, und zeigen anhand von Bildklassifizierungsaufgaben, dass er die komplementären Vorteile der beiden Ansätze nutzt und gleichzeitig die Grenzen früherer Arbeiten überwindet.", "Die Autoren schlugen ein neues Modell, Adaptive Neural Trees, vor, indem sie das Repräsentationslernen und die Gradientenoptimierung von neuronalen Netzen mit dem Architekturlernen von Entscheidungsbäumen kombinierten.", "In diesem Beitrag wird der Ansatz der adaptiven neuronalen Bäume vorgeschlagen, um die beiden Lernparadigmen der tiefen neuronalen Netze und der Entscheidungsbäume zu kombinieren."]} {"source": "While natural language processing systems often focus on a single language, multilingual transfer learning has the potential to improve performance, especially for low-resource languages. 
We introduce XLDA, cross-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language.XLDA enhances performance of all 14 tested languages of the cross-lingual natural language inference (XNLI) benchmark.With improvements of up to 4.8, training with XLDA achieves state-of-the-art performance for Greek, Turkish, and Urdu.XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language.On the SQuAD question answering task, we see that XLDA provides a 1.0 performance increase on the English evaluation set.Comprehensive experiments suggest that most languages are effective as cross-lingual augmentors, that XLDA is robust to a wide range of translation quality, and that XLDA is even more effective for randomly initialized models than for pretrained models.", "target": ["Die Übersetzung von Teilen der Eingabe während des Trainings kann die sprachübergreifende Leistung verbessern.", "In diesem Beitrag wird eine Methode zur sprachübergreifenden Datenerweiterung vorgeschlagen, um die Sprachinferenz und die Beantwortung von Fragen zu verbessern.", "In diesem Beitrag wird vorgeschlagen, sprachübergreifende Daten durch heuristische Swaps mit alignierten Übersetzungen zu ergänzen, wie es zweisprachige Menschen beim Code-Switching tun."]} {"source": "Training conditional generative latent-variable models is challenging in scenarios where the conditioning signal is very strong and the decoder is expressive enough to generate a plausible output given only the condition; the generative model tends to ignore the latent variable, suffering from posterior collapse. We find, and empirically show, that one of the major reasons behind posterior collapse is rooted in the way that generative models are conditioned, i.e., through concatenation of the latent variable and the condition. To mitigate this problem, we propose to explicitly make the latent variables depend on the condition by unifying the conditioning and latent variable sampling, thus coupling them so as to prevent the model from discarding the root of variations. To achieve this, we develop a conditional Variational Autoencoder architecture that learns a distribution not only of the latent variables, but also of the condition, the latter acting as prior on the former. Our experiments on the challenging tasks of conditional human motion prediction and image captioning demonstrate the effectiveness of our approach at avoiding posterior collapse. 
Video results of our approach are anonymously provided in http://bit.ly/iclr2020", "target": ["Wir schlagen einen bedingten variationalen Autoencoder vor, der den nachgelagerten Kollaps in Szenarien abmildert, in denen das Konditionierungssignal stark genug ist, damit ein ausdrucksstarker Decoder eine plausible Ausgabe daraus erzeugen kann.", "In diesem Beitrag werden stark konditionierte generative Modelle betrachtet und eine Zielfunktion sowie eine Parametrisierung der Variationsverteilung vorgeschlagen, so dass die latenten Variablen explizit von den Eingabebedingungen abhängen.", "In diesem Artikel wird argumentiert, dass ein nachgelagerter Kollaps wahrscheinlicher ist als bei einer einfachen VAE, wenn der Decoder auf der Verkettung von latenten Variablen und Hilfsinformationen beruht."]} {"source": "We propose a study of the stability of several few-shot learning algorithms subject to variations in the hyper-parameters and optimization schemes while controlling the random seed. We propose a methodology for testing for statistical differences in model performances under several replications.To study this specific design, we attempt to reproduce results from three prominent papers: Matching Nets, Prototypical Networks, and TADAM.We analyze on the miniImagenet dataset on the standard classification task in the 5-ways, 5-shots learning setting at test time.We find that the selected implementations exhibit stability across random seed, and repeats.", "target": ["Wir schlagen eine Studie über die Stabilität verschiedener Algorithmen zum Few-Shot Learning vor, die Variationen in den Hyperparametern und Optimierungsschemata unterworfen sind, während wir die zufällige Seed kontrollieren.", "In dieser Arbeit wird die Reproduzierbarkeit beim Few-Shot Learning untersucht."]} {"source": "We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning.In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach.Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial.To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation.We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice.Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods.", "target": ["Wir übersetzen eine Beschränkung der Suboptimalität von Repräsentationen in ein praktisches Trainingsziel im Kontext des hierarchischen Reinforcement Learning.", "Die Autoren schlagen einen neuartigen Ansatz für das Lernen einer Repräsentation für HRL vor und stellen eine interessante Verbindung zwischen dem Lernen der Repräsentation und der Begrenzung der Suboptimalität her, die zu einem gradientenbasierten Algorithmus führt.", "In dieser Arbeit wird ein Weg vorgeschlagen, mit Suboptimalität im Zusammenhang mit Lernrepräsentationen umzugehen, die sich auf die Suboptimalität der hierarchischen Regeln in Bezug auf die Aufgabenbelohnung beziehen."]} {"source": "Heuristic search research often deals with finding algorithms for offline planning which aim to minimize 
the number of expanded nodes or planning time.In online planning, algorithms for real-time search or deadline-aware search have been considered before.However, in this paper, we are interested in the problem of {\\em situated temporal planning} in which an agent's plan can depend on exogenous events in the external world, and thus it becomes important to take the passage of time into account during the planning process. Previous work on situated temporal planning has proposed simple pruning strategies, as well as complex schemes for a simplified version of the associated metareasoning problem. In this paper, we propose a simple metareasoning technique, called the crude greedy scheme, which can be applied in a situated temporal planner.Our empirical evaluation shows that the crude greedy scheme outperforms standard heuristic search based on cost-to-go estimates.", "target": ["Metareasoning in einem situierten temporalen Planer.", "Dieser Beitrag befasst sich mit dem Problem der situierten zeitlichen Planung und schlägt eine weitere Vereinfachung der zuvor von Shperberg vorgeschlagenen gierigen Strategien vor."]} {"source": "Neural networks are vulnerable to small adversarial perturbations.Existing literature largely focused on understanding and mitigating the vulnerability of learned models.In this paper, we demonstrate an intriguing phenomenon about the most popular robust training method in the literature, adversarial training: Adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution.Even a semantics-preserving transformations on the input data distribution can cause a significantly different robustness for the adversarial trained model that is both trained and evaluated on the new distribution.Our discovery of such sensitivity on data distribution is based on a study which disentangles the behaviors of clean accuracy and robust accuracy of the Bayes classifier.Empirical investigations further confirm our finding.We construct semantically-identical variants for MNIST and CIFAR10 respectively, and show that standardly trained models achieve comparable clean accuracies on them, but adversarially trained models achieve significantly different robustness accuracies.This counter-intuitive phenomenon indicates that input data distribution alone can affect the adversarial robustness of trained neural networks, not necessarily the tasks themselves.Lastly, we discuss the practical implications on evaluating adversarial robustness, and make initial attempts to understand this complex phenomenon.", "target": ["Die Robustheit trainierter PGD-Modelle reagiert empfindlich auf semantikerhaltende Transformationen von Bilddatensätzen, was bedeutet, dass die Bewertung robuster Lernalgorithmen in der Praxis heikel ist.", "Die Arbeit verdeutlicht den Unterschied zwischen sauberer und robuster Genauigkeit und zeigt, dass eine Änderung der Randverteilung der Eingabedaten P(x) unter Beibehaltung ihrer Semantik P(y|x) die Robustheit des Modells beeinflusst.", "In diesem Beitrag wird die Ursache für die mangelnde Robustheit von Klassifikatoren gegenüber Störungen der adversarial Eingaben bei l-inf begrenzten Störungen untersucht."]} {"source": "Many tasks in natural language processing involve comparing two sentences to compute some notion of relevance, entailment, or similarity.Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the inherent structure of the sentence.When sentence structure is used for comparison, it 
is obtained during a non-differentiable pre-processing step, leading to propagation of errors.We introduce a model of structured alignments between sentences, showing how to compare two sentences by matching their latent structures.Using a structured attention mechanism, our model matches possible spans in the first sentence to possible spans in the second sentence, simultaneously discovering the tree structure of each sentence and performing a comparison, in a model that is fully differentiable and is trained only on the comparison objective.We evaluate this model on two sentence comparison tasks: the Stanford natural language inference dataset and the TREC-QA dataset.We find that comparing spans results in superior performance to comparing words individually, and that the learned trees are consistent with actual linguistic structures.", "target": ["Übereinstimmung von Sätzen durch Lernen der latenten Konstituentenbaumstrukturen mit einer Variante des Inside-Outside Algorithmus, eingebettet in eine neuronale Netzwerkschicht.", "In diesem Beitrag wird ein strukturierter Aufmerksamkeitsmechanismus zur Berechnung von Alignment-Scores unter allen möglichen Abständen in zwei gegebenen Sätzen vorgestellt.", "In diesem Artikel wird ein Modell für strukturierte Übereinstimmungen zwischen Sätzen vorgeschlagen, um Sätze durch den Abgleich ihrer latenten Strukturen zu vergleichen."]} {"source": "Learning disentangled representation from any unlabelled data is a non-trivial problem.In this paper we propose Information Maximising Autoencoder (InfoAE) where the encoder learns powerful disentangled representation through maximizing the mutual information between the representation and given information in an unsupervised fashion.We have evaluated our model on MNIST dataset and achieved approximately 98.9 % test accuracy while using complete unsupervised training.", "target": ["Unüberwachtes Lernen der Entflechtungsdarstellung.", "Die Autoren stellen ein Framework vor, in dem ein Auto Encoder (E, D) so regularisiert wird, dass seine latente Repräsentation gegenseitige Informationen mit generierten latenten Raumrepräsentation teilt."]} {"source": "Effective training of neural networks requires much data.In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly.Data Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively.However standard data augmentation produces only limited plausible alternative data.Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation.The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items.As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data.We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well.We also show a DAGAN can enhance few-shot learning systems such as Matching Networks.We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data.In our experiments we can see over 13% increase in accuracy in the low-data regime experiments in Omniglot (from 69% to 82%), EMNIST (73.9% to 76%) and VGG-Face (4.5% to 12%); in Matching Networks for Omniglot we observe an increase of 0.5% (from 96.9% to 97.4%) and an increase of 1.8% in EMNIST (from 
59.5% to 61.3%).", "target": ["Bedingte GANs, die so trainiert werden, dass sie datenerweiterte Beispiele ihrer bedingten Eingaben erzeugen, die zur Verbesserung von Standard Klassifizierungs- und One-Shot-Lernsystemen wie Matching-Netzwerken und Pixel-Distanz verwendet werden.", "Die Autoren schlagen eine Methode zur Datenerweiterung vor, bei der die klassenübergreifenden Transformationen mit Hilfe von bedingten GAN auf einen niedrigdimensionalen latenten Raum abgebildet werden."]} {"source": "Answering questions about data can require understanding what parts of an input X influence the response Y. Finding such an understanding can be built by testing relationships between variables through a machine learning model.For example, conditional randomization tests help determine whether a variable relates to the response given the rest of the variables.However, randomization tests require users to specify test statistics.We formalize a class of proper test statistics that are guaranteed to select a feature when it provides information about the response even when the rest of the features are known.We show that f-divergences provide a broad class of proper test statistics.In the class of f-divergences, the KL-divergence yields an easy-to-compute proper test statistic that relates to the AMI.Questions of feature importance can be asked at the level of an individual sample. We show that estimators from the same AMI test can also be used to find important features in a particular instance.We provide an example to show that perfect predictive models are insufficient for instance-wise feature selection.We evaluate our method on several simulation experiments, on a genomic dataset, a clinical dataset for hospital readmission, and on a subset of classes in ImageNet.Our method outperforms several baselines in various simulated datasets, is able to identify biologically significant genes, can select the most important predictors of a hospital readmission event, and is able to identify distinguishing features in an image-classification task.", "target": ["Wir entwickeln eine einfache, auf Regression basierende, modellagnostische Methode zur Auswahl von Merkmalen, um datengenerierende Prozesse mit FDR-Kontrolle zu interpretieren, und übertreffen mehrere populäre Grundlinien auf mehreren simulierten, medizinischen und Bilddatensätzen.", "Diese Arbeit schlägt eine praktische Verbesserung des bedingten Randomisierungstests und eine neue Teststatistik vor, beweist, dass f-Divergenz eine mögliche Wahl ist, und zeigt, dass KL-Divergenz einige bedingte Verteilungen aufhebt.", "Diese Arbeit befasst sich mit dem Problem, nützliche Merkmale in einer Eingabe zu finden, die von einer variablen Antwort abhängig sind, selbst wenn alle anderen Eingabevariablen konditioniert sind.", "Eine modellunabhängige Methode zur Interpretation des Einflusses von Eingabemerkmalen auf die Reaktion eines Modells auf Maschinenebene bis hin zur Instanzebene sowie geeignete Teststatistiken für die modellunabhängige Merkmalsauswahl."]} {"source": "Supervised learning depends on annotated examples, which are taken to be the ground truth.But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk.Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise (the classic crowdsourcing problem).Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples.This raises two fundamental questions: (1) How 
can we best learn from noisy workers?(2) How should we allocate our labeling budget to maximize the performance of a classifier?We propose a new algorithm for jointly modeling labels and worker quality from noisy crowd-sourced data.The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality.Unlike previous approaches, even with only one annotation per example, our algorithm can estimate worker quality.We establish a generalization error bound for models learned with our algorithm and establish theoretically that it's better to label many examples once (vs less multiply) when worker quality exceeds a threshold.Experiments conducted on both ImageNet (with simulated noisy workers) and MS-COCO (using the real crowdsourced labels) confirm our algorithm's benefits.", "target": ["Ein neuer Ansatz zum Lernen eines Modells aus verrauschten Crowdsourced Annotations.", "In diesem Artikel wird eine Methode zum Lernen aus verrauschten Labels vorgeschlagen, die sich auf den Fall konzentriert, dass die Daten nicht redundant beschriftet sind, mit theoretischer und experimenteller Validierung.", "Diese Arbeit konzentriert sich auf das Problem des Lernens aus der Crowd, bei dem die gemeinsame Aktualisierung der Klassifikatorgewichte und der Konfusionsmatrizen der Arbeiter bei dem Schätzproblem mit seltenen Crowdsourced Labels helfen kann.", "Es wird ein überwachter Lernalgorithmus für die Modellierung der Qualität von Labels und Mitarbeitern vorgeschlagen und der Algorithmus wird verwendet, um zu untersuchen, wie viel Redundanz beim Crowdsourcing erforderlich ist und ob eine geringe Redundanz mit reichlich Störbeispielen zu besseren Labels führt."]} {"source": "Neural networks make mistakes.The reason why a mistake is made often remains a mystery.As such neural networks often are considered a black box.It would be useful to have a method that can give an explanation that is intuitive to a user as to why an image is misclassified.In this paper we develop a method for explaining the mistakes of a classifier model by visually showing what must be added to an image such that it is correctly classified.Our work combines the fields of adversarial examples, generative modeling and a correction technique based on difference target propagation to create a technique that creates explanations of why an image is misclassified.In this paper we explain our method and demonstrate it on MNIST and CelebA.This approach could aid in demystifying neural networks for a user.", "target": ["Neue Methode zur Erklärung, warum ein neuronales Netz ein Bild falsch klassifiziert hat.", "In diesem Artikel wird eine Methode zur Erklärung der Klassifizierungsfehler von neuronalen Netzen vorgeschlagen. 
", "Ziel ist es, die Klassifizierung neuronaler Netze besser zu verstehen und den latenten Raum eines variationalen Autoencoders zu erforschen und die Störungen des latenten Raums zu berücksichtigen, um eine korrekte Klassifizierung zu erhalten."]} {"source": "In the context of multi-task learning, neural networks with branched architectures have often been employed to jointly tackle the tasks at hand.Such ramified networks typically start with a number of shared layers, after which different tasks branch out into their own sequence of layers.Understandably, as the number of possible network configurations is combinatorially large, deciding what layers to share and where to branch out becomes cumbersome.Prior works have either relied on ad hoc methods to determine the level of layer sharing, which is suboptimal, or utilized neural architecture search techniques to establish the network design, which is considerably expensive.In this paper, we go beyond these limitations and propose a principled approach to automatically construct branched multi-task networks, by leveraging the employed tasks' affinities.Given a specific budget, i.e. number of learnable parameters, the proposed approach generates architectures, in which shallow layers are task-agnostic, whereas deeper ones gradually grow more task-specific.Extensive experimental analysis across numerous, diverse multi-tasking datasets shows that, for a given budget, our method consistently yields networks with the highest performance, while for a certain performance threshold it requires the least amount of learnable parameters.", "target": ["Eine Methode zur automatischen Konstruktion von verzweigten Multitasking-Netzwerken mit starker experimenteller Bewertung auf verschiedenen Multitasking-Datensätzen.", "In diesem Beitrag wird ein neuartiges Multi-Task Learning Framework mit sanften Parametern vorgeschlagen, das auf einer baumartigen Struktur basiert.", "In diesem Artikel wird eine Methode zur Ableitung der Architektur von Multitasking Netzwerken vorgestellt, um zu bestimmen, welcher Teil des Netzwerks von den verschiedenen Aufgaben gemeinsam genutzt werden sollte."]} {"source": "Typical recent neural network designs are primarily convolutional layers, but the tricks enabling structured efficient linear layers (SELLs) have not yet been adapted to the convolutional setting.We present a method to express the weight tensor in a convolutional layer using diagonal matrices, discrete cosine transforms (DCTs) and permutations that can be optimised using standard stochastic gradient methods.A network composed of such structured efficient convolutional layers (SECL) outperforms existing low-rank networks and demonstrates competitive computational efficiency.", "target": ["Es ist möglich, die Gewichtsmatrix in einem Convolutional Layer zu ersetzen, um sie als strukturierte, effiziente Schicht zu trainieren, die genauso gut funktioniert wie die Low-Rank Decomposition.", "Diese Arbeit wendet frühere Structured Efficient Linear Layers auf Convolutional Layers an und schlägt Structured Efficient Convolutional Layers als Ersatz für die ursprünglichen Convolutional Layers vor."]} {"source": "Blind document deblurring is a fundamental task in the field of document processing and restoration, having wide enhancement applications in optical character recognition systems, forensics, etc.Since this problem is highly ill-posed, supervised and unsupervised learning methods are well suited for this application.Using various techniques, extensive work has been 
done on natural-scene deblurring.However, these extracted features are not suitable for document images.We present SVDocNet, an end-to-end trainable U-Net based spatial recurrent neural network (RNN) for blind document deblurring where the weights of the RNNs are determined by different convolutional neural networks (CNNs).This network achieves state of the art performance in terms of both quantitative measures and qualitative results.", "target": ["Wir stellen SVDocNet vor, ein durchgängig trainierbares U-Netz auf der Basis eines räumlich rekurrenten neuronalen Netzes (RNN) für die blinde Entschlüsselung von Dokumenten."]} {"source": "In contrast to the monolithic deep architectures used in deep learning today for computer vision, the visual cortex processes retinal images via two functionally distinct but interconnected networks: the ventral pathway for processing object-related information and the dorsal pathway for processing motion and transformations.Inspired by this cortical division of labor and properties of the magno- and parvocellular systems, we explore an unsupervised approach to feature learning that jointly learns object features and their transformations from natural videos.We propose a new convolutional bilinear sparse coding model that (1) allows independent feature transformations and (2) is capable of processing large images.Our learning procedure leverages smooth motion in natural videos.Our results show that our model can learn groups of features and their transformations directly from natural videos in a completely unsupervised manner.The learned \"dynamic filters\" exhibit certain equivariance properties, resemble cortical spatiotemporal filters, and capture the statistics of transitions between video frames.Our model can be viewed as one of the first approaches to demonstrate unsupervised learning of primary \"capsules\" (proposed by Hinton and colleagues for supervised learning) and has strong connections to the Lie group approach to visual perception.", "target": ["Wir erweitern die bilineare Sparse Codierung und nutzen Videosequenzen, um dynamische Filter zu lernen."]} {"source": "Conventional out-of-distribution (OOD) detection schemes based on variational autoencoder or Random Network Distillation (RND) are known to assign lower uncertainty to the OOD data than the target distribution.In this work, we discover that such conventional novelty detection schemes are also vulnerable to the blurred images.Based on the observation, we construct a novel RND-based OOD detector, SVD-RND, that utilizes blurred images during training.Our detector is simple, efficient in test time, and outperforms baseline OOD detectors in various domains.Further results show that SVD-RND learns a better target distribution representation than the baselines.Finally, SVD-RND combined with geometric transform achieves near-perfect detection accuracy in CelebA domain.", "target": ["Wir schlagen einen neuartigen OOD-Detektor vor, der unscharfe Bilder als Negativbeispiele verwendet. 
Unser Modell erreicht eine signifikante OOD-Erkennungsleistung in verschiedenen Bereichen.", "In diesem Beitrag wird die Idee vorgestellt, unscharfe Bilder als Regularisierungsbeispiele zu verwenden, um die Leistung bei der Out-of-Distribution-Erkennung auf der Grundlage von Random Network Distillation zu verbessern.", "In diesem Artikel wird die Out-of-Distribution-Erkennung angegangen, indem RND auf Datenerweiterungen angewandt wird, indem ein Modell trainiert wird, um die Ausgaben eines Zufallsnetzwerks mit einer Erweiterung als Eingabe abzugleichen."]} {"source": "Training large deep neural networks on massive datasets is  computationally very challenging.There has been recent surge in interest in using large batch stochastic optimization methods to tackle this issue.The most prominent algorithm in this line of research is LARS, which by  employing layerwise adaptive learning rates trains ResNet on ImageNet in a few minutes.However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks.In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches.Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings.Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning.In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance.  By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1).", "target": ["Ein schneller Optimierer für allgemeine Anwendungen und Training in großen Batches.", "In dieser Arbeit haben die Autoren eine Studie zum Large-Batch-Training für BERT durchgeführt und erfolgreich ein BERT-Modell in 76 Minuten trainiert.", "In diesem Beitrag wird eine schichtweise Anpassungsstrategie entwickelt, die es ermöglicht, BERT-Modelle mit großen 32k-Minibatches im Vergleich zur Basislinie von 512 Batches zu trainieren."]} {"source": "Model-agnostic meta-learning (MAML) is known as a powerful meta-learning method.However, MAML is notorious for being hard to train because of the existence of two learning rates.Therefore, in this paper, we derive the conditions that inner learning rate $\alpha$ and meta-learning rate $\beta$ must satisfy for MAML to converge to minima with some simplifications.We find that the upper bound of $\beta$ depends on $ \alpha$, in contrast to the case of using the normal gradient descent method.Moreover, we show that the threshold of $\beta$ increases as $\alpha$ approaches its own upper bound.This result is verified by experiments on various few-shot tasks and architectures; specifically, we perform sinusoid regression and classification of Omniglot and MiniImagenet datasets with a multilayer perceptron and a convolutional neural network.Based on this outcome, we present a guideline for determining the learning rates: first, search for the largest possible $\alpha$; next, tune $\beta$ based on the chosen value of $\alpha$.", "target": ["Wir haben die Rolle von zwei Lernraten beim modellagnostischen Meta-Lernen bei der Konvergenz analysiert.", "Die Autoren haben das Problem der 
Instabilität der Optimierung in MAML durch die Untersuchung der beiden Lernraten angegangen.", "In dieser Arbeit wird eine Methode untersucht, mit der die beiden im MAML-Trainingsalgorithmus verwendeten Lernraten eingestellt werden können."]} {"source": "We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object (SVO) structures.Our model induces a joint function-specific word vector space, where vectors of e.g. plausible SVO compositions lie close together.The model retains information about word group membership even in the joint space, and can thereby effectively be applied to a number of tasks reasoning over the SVO structure.We show the robustness and versatility of the proposed framework by reporting state-of-the-art results on the tasks of estimating selectional preference (i.e., thematic fit) and event similarity.The results indicate that the combinations of representations learned with our task-independent model outperform task-specific architectures from prior work, while reducing the number of parameters by up to 95%.The proposed framework is versatile and holds promise to support learning function-specific representations beyond the SVO structures.", "target": ["Aufgabenunabhängiges neuronales Modell für das Lernen von Assoziationen zwischen zusammenhängenden Wortgruppen.", "In dem Papier wird eine Methode zum Training funktionsspezifischer Wortvektoren vorgeschlagen, bei der jedes Wort mit drei Vektoren in jeweils einer anderen Kategorie (Subjekt-Verb-Objekt) dargestellt wird.", "In diesem Beitrag wird ein neuronales Netz zum Erlernen funktionsspezifischer Wortrepräsentationen vorgeschlagen und der Vorteil gegenüber Alternativen aufgezeigt."]} {"source": "The fabrication of semiconductor involves etching process to remove selected areas from wafers.However, the measurement of etched structure in micro-graph heavily relies on time-consuming manual routines.Traditional image processing usually demands on large number of annotated data and the performance is still poor.We treat this challenge as segmentation problem and use deep learning approach to detect masks of objects in etched structure of wafer.Then, we use simple image processing to carry out automatic measurement on the objects.We attempt Generative Adversarial Network (GAN) to generate more data to overcome the problem of very limited dataset.We download 10 SEM (Scanning Electron Microscope) images of 4 types from Internet, based on which we carry out our experiments.Our deep learning based method demonstrates superiority over image processing approach with mean accuracy reaching over 96% for the measurements, compared with the ground truth.To the best of our knowledge, it is the first time that deep learning has been applied in semiconductor industry for automatic measurement.", "target": ["Einsatz einer Deep-Learning-Methode zur automatischen Vermessung von SEM-Bildern in der Halbleiterindustrie."]} {"source": "Generating and scheduling activities is particularly challenging when considering both consumptive resources and complex resource interactions such as time-dependent resource usage.We present three methods of determining valid temporal placement intervals for an activity in a temporally grounded plan in the presence of such constraints.We introduce the Max Duration and Probe algorithms which are sound, but incomplete, and the Linear algorithm which is sound and complete for linear rate resource consumption.We apply these techniques to 
the problem of scheduling awakes for a planetary rover where the awake durations are affected by existing activities.We demonstrate how the Probe algorithm performs competitively with the Linear algorithm given an advantageous problem space and well-defined heuristics.We show that the Probe and Linear algorithms outperform the Max Duration algorithm empirically.We then empirically present the runtime differences between the three algorithms.The Probe algorithm is currently base-lined for use in the onboard scheduler for NASA’s next planetary rover, the Mars 2020 rover.", "target": ["Dieses Papier beschreibt und analysiert drei Methoden zur Terminierung von Aktivitäten mit nicht fester Dauer bei Vorhandensein von verbrauchenden Ressourcen.", "In diesem Beitrag werden drei Ansätze für die Planung von Aktivitäten an Bord eines planetarischen Rovers unter Berücksichtigung von Ressourcenbeschränkungen vorgestellt."]} {"source": "A disentangled representation of a data set should be capable of recovering the underlying factors that generated it.One question that arises is whether using Euclidean space for latent variable models can produce a disentangled representation when the underlying generating factors have a certain geometrical structure.Take for example the images of a car seen from different angles.The angle has a periodic structure but a 1-dimensional representation would fail to capture this topology.How can we address this problem?The submissions presented for the first stage of the NeurIPS2019 Disentanglement Challenge consist of a Diffusion Variational Autoencoder ($\Delta$VAE) with a hyperspherical latent space which can for example recover periodic true factors.The training of the $\Delta$VAE is enhanced by incorporating a modified version of the Evidence Lower Bound (ELBO) for tailoring the encoding capacity of the posterior approximate.", "target": ["Beschreibung der Einreichung zur NeurIPS2019 Disentanglement Challenge basierend auf hypersphärischen variationalen Autoencodern."]} {"source": "Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence.Recently, classification-based methods were shown to achieve superior results on this task.In this work, we present a unifying view and propose an open-set method to relax current generalization assumptions.Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations.Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types.The strong performance of our method is extensively validated on multiple datasets from different domains.", "target": ["Eine Anomalie-Erkennung, die Zufallstransformationen zur Klassifizierung für die Verallgemeinerung auf Nicht-Bilddaten verwendet.", "In diesem Beitrag wird ein tiefes Verfahren zur Erkennung von Anomalien vorgeschlagen, das die jüngsten tiefen One-class Klassifizierungs- und transformationsbasierten Klassifizierungsansätze vereint.", "In dieser Arbeit wird ein Ansatz zur klassifikationsbasierten Erkennung von Anomalien für allgemeine Daten unter Verwendung der affinen Transformation y = Wx+b vorgeschlagen."]} {"source": "Recent improvements in large-scale language models have driven progress on automatic generation of syntactically and semantically consistent text for many real-world applications.Many of these advances leverage the availability of large corpora.While training on such corpora encourages the 
model to understand long-range dependencies in text, it can also result in the models internalizing the social biases present in the corpora.This paper aims to quantify and reduce biases exhibited by language models.Given a conditioning context (e.g. a writing prompt) and a language model, we analyze if (and how) the sentiment of the generated text is affected by changes in values of sensitive attributes (e.g. country names, occupations, genders, etc.) in the conditioning context, a.k.a. counterfactual evaluation.We quantify these biases by adapting individual and group fairness metrics from the fair machine learning literature.Extensive evaluation on two different corpora (news articles and Wikipedia) shows that state-of-the-art Transformer-based language models exhibit biases learned from data.We propose embedding-similarity and sentiment-similarity regularization methods that improve both individual and group fairness metrics without sacrificing perplexity and semantic similarity---a positive step toward development and deployment of fairer language models for real-world applications.", "target": ["Wir verringern die Verzerrung der Stimmung auf der Grundlage einer kontrafaktischen Bewertung der Texterstellung mithilfe von Sprachmodellen.", "In diesem Artikel wird die semantische Verzerrung in Sprachmodellen gemessen, wie sie sich in dem von den Modellen erzeugten Text widerspiegelt, und es werden andere objektive Begriffe zu den üblichen Sprachmodellierungszielen hinzugefügt, um die Verzerrung zu verringern.", "In diesem Papier wird vorgeschlagen, die Verzerrung in vortrainierten Sprachmodellen zu bewerten, indem ein festes Sentiment-System verwendet und verschiedene Präfix-Vorlagen getestet werden.", "Eine Methode, die auf semantischer Ähnlichkeit basiert, und eine Methode, die auf sentimentaler Ähnlichkeit basiert, um die neuronalen Sprachmodelle, die aus großen Datensätzen trainiert wurden, zu entzerren."]} {"source": "Topic modeling of text documents is one of the most important tasks in representation learning.In this work, we propose iTM-VAE, which is a Bayesian nonparametric (BNP) topic model with variational auto-encoders.On one hand, as a BNP topic model, iTM-VAE potentially has infinite topics and can adapt the topic number to data automatically.On the other hand, different with the other BNP topic models, the inference of iTM-VAE is modeled by neural networks, which has rich representation capacity and can be computed in a simple feed-forward manner.Two variants of iTM-VAE are also proposed in this paper, where iTM-VAE-Prod models the generative process in products-of-experts fashion for better performance and iTM-VAE-G places a prior over the concentration parameter such that the model can adapt a suitable concentration parameter to data automatically.Experimental results on 20News and Reuters RCV1-V2 datasets show that the proposed models outperform the state-of-the-arts in terms of perplexity, topic coherence and document retrieval tasks.Moreover, the ability of adjusting the concentration parameter to data is also confirmed by experiments.", "target": ["Ein nichtparametrisches Bayes'sches Themenmodell mit variationalen Autoencodern, das bei öffentlichen Benchmarks den Stand der Technik in Bezug auf Komplexität, Themenkohärenz und Abfrageaufgaben erreicht.", "In diesem Beitrag wird ein unendliches Themenmodell mit variationalen Autoencodern konstruiert, indem der Stick-breaking variationale Autoencoder von Nalisnick & Smith mit latenter Dirichlet-Zuweisung und mehreren in 
Miao verwendeten Inferenztechniken kombiniert wird."]} {"source": "Knowledge Distillation (KD) is a widely used technique in recent deep learning research to obtain small and simple models whose performance is on a par with their large and complex counterparts.Standard Knowledge Distillation tends to be time-consuming because of the training time spent to obtain a teacher model that would then provide guidance for the student model.It might be possible to cut short the time by training a teacher model on the fly, but it is not trivial to have such a high-capacity teacher that gives quality guidance to student models this way.To improve this, we present a novel framework of Knowledge Distillation exploiting dark knowledge from the whole training set.In this framework, we propose a simple and effective implementation named Distillation by Utilizing Peer Samples (DUPS) in one generation.We verify our algorithm on numerous experiments.Compared with standard training on modern architectures, DUPS achieves an average improvement of 1%-2% on various tasks with nearly zero extra cost.Considering some typical Knowledge Distillation methods which are much more time-consuming, we also get comparable or even better performance using DUPS.", "target": ["Wir stellen einen neuen Rahmen für die Wissensdestillation vor, der Peer-Beispiele als Lehrer verwendet.", "Vorschlagen einer Methode zur Verbesserung der Effektivität der Wissensdestillation, indem die verwendeten Bezeichnungen abgeschwächt werden und ein Datensatz anstelle eines einzelnen Beispiels verwendet wird.", "In dieser Arbeit wird vorgeschlagen, den zusätzlichen Rechenaufwand für das Training durch Wissensdestillation zu bewältigen, indem auf der kürzlich vorgeschlagenen Snapshot-Destillationstechnik aufgebaut wird."]} {"source": "We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives—policies that are executed for large numbers of timesteps.Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies.We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks.We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies.We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes.We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.", "target": ["Erlernen hierarchischer Teilstrategien durch Ende-zu-Ende Training über eine Verteilung von Aufgaben", "Die Autoren betrachten das Problem des Lernens einer sinnvollen Menge von Subregeln, die zwischen Aufgaben geteilt werden können, um das Lernen auf neue Aufgaben aus der Aufgabenverteilung zu starten. 
", "In dieser Arbeit wird eine neuartige Methode zur Induzierung einer zeitlichen hierarchischen Struktur in einer spezialisierten Multi-Task-Umgebung vorgeschlagen."]} {"source": "This paper proposes a new model for document embedding.Existing approaches either require complex inference or use recurrent neural networks that are difficult to parallelize.We take a different route and use recent advances in language modeling to develop a convolutional neural network embedding model.This allows us to train deeper architectures that are fully parallelizable.Stacking layers together increases the receptive filed allowing each successive layer to model increasingly longer range semantic dependences within the document.Empirically we demonstrate superior results on two publicly available benchmarks.Full code will be released with the final version of this paper.", "target": ["Convolutional Neural Network Modell für die unbeaufsichtigte Einbettung von Dokumenten.", "Es wird ein neues Modell für die allgemeine Aufgabe der Induktion von Dokumentrepräsentationen (Einbettungen) vorgestellt, das eine CNN-Architektur zur Verbesserung der Recheneffizienz verwendet.", "In diesem Artikel wird vorgeschlagen, CNNs mit einem Skip-Gram-ähnlichen Ziel als schnellen Weg zur Ausgabe von Dokumenteneinbettungen zu verwenden."]} {"source": "We prove bounds on the generalization error of convolutional networks.The bounds are in terms of the training loss, the number ofparameters, the Lipschitz constant of the loss and the distance fromthe weights to the initial weights. They are independent of thenumber of pixels in the input, and the height and width of hiddenfeature maps. We present experiments with CIFAR-10, along with varyinghyperparameters of a deep convolutional network, comparing our boundswith practical generalization gaps.", "target": ["Wir beweisen Verallgemeinerungsgrenzen für Convolutional neuronale Netze, die die Gewichtskopplung berücksichtigen.", "Untersucht die Verallgemeinerungsfähigkeit von CNNs und verbessert die oberen Schranken des Verallgemeinerungsfehlers, wobei eine Korrelation zwischen dem Verallgemeinerungsfehler von gelernten CNNs und dem dominanten Term der oberen Schranke gezeigt wird.", "In diesem Papier wird eine Verallgemeinerungsschranke für neuronale Convolutional Neural Networks vorgestellt, die auf der Anzahl der Parameter, der Lipschitz-Konstante und dem Abstand der endgültigen Gewichte von der Initialisierung basiert."]} {"source": "MobileNets family of computer vision neural networks have fueled tremendous progress in the design and organization of resource-efficient architectures in recent years.New applications with stringent real-time requirements in highly constrained devices require further compression of MobileNets-like already computeefficient networks.Model quantization is a widely used technique to compress and accelerate neural network inference and prior works have quantized MobileNets to 4 − 6 bits albeit with a modest to significant drop in accuracy.While quantization to sub-byte values (i.e. 
precision ≤ 8 bits) has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs.Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization, we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets.The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets.Using this proposed quantization method, we quantized a substantial portion of weight filters of MobileNets to ternary values resulting in 27.98% savings in energy, and a 51.07% reduction in the model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.", "target": ["Zweifache Einsparungen bei der Modellgröße, 28 % Energieeinsparung für MobileNets auf ImageNet ohne Genauigkeitsverlust durch hybride Schichten, die aus konventionellen Filtern mit voller Genauigkeit und ternären Filtern bestehen.", "Der Schwerpunkt liegt auf der Quantisierung der MobileNets-Architektur auf ternäre Werte, wodurch der Platz- und Rechenbedarf gesenkt wird, um neuronale Netze energieeffizienter zu machen.", "Das Papier schlägt eine schichtweise hybride Filter Bank vor, die nur einen Teil der Convolutional Filter auf ternäre Werte für die MobileNets Architektur quantisiert."]} {"source": "Performing controlled experiments on noisy data is essential in thoroughly understanding deep learning across a spectrum of noise levels.Due to the lack of suitable datasets, previous research have only examined deep learning on controlled synthetic noise, and real-world noise has never been systematically studied in a controlled setting.To this end, this paper establishes a benchmark of real-world noisy labels at 10 controlled noise levels.As real-world noise possesses unique properties, to understand the difference, we conduct a large-scale study across a variety of noise levels and types, architectures, methods, and training settings.Our study shows that: (1) Deep Neural Networks (DNNs) generalize much better on real-world noise.(2) DNNs may not learn patterns first on real-world noisy data.(3) When networks are fine-tuned, ImageNet architectures generalize well on noisy data.(4) Real-world noise appears to be less harmful, yet it is more difficult for robust DNN methods to improve.(5) Robust learning methods that work well on synthetic noise may not work as well on real-world noise, and vice versa.We hope our benchmark, as well as our findings, will facilitate deep learning research on noisy data.", "target": ["Wir haben einen Benchmark für kontrolliertes echtes Stören erstellt und einige interessante Erkenntnisse über gestörte Daten in der realen Welt gewonnen.", "In diesem Beitrag werden 6 bestehende Methoden zum Erlernen von gestörten Labeln in zwei Trainingseinstellungen verglichen: von Grund auf und mit Fine-Tuning.", "Die Autoren erstellen einen großen Datensatz und einen Benchmark für kontrolliertes echte Störungen, um kontrollierte Experimente mit gestörten Daten beim Deep Learning durchzuführen."]} {"source": "Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and 
bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation.Since an RNA's function depends on its structural properties, the RNA Design problem is to find an RNA sequence which satisfies given structural constraints.Here, we propose a new algorithm for the RNA Design problem, dubbed LEARNA.LEARNA uses deep reinforcement learning to train a policy network to sequentially design an entire RNA sequence given a specified target structure.By meta-learning across 65000 different RNA Design tasks for one hour on 20 CPU cores, our extension Meta-LEARNA constructs an RNA Design policy that can be applied out of the box to solve novel RNA Design tasks.Methodologically, for what we believe to be the first time, we jointly optimize over a rich space of architectures for the policy network, the hyperparameters of the training procedure and the formulation of the decision process.Comprehensive empirical results on two widely-used RNA Design benchmarks, as well as a third one that we introduce, show that our approach achieves new state-of-the-art performance on the former while also being orders of magnitudes faster in reaching the previous state-of-the-art performance.In an ablation study, we analyze the importance of our method's different components.", "target": ["Wir lernen, das RNA-Design Problem mit Reinforcement Learning unter Verwendung von Meta-Learning und AutoML Ansätzen zu lösen.", "Anwendung der Policy-Gradienten-Optimierung zur Generierung von RNA-Sequenzen, die sich in eine Ziel-Sekundärstruktur falten, was zu einer deutlichen Verbesserung der Genauigkeit und der Laufzeit führte. "]} {"source": "Pruning is a popular technique for compressing a neural network: a large pre-trained network is fine-tuned while connections are successively removed.However, the value of pruning has largely evaded scrutiny.In this extended abstract, we examine residual networks obtained through Fisher-pruning and make two interesting observations.First, when time-constrained, it is better to train a simple, smaller network from scratch than prune a large network.Second, it is the architectures obtained through the pruning process --- not the learnt weights --- that prove valuable.Such architectures are powerful when trained from scratch.Furthermore, these architectures are easy to approximate without any further pruning: we can prune once and obtain a family of new, scalable network architectures for different memory requirements.", "target": ["Das Trainieren kleiner Netze schlägt das Pruning, aber das Pruning findet gute kleine Netze zum Trainieren, die leicht zu kopieren sind."]} {"source": "Supervised learning problems---particularly those involving social data---are often subjective.That is, human readers, looking at the same data, might come to legitimate but completely different conclusions based on their personal experiences.Yet in machine learning settings feedback from multiple human annotators is often reduced to a single ``ground truth'' label, thus hiding the true, potentially rich and diverse interpretations of the data found across the social spectrum.We explore the rewards and challenges of discovering and learning representative distributions of the labeling opinions of a large human population.A major, critical cost to this approach is the number of humans needed to provide enough labels not only to obtain representative samples but also to train a machine to predict representative distributions on unlabeled 
data.We propose aggregating label distributions over, not just individuals, but also data items, in order to maximize the costs of humans in the loop.We test different aggregation approaches on state-of-the-art deep learning models.Our results suggest that careful label aggregation methods can greatly reduce the number of samples needed to obtain representative distributions.", "target": ["Wir untersuchen das Problem des Lernens, die zugrundeliegende Vielfalt von Überzeugungen in überwachten Lernbereichen vorherzusagen."]} {"source": "Recent advancements in deep learning techniques such as Convolutional Neural Networks(CNN) and Generative Adversarial Networks(GAN) have achieved breakthroughs in the problem of semantic image inpainting, the task of reconstructing missing pixels in given images.While much more effective than conventional approaches, deep learning models require large datasets and great computational resources for training, and inpainting quality varies considerably when training data vary in size and diversity.To address these problems, we present in this paper a inpainting strategy of \\textit{Comparative Sample Augmentation}, which enhances the quality of training set by filtering out irrelevant images and constructing additional images using information about the surrounding regions of the images to be inpainted.Experiments on multiple datasets demonstrate that our method extends the applicability of deep inpainting models to training sets with varying sizes, while maintaining inpainting quality as measured by qualitative and quantitative metrics for a large class of deep models, with little need for model-specific consideration.", "target": ["Wir haben eine Strategie eingeführt, die das Inpainting von Modellen auf Datensätzen unterschiedlicher Größe ermöglicht.", "Hilfe beim Bild Inpainting mit GANs durch Verwendung eines vergleichenden Augmentierungsfilters und Hinzufügen von Zufallsstörungen zu jedem Pixel."]} {"source": "Generative adversarial networks (GANs) are a family of generative models that do not minimize a single training criterion.Unlike other generative models, the data distribution is learned via a game between a generator (the generative model) and a discriminator (a teacher providing training signal) that each minimize their own cost.GANs are designed to reach a Nash equilibrium at which each player cannot reduce their cost without changing the other players’ parameters.One useful approach for the theory of GANs is to show that a divergence between the training distribution and the model distribution obtains its minimum value at equilibrium.Several recent research directions have been motivated by the idea that this divergence is the primary guide for the learning process and that every step of learning should decrease the divergence.We show that this view is overly restrictive.During GAN training, the discriminator provides learning signal in situations where the gradients of the divergences between distributions would not be useful.We provide empirical counterexamples to the view of GAN training as divergence minimization.Specifically, we demonstrate that GANs are able to learn distributions in situations where the divergence minimization point of view predicts they would fail.We also show that gradient penalties motivated from the divergence minimization perspective are equally helpful when applied in other contexts in which the divergence minimization perspective does not predict they would be helpful.This contributes to a growing body of evidence 
that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.", "target": ["Wir finden Belege dafür, dass die Minimierung der Divergenz möglicherweise keine genaue Charakterisierung des GAN-Trainings ist.", "Der Beitrag zielt darauf ab, empirische Beweise dafür zu liefern, dass die Theorie der Divergenzminimierung eher ein Werkzeug ist, um das Ergebnis des Trainings von GANs zu verstehen, als eine notwendige Bedingung, die während des Trainings selbst durchgesetzt werden muss.", "Diese Arbeit untersucht nicht-sättigende GANs und die Auswirkungen von zwei bestraften Gradienten-Ansätzen, unter Berücksichtigung mehrerer Gedankenexperimente, um Beobachtungen zu demonstrieren und sie an realen Datenexperimenten zu validieren."]} {"source": "Measuring Mutual Information (MI) between high-dimensional, continuous, random variables from observed samples has wide theoretical and practical applications.Recent works have developed accurate MI estimators through provably low-bias approximations and tight variational lower bounds assuming abundant supply of samples, but require an unrealistic number of samples to guarantee statistical significance of the estimation.In this work, we focus on improving data efficiency and propose a Data-Efficient MINE Estimator (DEMINE) that can provide a tight lower confident interval of MI under limited data, through adding cross-validation to the MINE lower bound (Belghazi et al., 2018).Hyperparameter search is employed and a novel meta-learning approach with task augmentation is developed to increase robustness to hyperparamters, reduce overfitting and improve accuracy.With improved data-efficiency, our DEMINE estimator enables statistical testing of dependency at practical dataset sizes.We demonstrate the effectiveness of DEMINE on synthetic benchmarks and a real world fMRI dataset, with application of inter-subject correlation analysis.", "target": ["Ein neuer und praktischer statistischer Test der Abhängigkeit unter Verwendung neuronaler Netze, der an synthetischen und realen fMRI-Datensätzen getestet wurde.", "Es wird eine auf neuronalen Netzen basierende Schätzung von gegenseitigen Informationen vorgeschlagen, die zuverlässig mit kleinen Datensätzen arbeiten kann, wobei die Komplexität der Stichprobe durch Entkopplung des Netzwerk-Lernproblems und des Schätzproblems reduziert wird."]} {"source": "Language and vision are processed as two different modal in current work for image captioning.However, recent work on Super Characters method shows the effectiveness of two-dimensional word embedding, which converts text classification problem into image classification problem.In this paper, we propose the SuperCaptioning method, which borrows the idea of two-dimensional word embedding from Super Characters method, and processes the information of language and vision together in one single CNN model.The experimental results on Flickr30k data shows the proposed method gives high quality image captions.An interactive demo is ready to show at the workshop.", "target": ["Bildbeschriftung durch zweidimensionale Worteinbettung."]} {"source": "Determining the optimal order in which data examples are presented to Deep Neural Networks during training is a non-trivial problem.However, choosing a non-trivial scheduling method may drastically improve convergence.In this paper, we propose a Self-Paced Learning (SPL)-fused Deep Metric Learning (DML) framework, which we call Learning 
Embeddings for Adaptive Pace (LEAP).Our method parameterizes mini-batches dynamically based on the \textit{easiness} and \textit{true diverseness} of the sample within a salient feature representation space.In LEAP, we train an \textit{embedding} Convolutional Neural Network (CNN) to learn an expressive representation space by adaptive density discrimination using the Magnet Loss.The \textit{student} CNN classifier dynamically selects samples to form a mini-batch based on the \textit{easiness} from cross-entropy losses and \textit{true diverseness} of examples from the representation space sculpted by the \textit{embedding} CNN.We evaluate LEAP using deep CNN architectures for the task of supervised image classification on MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and SVHN.We show that the LEAP framework converges faster with respect to the number of mini-batch updates required to achieve a comparable or better test performance on each of the datasets.", "target": ["LEAP kombiniert die Stärken des adaptiven Sampling mit denen des Mini-Batch-Online-Lernens und des adaptiven Repräsentationslernens, um eine repräsentative, selbstgesteuerte Strategie in einem Ende-zu-Ende DNN Trainingsprotokoll zu formulieren. ", "Es wird eine Methode zur Erstellung von Minibatches für ein Schülernetz vorgestellt, bei der ein zweiter erlernter Repräsentationsraum zur dynamischen Auswahl von Beispielen nach ihrer \"Einfachheit und wahren Vielfalt\" verwendet wird.", "Experimente zur Klassifizierungsgenauigkeit auf den Datensätzen MNIST, FashionMNIST und CIFAR-10, bei denen eine Repräsentation mit einer Minibatch-Auswahl im Stil des Curriculum-Lernens in einem Ende-zu-Ende Rahmen gelernt wird."]} {"source": "Conventional deep reinforcement learning typically determines an appropriate primitive action at each timestep, which requires enormous amount of time and effort for learning an effective policy, especially in large and complex environments.To deal with the issue fundamentally, we incorporate macro actions, defined as sequences of primitive actions, into the primitive action space to form an augmented action space.The problem lies in how to find an appropriate macro action to augment the primitive action space. The agent using a proper augmented action space is able to jump to a farther state and thus speed up the exploration process as well as facilitate the learning procedure.In previous researches, macro actions are developed by mining the most frequently used action sequences or repeating previous actions.However, the most frequently used action sequences are extracted from a past policy, which may only reinforce the original behavior of that policy.On the other hand, repeating actions may limit the diversity of behaviors of the agent.Instead, we propose to construct macro actions by a genetic algorithm, which eliminates the dependency of the macro action derivation procedure from the past policies of the agent. Our approach appends a macro action to the primitive action space once at a time and evaluates whether the augmented action space leads to promising performance or not. 
We perform extensive experiments and show that the constructed macro actions are able to speed up the learning process for a variety of deep reinforcement learning methods.Our experimental results also demonstrate that the macro actions suggested by our approach are transferable among deep reinforcement learning methods and similar environments.We further provide a comprehensive set of ablation analysis to validate our methodology.", "target": ["Wir schlagen vor, Makro-Aktionen mit Hilfe eines genetischen Algorithmus zu konstruieren, der die Abhängigkeit der Makro-Aktionsableitung von den vergangenen Strategien des Agenten eliminiert.", "Dieses Papier schlägt einen genetischen Algorithmus für die Konstruktion von Makro-Aktionen für Deep Reinforcement Learning vor, indem eine Makro-Aktion an den primitiven Aktionsraum angehängt wird."]} {"source": "A key problem in neuroscience and life sciences more generally is that the data generation process is often best thought of as a hierarchy of dynamic systems.One example of this is in-vivo calcium imaging data, where observed calcium transients are driven by a combination of electro-chemical kinetics where hypothesized trajectories around manifolds determining the frequency of these transients.A recent approach using sequential variational auto-encoders demonstrated it was possible to learn the latent dynamic structure of reaching behaviour from spiking data modelled as a Poisson process.Here we extend this approach using a ladder method to infer the spiking events driving calcium transients along with the deeper latent dynamic system.We show strong performance of this approach on a benchmark synthetic dataset against a number of alternatives.", "target": ["Wir schlagen eine Erweiterung von LFADS vor, die in der Lage ist, Spike Trains abzuleiten, um Kalzium-Fluoreszenzspuren mit hierarchischen VAEs zu rekonstruieren."]} {"source": "In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs.There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal.In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora.Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation.Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation.The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively.Our implementation is released as an open source project.", "target": ["Wir stellen die erste erfolgreiche Methode vor, um neuronale maschinelle Übersetzung auf unüberwachte Weise zu trainieren, indem wir nichts anderes als einsprachige Korpora verwenden", "Die Autoren stellen ein Modell für unüberwachte NMT vor, das keine parallelen Korpora zwischen den beiden interessierenden Sprachen erfordert. 
", "Dies ist eine Arbeit über unüberwachte MT, die eine Standardarchitektur mit Worteinbettungen in einem gemeinsamen Einbettungsraum nur mit zweisprachigen Wortpapieren und einem Encoder-Decoder trainiert, der mit einsprachigen Daten trainiert wird."]} {"source": "We describe a new training methodology for generative adversarial networks.The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses.This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2.We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10.Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator.Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation.As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "target": ["Wir trainieren generative adversarial Netze auf progressive Weise und können so hochauflösende Bilder mit hoher Qualität erzeugen.", "Einführung von progressivem Wachstum und einer einfachen parameterfreien statistischen Minibatch-Zusammenfassung zur Verwendung beim GAN-Training, um die Synthese von hochauflösenden Bildern zu ermöglichen."]} {"source": "Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance.DeepSphere, a method based on a graph representation of the discretized sphere, strikes a controllable balance between these two desiderata.This contribution is twofold.First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of pixels and neighbors.Second, we evaluate DeepSphere on relevant problems.Experiments show state-of-the-art performance and demonstrates the efficiency and flexibility of this formulation.Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay.", "target": ["Ein graphbasiertes sphärisches CNN, das ein interessantes Gleichgewicht von Kompromissen für eine Vielzahl von Anwendungen bietet.", "Kombiniert bestehende CNN-Frameworks, die auf der Diskretisierung einer Kugel als Graph basieren, um ein Konvergenzergebnis zu zeigen, das mit der Rotationsäquivalenz auf einer Kugel zusammenhängt.", "Die Autoren verwenden die bestehende Graph-CNN-Formulierung und eine Pooling-Strategie, die hierarchische Pixelierungen der Kugel ausnutzt, um aus der diskretisierten Kugel zu lernen."]} {"source": "The notion of the stationary equilibrium ensemble has played a central role in statistical mechanics.In machine learning as well, training serves as generalized equilibration that drives the probability distribution of model parameters toward stationarity.Here, we derive stationary fluctuation-dissipation relations that link measurable quantities and hyperparameters in the stochastic gradient descent algorithm.These relations hold exactly for any stationary state and can in particular be used to adaptively set training schedule.We can further use the relations to efficiently extract information pertaining to a loss-function landscape such as the magnitudes of its Hessian and anharmonicity.Our claims are 
empirically verified.", "target": ["Wir beweisen Fluktuations-Dissipations-Beziehungen für SGD, die verwendet werden können, um (i) Lernraten adaptiv festzulegen und (ii) Verlustflächen zu untersuchen.", "Die Konzepte des Artikels arbeiten im zeitdiskreten Formalismus, verwenden die Master-Gleichung und sind nicht auf eine lokal quadratische Annäherung der Verlustfunktion oder auf Gaußsche Annahmen für SGD-Störungen angewiesen. ", "Die Autoren leiten die stationären Fluktuations-Dissipations-Relationen ab, die messbare Größen und Hyperparameter in SGD miteinander verknüpfen, und verwenden die Relationen, um den Trainingsplan adaptiv festzulegen und die Verlustfunktionslandschaft zu analysieren."]} {"source": "Recurrent neural networks (RNNs) are difficult to train on sequence processing tasks, not only because input noise may be amplified through feedback, but also because any inaccuracy in the weights has similar consequences as input noise.We describe a method for denoising the hidden state during training to achieve more robust representations thereby improving generalization performance.Attractor dynamics are incorporated into the hidden state to `clean up' representations at each step of a sequence.The attractor dynamics are trained through an auxillary denoising loss to recover previously experienced hidden states from noisy versions of those states.This state-denoised recurrent neural network (SDRNN) performs multiple steps of internal processing for each external sequence step.On a range of tasks, we show that the SDRNN outperforms a generic RNN as well as a variant of the SDRNN with attractor dynamics on the hidden state but without the auxillary loss.We argue that attractor dynamics---and corresponding connectivity constraints---are an essential component of the deep learning arsenal and should be invoked not only for recurrent networks but also for improving deep feedforward nets and intertask transfer.", "target": ["Wir schlagen einen Mechanismus zum Entfernen von Störfaktoren des internen Zustands eines RNN vor, um die Generalisierungsleistung zu verbessern."]} {"source": "We consider reinforcement learning in input-driven environments, where an exogenous, stochastic input process affects the dynamics of the system.Input processes arise in many applications, including queuing systems, robotics control with disturbances, and object tracking.Since the state dynamics and rewards depend on the input process, the state alone provides limited information for the expected future returns.Therefore, policy gradient methods with standard state-dependent baselines suffer high variance during training.We derive a bias-free, input-dependent baseline to reduce this variance, and analytically show its benefits over state-dependent baselines.We then propose a meta-learning approach to overcome the complexity of learning a baseline that depends on a long sequence of inputs.Our experimental results show that across environments from queuing systems, computer networks, and MuJoCo robotic locomotion, input-dependent baselines consistently improve training stability and result in better eventual policies.", "target": ["Für Umgebungen, die teilweise durch externe Eingabeprozesse diktiert werden, leiten wir eine eingabeabhängige Basislinie ab, die nachweislich die Varianz für Policy-Gradienten Methoden reduziert und die Strategie-Leistung in einem breiten Spektrum von RL Aufgaben verbessert.", "Die Autoren betrachten das Problem des Lernens in eingabegesteuerten Umgebungen, zeigen, wie das 
PG-Theorem immer noch für einen eingabebewussten Kritiker gilt, und zeigen, dass eingabeabhängige Basislinien am besten für Vermutungen mit diesem Kritiker zu verwenden sind.", "In diesem Beitrag wird der Begriff der eingabeabhängigen Basislinien in Policy-Gradienten Methoden in RL eingeführt und es werden verschiedene Methoden zum Trainieren der eingabeabhängigen Basislinienfunktion vorgeschlagen, um die Varianz von Störungen durch externe Faktoren zu verringern."]} {"source": "Deep networks have shown great performance in classification tasks.However, the parameters learned by the classifier networks usually discard stylistic information of the input, in favour of information strictly relevant to classification.We introduce a network that has the capacity to do both classification and reconstruction by adding a \"style memory\" to the output layer of the network.We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification and reconstruction losses.The generative capacity of our network demonstrates that the combination of style-memory neurons with the classifier neurons yield good reconstructions of the inputs when the classification is correct.We further investigate the nature of the style memory, and how it relates to composing digits and letters.", "target": ["Durch die Erweiterung der obersten Schicht eines Klassifizierungsnetzes mit einem Stylespeicher kann es generativ arbeiten.", "In dieser Arbeit wird vorgeschlagen, ein neuronales Klassifizierungsnetz nicht nur für die Klassifizierung zu trainieren, sondern auch für die Rekonstruktion einer Repräsentation seiner Eingabe, um die Klasseninformation aus dem Erscheinungsbild zu faktorisieren.", "Die Arbeit schlägt vor, einen Autoencoder so zu trainieren, dass die Repräsentation der mittleren Schicht aus dem Klassenlabel des Inputs und einer versteckten Vektordarstellung besteht."]} {"source": "Routing models, a form of conditional computation where examples are routed through a subset of components in a larger network, have shown promising results in recent works.Surprisingly, routing models to date have lacked important properties, such as architectural diversity and large numbers of routing decisions.Both architectural diversity and routing depth can increase the representational power of a routing network.In this work, we address both of these deficiencies.We discuss the significance of architectural diversity in routing models, and explain the tradeoffs between capacity and optimization when increasing routing depth.In our experiments, we find that adding architectural diversity to routing models significantly improves performance, cutting the error rates of a strong baseline by 35% on an Omniglot setup.However, when scaling up routing depth, we find that modern routing techniques struggle with optimization.We conclude by discussing both the positive and negative results, and suggest directions for future research.", "target": ["Pro-Beispiel Routing Modelle profitieren von der architektonischen Vielfalt, haben aber immer noch Probleme bei der Skalierung auf eine große Anzahl von Routing Entscheidungen.", "Erweitert die Art der architektonischen Einheit, die dem Router bei jeder Entscheidung zur Verfügung steht, und skaliert auf tiefere Netze, um die modernste Leistung auf Omniglot zu erreichen. 
", "Diese Arbeit erweitert Routing Netzwerke, um verschiedene Architekturen über geroutete Module hinweg zu nutzen."]} {"source": "Across numerous applications, forecasting relies on numerical solvers for partial differential equations (PDEs).Although the use of deep-learning techniques has been proposed, the uses have been restricted by the fact the training data are obtained using PDE solvers.Thereby, the uses were limited to domains, where the PDE solver was applicable, but no further. We present methods for training on small domains, while applying the trained models on larger domains, with consistency constraints ensuring the solutions are physically meaningful even at the boundary of the small domains.We demonstrate the results on an air-pollution forecasting model for Dublin, Ireland.", "target": ["Wir stellen RNNs für das Training von Ersatzmodellen von PDEs vor, bei denen Konsistenzbeschränkungen sicherstellen, dass die Lösungen physikalisch sinnvoll sind, selbst wenn beim Training viel kleinere Bereiche verwendet werden als das trainierte Modell angewendet wird."]} {"source": "We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Decent (OMD) for training Wasserstein GANs.Recent theoretical results have shown that optimistic mirror decent (OMD) can enjoy faster regret rates in the context of zero-sum games.WGANs is exactly a context of solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror decent addresses the limit cycling problem in training WGANs.We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle.We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum.We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences.We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution, than models trained with GD variants.Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam.We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.", "target": ["Wir schlagen die Verwendung von optimistischen Spiegeln vor, um Zyklusprobleme beim Training von GANs anzugehen. 
Außerdem stellen wir den Optimistischen Adam-Algorithmus vor", "In dieser Arbeit wird die Verwendung von optimistischem Mirror Descent zum Training von WGANs vorgeschlagen.", "Der Artikel schlägt vor, optimistischen Gradientenabstieg für das GAN-Training zu verwenden, der das bei SGD und seinen Varianten beobachtete Zyklusverhalten vermeidet und vielversprechende Ergebnisse beim GAN-Training liefert.", "In diesem Beitrag wird eine einfache Modifikation des Standard-Gradientenabstiegs vorgeschlagen, die die Konvergenz von GANs und anderen Minimax-Optimierungsproblemen verbessern soll."]} {"source": "Learning good representations of users and items is crucially important to recommendation with implicit feedback.Matrix factorization is the basic idea to derive the representations of users and items by decomposing the given interaction matrix.However, existing matrix factorization based approaches share the limitation in that the interaction between user embedding and item embedding is only weakly enforced by fitting the given individual rating value, which may lose potentially useful information.In this paper, we propose a novel Augmented Generalized Matrix Factorization (AGMF) approach that is able to incorporate the historical interaction information of users and items for learning effective representations of users and items.Despite the simplicity of our proposed approach, extensive experiments on four public implicit feedback datasets demonstrate that our approach outperforms state-of-the-art counterparts.Furthermore, the ablation study demonstrates that by using multi-hot encoding to enrich user embedding and item embedding for Generalized Matrix Factorization, better performance, faster convergence, and lower training loss can be achieved.", "target": ["Eine einfache Erweiterung der verallgemeinerten Matrixfaktorisierung kann den Stand der Technik bei Empfehlungen übertreffen.", "Die Arbeit stellt ein Matrixfaktorisierungs Framework vor, um den Effekt historischer Daten beim Lernen von Benutzerpräferenzen in kollaborativen Filtereinstellungen zu verstärken."]} {"source": "We propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions.The method simultaneously acquires representations of input data and its dynamics.It is based on a hierarchical generative model composed of two levels.In the first level, a model learns representations to generate observed data.In the second level, representational states encode the dynamics of the lower one.The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models.The method actively explores the latent space guided by its knowledge and the uncertainty about it.That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space.So, no encoder or inference models are used since the generators also serve as their inverse transformations.The method is evaluated in two scenarios, with static images and with videos.The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders.With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed.", "target": ["Eine Methode, die Darstellungen von sequentiellen Daten und ihrer Dynamik durch generative Modelle mit einem aktiven Prozess 
erstellt.", "Kombiniert neuronale Netze und Gaußsche Verteilungen, um eine Architektur und ein generatives Modell für Bilder und Videos zu schaffen, das den Fehler zwischen erzeugten und gelieferten Bildern minimiert.", "Die Arbeit schlägt ein Bayes'sches Netzmodell vor, das als neuronales Netz realisiert ist und verschiedene Daten in Form eines linearen dynamischen Systems erlernt."]} {"source": "Activation is a nonlinearity function that plays a predominant role in the convergence and performance of deep neural networks.While Rectified Linear Unit (ReLU) is the most successful activation function, its derivatives have shown superior performance on benchmark datasets.In this work, we explore the polynomials as activation functions (order ≥ 2) that can approximate continuous real valued function within a given interval.Leveraging this property, the main idea is to learn the nonlinearity, accepting that the ensuing function may not be monotonic.While having the ability to learn more suitable nonlinearity, we cannot ignore the fact that it is a challenge to achieve stable performance due to exploding gradients - which is prominent with the increase in order.To handle this issue, we introduce dynamic input scaling, output scaling, and lower learning rate for the polynomial weights.Moreover, lower learning rate will control the abrupt fluctuations of the polynomials between weight updates.In experiments on three public datasets, our proposed method matches the performance of prior activation functions, thus providing insight into a network’s nonlinearity preference.", "target": ["Wir schlagen Polynome als Aktivierungsfunktionen vor.", "Die Autoren führen lernfähige Aktivierungsfunktionen ein, die durch Polynomfunktionen parametrisiert sind, und zeigen Ergebnisse, die etwas besser sind als ReLU."]} {"source": "We introduce CBF, an exploration method that works in the absence of rewards or end of episode signal.CBF is based on intrinsic reward derived from the error of a dynamics model operating in feature space.It was inspired by (Pathak et al., 2017), is easy to implement, and can achieve results such as passing four levels of Super Mario Bros, navigating VizDoom mazes and passing two levels of SpaceInvaders.We investigated the effect of combining the method with several auxiliary tasks, but find inconsistent improvements over the CBF baseline.", "target": ["Eine einfache intrinsische Motivationsmethode, die die Forward Dynamik nutzt, modelliert Fehler im Merkmalsraum des Regelwerks."]} {"source": "This paper is concerned with the robustness of VAEs to adversarial attacks.We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as β-TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al.(2018)).This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.", "target": ["Wir zeigen, dass entwirrte VAEs robuster als einfache VAEs gegenüber Angriffen sind, die darauf abzielen, sie zur Dekodierung der adversarial Eingabe für ein ausgewähltes Ziel zu verleiten. 
Anschließend entwickeln wir ein noch robusteres hierarchisches entwirrtes VAE, Seatbelt-VAE.", "Die Autoren schlagen ein neues VAE-Modell namens seatbelt-VAE vor, das sich gegenüber latenten Angriffen als robuster erweist als Benchmarks."]} {"source": "The backpropagation algorithm is the de-facto standard for credit assignment in artificial neural networks due to its empirical results.Since its conception, variants of the backpropagation algorithm have emerged.More specifically, variants that leverage function changes in the backpropagation equations to satisfy their specific requirements.Feedback Alignment is one such example, which replaces the weight transpose matrix in the backpropagation equations with a random matrix in search of a more biologically plausible credit assignment algorithm.In this work, we show that function changes in the backpropagation procedure is equivalent to adding an implicit learning rate to an artificial neural network.Furthermore, we learn activation function derivatives in the backpropagation equations to demonstrate early convergence in these artificial neural networks.Our work reports competitive performances with early convergence on MNIST and CIFAR10 on sufficiently large deep neural network architectures.", "target": ["Wir zeigen, dass Funktionsänderungen in der Backpropagation gleichbedeutend mit einer impliziten Lernrate sind."]} {"source": "Unsupervised text style transfer is the task of re-writing text of a given style into a target style without using a parallel corpus of source style and target style sentences for training.Style transfer systems are evaluated on their ability to generate sentences that1) possess the target style,2) are fluent and natural sounding, and3) preserve the non-stylistic parts (content) of the source sentence.We train a reinforcement learning (RL) based unsupervised style transfer system that incorporates rewards for the above measures, and describe novel rewards shaping methods for the same.Our approach does not attempt to disentangle style and content, and leverages the power of massively pre-trained language models as well as the Transformer.Our system significantly outperforms existing state-of-art systems based on human as well as automatic evaluations on target style, fluency and content preservation as well as on overall success of style transfer, on a variety of datasets.", "target": ["Ein Ansatz des Reinforcement Learning zur Übertragung von Textstilen.", "Stellt eine RL-basierte Methode vor, die ein vortrainiertes Sprachmodell nutzt, um den Textstil zu übertragen, ohne das Ziel der Entflechtung zu verfolgen, und dabei generierte Stilübertragungen eines anderen Modells verwendet.", "Die Autoren schlagen eine Kombinationsbelohnung vor, die sich aus Sprachgewandtheit, Inhalt und Stil für die Übertragung von Textstil zusammensetzt."]} {"source": "Despite the success of Generative Adversarial Networks (GANs) in image synthesis, there lacks enough understanding on what networks have learned inside the deep generative representations and how photo-realistic images are able to be composed from random noises.In this work, we show that highly-structured semantic hierarchy emerges from the generative representations as the variation factors for synthesizing scenes.By probing the layer-wise representations with a broad set of visual concepts at different abstraction levels, we are able to quantify the causality between the activations and the semantics occurring in the output image.Such a quantification identifies 
the human-understandable variation factors learned by GANs to compose scenes.The qualitative and quantitative results suggest that the generative representations learned by GAN are specialized to synthesize different hierarchical semantics: the early layers tend to determine the spatial layout and configuration, the middle layers control the categorical objects, and the later layers finally render the scene attributes as well as color scheme.Identifying such a set of manipulatable latent semantics facilitates semantic scene manipulation.", "target": ["Wir zeigen, dass in den tiefen generativen Repräsentationen eine hochstrukturierte semantische Hierarchie als Ergebnis für die Synthese von Szenen entsteht.", "Die Arbeit untersucht die Aspekte, die durch die latenten Variablen kodiert werden, die in die verschiedenen Schichten von StyleGAN eingegeben werden.", "Der Artikel präsentiert eine visuell geführte Interpretation der Aktivierungen der Convolution Layers im Generator von StyleGAN auf Layout, Szenenkategorie, Szeneneigenschaften und Farbe."]} {"source": "Variational autoencoders (VAEs) defined over SMILES string and graph-based representations of molecules promise to improve the optimization of molecular properties, thereby revolutionizing the pharmaceuticals and materials industries.However, these VAEs are hindered by the non-unique nature of SMILES strings and the computational cost of graph convolutions.To efficiently pass messages along all paths through the molecular graph, we encode multiple SMILES strings of a single molecule using a set of stacked recurrent neural networks, harmonizing hidden representations of each atom between SMILES representations, and use attentional pooling to build a final fixed-length latent representation.By then decoding to a disjoint set of SMILES strings of the molecule, our All SMILES VAE learns an almost bijective mapping between molecules and latent representations near the high-probability-mass subspace of the prior.Our SMILES-derived but molecule-based latent representations significantly surpass the state-of-the-art in a variety of fully- and semi-supervised property regression and molecular property optimization tasks.", "target": ["Wir fassen Nachrichten zwischen mehreren SMILES-Zeichenfolgen desselben Moleküls zusammen, um Informationen entlang aller Pfade durch den molekularen Graphen zu übermitteln und so latente Darstellungen zu erzeugen, die den Stand der Technik bei einer Vielzahl von Aufgaben deutlich übertreffen.", "Die Methode verwendet mehrere Eingaben von SMILES-Zeichenfolgen, zeichenweise Merkmalsfusion über diese Zeichenfolgen und Netzwerktraining durch mehrere Ausgabeziele von SMILES-Zeichenfolgen, wodurch eine robuste latente Repräsentation mit fester Länge unabhängig von SMILES-Variationen geschaffen wird. 
", "Die Autoren beschreiben ein neuartiges Variations Autoencoder ähnliches Verfahren für Moleküle, das Moleküle als Zeichenketten kodiert, um die für den Informationsaustausch zwischen den Atomen im Molekül erforderlichen Operationen zu reduzieren."]} {"source": "We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN).Although conditional distributions are multi-modal (i.e., having many modes) in practice, most cGAN approaches tend to learn an overly simplified distribution where an input is always mapped to a single output regardless of variations in latent code.To address such issue, we propose to explicitly regularize the generator to produce diverse outputs depending on latent codes.The proposed regularization is simple, general, and can be easily integrated into most conditional GAN objectives.Additionally, explicit regularization on generator allows our method to control a balance between visual quality and diversity.We demonstrate the effectiveness of our method on three conditional generation tasks: image-to-image translation, image inpainting, and future video prediction.We show that simple addition of our regularization to existing models leads to surprisingly diverse generations, substantially outperforming the previous approaches for multi-modal conditional generation specifically designed in each individual task.", "target": ["Wir schlagen einen einfachen und allgemeinen Ansatz vor, der das Problem des Moduszusammenbruchs in verschiedenen bedingten GANs vermeidet.", "Die Arbeit schlägt einen Regularisierungsterm für das bedingte GAN-Ziel vor, um eine vielfältige multimodale Generierung zu fördern und einen Moduskollaps zu verhindern.", "Die Arbeit schlägt eine Methode zur Erzeugung verschiedener Ausgaben für verschiedene bedingte GAN-Frameworks vor, einschließlich Bild-zu-Bild-Übersetzung, Bild-Inpainting und Video-Vorhersage, die auf verschiedene bedingte Synthese-Frameworks für verschiedene Aufgaben angewendet werden können. 
"]} {"source": "The transformer is a state-of-the-art neural translation model that uses attention to iteratively refine lexical representations with information drawn from the surrounding context.Lexical features are fed into the first layer and propagated through a deep network of hidden layers.We argue that the need to represent and propagate lexical features in each layer limits the model’s capacity for learning and representing other information relevant to the task.To alleviate this bottleneck, we introduce gated shortcut connections between the embedding layer and each subsequent layer within the encoder and decoder.This enables the model to access relevant lexical content dynamically, without expending limited resources on storing it within intermediate states.We show that the proposed modification yields consistent improvements on standard WMT translation tasks and reduces the amount of lexical information passed along the hidden layers.We furthermore evaluate different ways to integrate lexical connections into the transformer architecture and present ablation experiments exploring the effect of proposed shortcuts on model behavior.", "target": ["Durch die Ausstattung des Transformermodells mit Abkürzungen zur Einbettungsschicht wird die Modellkapazität für das Lernen neuer Informationen frei."]} {"source": "Probability density estimation is a classical and well studied problem, but standard density estimation methods have historically lacked the power to model complex and high-dimensional image distributions. More recent generative models leverage the power of neural networks to implicitly learn and represent probability models over complex images. We describe methods to extract explicit probability density estimates from GANs, and explore the properties of these image density functions. We perform sanity check experiments to provide evidence that these probabilities are reasonable. However, we also show that density functions of natural images are difficult to interpret and thus limited in use. We study reasons for this lack of interpretability, and suggest that we can get better interpretability by doing density estimation on latent representations of images.", "target": ["Wir untersuchen die Beziehung zwischen Wahrscheinlichkeitsdichtewerten und Bildinhalten in nicht-invertierbaren GANs.", "Die Autoren versuchen, die Wahrscheinlichkeitsverteilung des Bildes mit Hilfe von GAN zu schätzen und entwickeln eine geeignete Approximation der PDFs im latenten Raum."]} {"source": "Convolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks.The design of the regular convolution is based on the Receptive Field (RF) where the information within a specific region is processed.In the view of the regular convolution's RF, the outputs of neurons in lower layers with smaller RF are bundled to create neurons in higher layers with larger RF. 
As a result, the neurons in high layers are able to capture the global context even though the neurons in low layers only see the local information.However, in lower layers of the biological brain, the information outside of the RF changes the properties of neurons.In this work, we extend the regular convolution and propose spatially shuffled convolution (ss convolution).In ss convolution, the regular convolution is able to use the information outside of its RF by spatial shuffling which is a simple and lightweight operation.We perform experiments on CIFAR-10 and ImageNet-1k dataset, and show that ss convolution improves the classification performance across various CNNs.", "target": ["Wir schlagen eine räumlich gemischte Convolution vor, bei der die reguläre Convolution die Informationen von außerhalb ihres rezeptiven Feldes einbezieht.", "Er schlägt eine SS-Convolution vor, die Informationen außerhalb ihres RF verwendet und bei Tests mit mehreren CNN-Modellen bessere Ergebnisse erzielt.", "Die Autoren schlagen eine Shuffle-Strategie für Convolution Layers in Convolutional Neural Networks vor."]} {"source": "We propose a framework to model the distribution of sequential data coming from a set of entities connected in a graph with a known topology.The method is based on a mixture of shared hidden Markov models (HMMs), which are trained in order to exploit the knowledge of the graph structure and in such a way that the obtained mixtures tend to be sparse.Experiments in different application domains demonstrate the effectiveness and versatility of the method.", "target": ["Eine Methode zur Modellierung der generativen Verteilung von Sequenzen, die aus graphisch verbundenen Einheiten stammen.", "Die Autoren schlagen eine Methode zur Modellierung sequentieller Daten aus mehreren miteinander verbundenen Quellen unter Verwendung einer Mischung aus einem gemeinsamen Pool von HMMs vor."]} {"source": "To gain high rewards in multi-agent scenes, it is sometimes necessary to understand other agents and make corresponding optimal decisions.We can solve these tasks by first building models for other agents and then finding the optimal policy with these models.To get an accurate model, many observations are needed and this can be sample-inefficient.What's more, the learned model and policy can overfit to current agents and cannot generalize if the other agents are replaced by new agents.In many practical situations, each agent we face can be considered as a sample from a population with a fixed but unknown distribution.Thus we can treat the task against some specific agents as a task sampled from a task distribution.We apply meta-learning method to build models and learn policies.Therefore when new agents come, we can adapt to them efficiently.Experiments on grid games show that our method can quickly get high rewards.", "target": ["Unsere Arbeit wendet Meta-Lernen auf Multi-Agenten Reinforcement Learning an, um unserem Agenten zu helfen, sich effizient an neue Gegner anzupassen.", "Dieser Beitrag konzentriert sich auf die schnelle Anpassung an neue Verhaltensweisen der anderen Agenten in der Umgebung mit Hilfe einer auf MAML basierenden Methode.", "Die Arbeit stellt einen Ansatz für Multi-Agenten-Lernen vor, der auf dem Rahmen des modell-agnostischen Meta-Lernens für die Aufgabe der Gegner-Modellierung für Multi-Agenten RL basiert."]} {"source": "We characterize the singular values of the linear transformation associated with a standard 2D multi-channel convolutional layer, enabling 
their efficient computation. This characterization also leads to an algorithm for projecting a convolutional layer onto an operator-norm ball.We show that this is an effective regularizer; for example, it improves the test error of a deep residual network using batch normalization on CIFAR-10 from 6.2% to 5.3%.", "target": ["Wir charakterisieren die Singulärwerte der linearen Transformation, die mit einer standardmäßigen 2D Mehrkanal Convolutional Layer verbunden sind, und ermöglichen so deren effiziente Berechnung. ", "Der Beitrag widmet sich der Berechnung der Singulärwerte von Convolutional Layers.", "Leitet exakte Formeln für die Berechnung der Singulärwerte von Convolutional Layers tiefer neuronaler Netze ab und zeigt, dass die Berechnung der Singulärwerte viel schneller erfolgen kann als die Berechnung der vollständigen SVD der Convolution Matrix, indem man auf schnelle FFT-Transformationen zurückgreift."]} {"source": "Trading off exploration and exploitation in an unknown environment is key to maximising expected return during learning.A Bayes-optimal policy, which does so optimally, conditions its actions not only on the environment state but on the agent's uncertainty about the environment.Computing a Bayes-optimal policy is however intractable for all but the smallest tasks.In this paper, we introduce variational Bayes-Adaptive Deep RL (variBAD), a way to meta-learn to perform approximate inference in an unknown environment, and incorporate task uncertainty directly during action selection.In a grid-world domain, we illustrate how variBAD performs structured online exploration as a function of task uncertainty.We also evaluate variBAD on MuJoCo domains widely used in meta-RL and show that it achieves higher return during training than existing methods.", "target": ["VariBAD eröffnet einen Weg zu überschaubarer approximativer Bayes-optimaler Exploration für Deep RL unter Verwendung von Ideen aus Meta-Learning, Bayesian RL und approximativer Variationsinferenz.", "In diesem Beitrag wird eine neue Methode des tiefen Reinforcement Learnings vorgestellt, die einen effizienten Kompromiss zwischen Erkundung und Ausbeutung ermöglicht und Meta-Lernen, variationale Inferenz und bayesianisches RL kombiniert."]} {"source": "In a continual learning setting, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories.While deep neural nets have achieved resounding success in the classical setting, they are known to forget about knowledge acquired in prior episodes of learning if the examples encountered in the current episode of learning are drastically different from those encountered in prior episodes.This makes deep neural nets ill-suited to continual learning.In this paper, we propose a new model that can both leverage the expressive power of deep neural nets and is resilient to forgetting when new categories are introduced.We demonstrate an improvement in terms of accuracy on original classes compared to a vanilla deep neural net.", "target": ["Wir zeigen, dass metrisches Lernen dazu beitragen kann, katastrophales Vergessen zu reduzieren.", "In dieser Arbeit wird metrisches Lernen eingesetzt, um das katastrophale Vergessen in neuronalen Netzen zu reduzieren, indem die Ausdruckskraft der letzten Schicht verbessert wird, was zu besseren Ergebnissen beim kontinuierlichen Lernen führt."]} {"source": "Biomedical knowledge bases are crucial in modern data-driven biomedical sciences, but automated biomedical 
knowledge base construction remains challenging.In this paper, we consider the problem of disease entity normalization, an essential task in constructing a biomedical knowledge base. We present NormCo, a deep coherence model which considers the semantics of an entity mention, as well as the topical coherence of the mentions within a single document.NormCo models entity mentions using a simple semantic model which composes phrase representations from word embeddings, and treats coherence as a disease concept co-mention sequence using an RNN rather than modeling the joint probability of all concepts in a document, which requires NP-hard inference. To overcome the issue of data sparsity, we used distantly supervised data and synthetic data generated from priors derived from the BioASQ dataset. Our experimental results show that NormCo outperforms state-of-the-art baseline methods on two disease normalization corpora in terms of (1) prediction quality and (2) efficiency, and is at least as performant in terms of accuracy and F1 score on tagged documents.", "target": ["Wir stellen NormCo vor, ein tiefes Kohärenzmodell, das sowohl die Semantik einer Entitätserwähnung als auch die thematische Kohärenz der Erwähnungen innerhalb eines einzelnen Dokuments berücksichtigt, um eine Krankheitsentitätsnormalisierung durchzuführen.", "Verwendet einen GRU Autoencoder zur Darstellung des \"Kontexts\" (verwandte Eigenschaften einer bestimmten Krankheit innerhalb eines Satzes) und löst die BioNLP-Aufgabe mit erheblichen Verbesserungen gegenüber den bekanntesten Methoden."]} {"source": "We explore the role of multiplicative interaction as a unifying framework to describe a range of classical and modern neural network architectural motifs, such as gating, attention layers, hypernetworks, and dynamic convolutions amongst others.Multiplicative interaction layers as primitive operations have a long-established presence in the literature, though this often not emphasized and thus under-appreciated.We begin by showing that such layers strictly enrich the representable function classes of neural networks.We conjecture that multiplicative interactions offer a particularly powerful inductive bias when fusing multiple streams of information or when conditional computation is required.We therefore argue that they should be considered in many situation where multiple compute or information paths need to be combined, in place of the simple and oft-used concatenation operation.Finally, we back up our claims and demonstrate the potential of multiplicative interactions by applying them in large-scale complex RL and sequence modelling tasks, where their use allows us to deliver state-of-the-art results, and thereby provides new evidence in support of multiplicative interactions playing a more prominent role when designing new neural network architectures.", "target": ["Wir untersuchen die Rolle der multiplikativen Interaktion als vereinheitlichendes Framework, um eine Reihe klassischer und moderner neuronaler Netzwerkarchitekturen zu beschreiben, wie z.B. 
Gating, Aufmerksamkeitsschichten, Hypernetze und dynamische Convolutions.", "Stellt die multiplikative Interaktion als einheitliche Charakterisierung für die Darstellung häufig verwendeter Komponenten der Modellarchitektur vor und zeigt empirische Beweise für die überlegene Leistung bei Aufgaben wie RL und Sequenzmodellierung.", "Die Arbeit untersucht verschiedene Arten von multiplikativen Interaktionen und stellt fest, dass MI-Modelle in der Lage sind, bei Sprachmodellierungs- und Reinforcement Learning Problemen eine Spitzenleistung zu erzielen."]} {"source": "Developing conditional generative models for text-to-video synthesis is an extremely challenging yet an important topic of research in machine learning.In this work, we address this problem by introducing Text-Filter conditioning Generative Adversarial Network (TFGAN), a GAN model with novel conditioning scheme that aids improving the text-video associations.With a combination of this conditioning scheme and a deep GAN architecture, TFGAN generates photo-realistic videos from text on very challenging real-world video datasets.In addition, we construct a benchmark synthetic dataset of moving shapes to systematically evaluate our conditioning scheme.Extensive experiments demonstrate that TFGAN significantly outperforms the existing approaches, and can also generate videos of novel categories not seen during training.", "target": ["Ein effektives textkonditionierendes GAN-Framework zur Erzeugung von Videos aus Text.", "In dieser Arbeit wird eine GAN-basierte Methode zur Videogenerierung auf der Grundlage von Textbeschreibungen vorgestellt, mit einer neuen Konditionierungsmethode, die Convolution Filter aus dem kodierten Text erzeugt und diese für eine Convolution im Diskriminator verwendet.", "Diese Arbeit schlägt bedingte GAN-Modelle für die Text-Video-Synthese vor: die Entwicklung von CNN-Filtern mit Textmerkmalen und die Erstellung eines Datensatzes für bewegte Formen mit verbesserter Leistung bei der Video-/Bilderzeugung."]} {"source": "Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.However, compressive networks are desired in many real world applications and direct training of small networks may be trapped in local optima.In this paper, instead of pruning or distilling over-parameterized models to compressive ones, we propose a new approach based on \\emph{differential inclusions of inverse scale spaces}, that generates a family of models from simple to complex ones by coupling gradient descent and mirror descent to explore model structural sparsity.It has a simple discretization, called the Split Linearized Bregman Iteration (SplitLBI), whose global convergence analysis in deep learning is established that from any initializations, algorithmic iterations converge to a critical point of empirical risks.Experimental evidence shows that\\ SplitLBI may achieve state-of-the-art performance in large scale training on ImageNet-2012 dataset etc., while with \\emph{early stopping} it unveils effective subnet architecture with comparable test accuracies to dense models after retraining instead of pruning well-trained ones.", "target": ["SplitLBI wird auf Deep Learning angewandt, um die strukturelle Spärlichkeit von Modellen zu erforschen. 
Dabei wird eine Spitzenleistung in ImageNet-2012 erzielt und eine effektive Subnetzarchitektur enthüllt.", "Es wird ein optimierungsbasierter Algorithmus vorgeschlagen, um wichtige dünn besetzte Strukturen großer neuronaler Netze zu finden, indem das Lernen der Gewichtsmatrix und die Einschränkung der dünn besetzten Strukturen gekoppelt werden, was garantierte Konvergenz bei nicht-konvexen Optimierungsproblemen bietet."]} {"source": "In this paper, we study the learned iterative shrinkage thresholding algorithm (LISTA) for solving sparse coding problems. Following assumptions made by prior works, we first discover that the code components in its estimations may be lower than expected, i.e., require gains, and to address this problem, a gated mechanism amenable to theoretical analysis is then introduced.Specific design of the gates is inspired by convergence analyses of the mechanism and hence its effectiveness can be formally guaranteed.In addition to the gain gates, we further introduce overshoot gates for compensating insufficient step size in LISTA.Extensive empirical results confirm our theoretical findings and verify the effectiveness of our method.", "target": ["Wir schlagen Gated-Mechanismen zur Verbesserung der erlernten ISTA für Sparse Coding vor, mit theoretischen Garantien für die Überlegenheit der Methode. ", "Es werden Erweiterungen von LISTA vorgeschlagen, die die Unterschätzung durch die Einführung von \"Gain Gates\" und die Einbeziehung von Impulsen mit \"Overshoot Gates\" angehen und verbesserte Konvergenzraten zeigen.", "Diese Arbeit konzentriert sich auf die Lösung von Sparse-Coding Problemen unter Verwendung von Netzwerken des LISTA-Typs, indem es eine \"Gain Gating Funktion\" vorschlägt, um die Schwäche der \"no false positive\" Annahme zu mildern."]} {"source": "The learning of hierarchical representations for image classification has experienced an impressive series of successes due in part to the availability of large-scale labeled data for training.On the other hand, the trained classifiers have traditionally been evaluated on a handful of test images, which are deemed to be extremely sparsely distributed in the space of all natural images.It is thus questionable whether recent performance improvements on the excessively re-used test sets generalize to real-world natural images with much richer content variations.In addition, studies on adversarial learning show that it is effortless to construct adversarial examples that fool nearly all image classifiers, adding more complications to relative performance comparison of existing models.This work presents an efficient framework for comparing image classifiers, which we name the MAximum Discrepancy (MAD) competition.Rather than comparing image classifiers on fixed test sets, we adaptively sample a test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over WordNet hierarchy.Human labeling on the resulting small and model-dependent image sets reveals the relative performance of the competing classifiers and provides useful insights on potential ways to improve them.We report the MAD competition results of eleven ImageNet classifiers while noting that the framework is readily extensible and cost-effective to add future classifiers into the competition.", "target": ["Wir stellen einen effizienten und adaptiven Rahmen für den Vergleich von Bildklassifizierern vor, um die Diskrepanzen zwischen den Klassifizierern zu maximieren, 
anstelle eines Vergleichs auf festen Testsätzen.", "Mechanismus zur Fehlersuche, der Bildklassifikatoren vergleicht, indem er ihre \"am meisten nicht übereinstimmende\" Testmenge stichprobenartig prüft und die Unstimmigkeit durch einen von der WordNet-Ontologie abgeleiteten semantischen Abstand misst."]} {"source": "Robustness of neural networks has recently been highlighted by the adversarial examples, i.e., inputs added with well-designed perturbations which are imperceptible to humans but can cause the network to give incorrect outputs.In this paper, we design a new CNN architecture that by itself has good robustness.We introduce a simple but powerful technique, Random Mask, to modify existing CNN structures.We show that CNN with Random Mask achieves state-of-the-art performance against black-box adversarial attacks without applying any adversarial training.We next investigate the adversarial examples which “fool” a CNN with Random Mask.Surprisingly, we find that these adversarial examples often “fool” humans as well.This raises fundamental questions on how to define adversarial examples and robustness properly.", "target": ["Wir schlagen eine Technik vor, die CNN-Strukturen modifiziert, um die Robustheit zu verbessern und gleichzeitig eine hohe Testgenauigkeit zu erhalten. Wir stellen in Frage, ob die derzeitige Definition von adversarial Beispielen angemessen ist, indem wir Gegenbeispiele erzeugen, die Menschen täuschen können.", "In dieser Arbeit wird eine einfache Technik zur Verbesserung der Robustheit neuronaler Netze gegen Blackbox-Angriffe vorgeschlagen.", "Die Autoren schlagen eine einfache Methode vor, um die Robustheit von Convolutional Neural Networks gegenüber adversarial Beispielen zu erhöhen, mit überraschend guten Ergebnissen."]} {"source": "Supervised deep learning methods require cleanly labeled large-scale datasets, but collecting such data is difficult and sometimes impossible.There exist two popular frameworks to alleviate this problem: semi-supervised learning and robust learning to label noise.Although these frameworks relax the restriction of supervised learning, they are studied independently.Hence, the training scheme that is suitable when only small cleanly-labeled data are available remains unknown.In this study, we consider learning from bi-quality data as a generalization of these studies, in which a small portion of data is cleanly labeled, and the rest is corrupt.Under this framework, we compare recent algorithms for semi-supervised and robust learning.The results suggest that semi-supervised learning outperforms robust learning with noisy labels.We also propose a training strategy for mixing mixup techniques to learn from such bi-quality data effectively.", "target": ["Wir schlagen vor, halb-überwachtes und robustes Lernen auf störhaften Etiketten in einer gemeinsamen Umgebung zu vergleichen.", "Die Autoren schlagen eine Strategie für das Training eines Modells in einer formalen Umgebung vor, die auf einer Verwechslung basiert und die semi-supervised und robuste Lernaufgaben als Spezialfälle einschließt."]} {"source": "Hierarchical Sparse Coding (HSC) is a powerful model to efficiently represent multi-dimensional, structured data such as images.The simplest solution to solve this computationally hard problem is to decompose it into independent layerwise subproblems.However, neuroscientific evidence would suggest inter-connecting these subproblems as in the Predictive Coding (PC) theory, which adds top-down connections between consecutive layers.In this 
study, a new model called Sparse Deep Predictive Coding (SDPC) is introduced to assess the impact of this inter-layer feedback connection.In particular, the SDPC is compared with a Hierarchical Lasso (Hi-La) network made out of a sequence of Lasso layers.A 2-layered SDPC and a Hi-La networks are trained on 3 different databases and with different sparsity parameters on each layer.First, we show that the overall prediction error generated by SDPC is lower thanks to the feedback mechanism as it transfers prediction error between layers.Second, we demonstrate that the inference stage of the SDPC is faster to converge than for the Hi-La model.Third, we show that the SDPC also accelerates the learning process.Finally, the qualitative analysis of both models dictionaries, supported by their activation probability, show that the SDPC features are more generic and informative.", "target": ["In dieser Arbeit wird der positive Effekt von Top-Down-Verbindungen im Hierarchical Sparse Coding Algorithmus experimentell nachgewiesen.", "In diesem Beitrag wird eine Studie vorgestellt, in der Techniken für die hierarchische spärliche Kodierung verglichen werden. Es wird gezeigt, dass der Top-Down-Term bei der Reduzierung von Vorhersagefehlern von Vorteil ist und schneller lernen kann."]} {"source": "Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings.Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply.We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. 
We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition.Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2.", "target": ["Ein Black-Box-Ansatz zur Erklärung der Vorhersagen eines Bildähnlichkeitsmodells.", "Stellt eine Methode zur Erklärung von Bildähnlichkeitsmodellen vor, die Attribute identifiziert, die positiv zur Ähnlichkeitsbewertung beitragen, und sie mit einer generierten Auffälligkeitskarte verbindet.", "Die Arbeit schlägt einen Erklärungsmechanismus vor, der die typischen Regionen von Auffälligkeitskarten mit Attributen für die Ähnlichkeitsanpassung tiefer neuronaler Netze verbindet."]} {"source": "Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence (seq2seq) models, by applying perturbations to the input of a model leading to large degradation in performance.However, these perturbations are only indicative of a weakness in the model if they do not change the semantics of the input in a way that would change the expected output.Using the example of machine translation (MT), we propose a new evaluation framework for adversarial attacks on seq2seq models taking meaning preservation into account and demonstrate that existing methods may not preserve meaning in general.Based on these findings, we propose new constraints for attacks on word-based MT systems and show, via human and automatic evaluation, that they produce more semantically similar adversarial inputs.Furthermore, we show that performing adversarial training with meaning-preserving attacks is beneficial to the model in terms of adversarial robustness without hurting test performance.", "target": ["Wie Sie gegnerische Angriffe auf seq2seq bewerten sollten.", "Die Autoren untersuchen Möglichkeiten zur Generierung von Gegenbeispielen und zeigen, dass das adversarial Training mit dem Angriff, der am besten mit den eingeführten Kriterien für den Bedeutungserhalt übereinstimmt, zu einer verbesserten Robustheit gegenüber dieser Art von Angriff führt, ohne dass es zu einer Verschlechterung in der nicht-adversarischen Umgebung kommt.", "Das Papier handelt von bedeutungserhaltenden adversen Störungen im Kontext von Seq2Seq-Modellen"]} {"source": "We introduce a new normalization technique that exhibits the fast convergence properties of batch normalization using a transformation of layer weights instead of layer outputs.The proposed technique keeps the contribution of positive and negative weights to the layer output in equilibrium.We validate our method on a set of standard benchmarks including CIFAR-10/100, SVHN and ILSVRC 2012 ImageNet.", "target": ["Eine alternative Normalisierungstechnik zur Batch-Normalisierung", "Führt eine Normalisierungstechnik ein, die die Gewichte von Convolutional Layers normalisiert. 
", "In diesem Manuskript wird eine neue schichtweise Transformation, EquiNorm, zur Verbesserung der Batch Normalisierung eingeführt, die nicht die Eingaben in die Schichten, sondern die Schichtgewichte verändert."]} {"source": "We present a framework for building unsupervised representations of entities and their compositions, where each entity is viewed as a probability distribution rather than a fixed length vector.In particular, this distribution is supported over the contexts which co-occur with the entity and are embedded in a suitable low-dimensional space.This enables us to consider the problem of representation learning with a perspective from Optimal Transport and take advantage of its numerous tools such as Wasserstein distance and Wasserstein barycenters.We elaborate how the method can be applied for obtaining unsupervised representations of text and illustrate the performance quantitatively as well as qualitatively on tasks such as measuring sentence similarity and word entailment, where we empirically observe significant gains (e.g., 4.1% relative improvement over Sent2vec and GenSen).The key benefits of the proposed approach include:(a) capturing uncertainty and polysemy via modeling the entities as distributions,(b) utilizing the underlying geometry of the particular task (with the ground cost),(c) simultaneously providing interpretability with the notion of optimal transport between contexts and(d) easy applicability on top of existing point embedding methods.In essence, the framework can be useful for any unsupervised or supervised problem (on text or other modalities); and only requires a co-occurrence structure inherent to many problems.The code, as well as pre-built histograms, are available under https://github.com/context-mover.", "target": ["Stellen Sie jede Entität als eine Wahrscheinlichkeitsverteilung über Kontexte dar, die in einen Grundraum eingebettet sind.", "Es wird vorgeschlagen, Worteinbettungen aus einem Histogramm über Kontextwörter zu konstruieren, anstatt als Punktvektoren, was die Messung von Entfernungen zwischen zwei Wörtern im Sinne eines optimalen Transports zwischen den Histogrammen durch eine Methode ermöglicht, die die Darstellung einer Entität vom Standard \"Punkt in einem Vektorraum\" zu einem Histogramm mit Bins an einigen Punkten in diesem Vektorraum erweitert. 
"]} {"source": "Over the last few years, the phenomenon of adversarial examples --- maliciously constructed inputs that fool trained machine learning models --- has captured the attention of the research community, especially when the adversary is restricted to making small modifications of a correctly handled input.At the same time, less surprisingly, image classifiers lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise.In this work, we show that these are two manifestations of the same underlying phenomenon.We establish this connection in several ways.First, we find that adversarial examples exist at the same distance scales we would expect from a linear model with the same performance on corrupted images.Next, we show that Gaussian data augmentation during training improves robustness to small adversarial perturbations and that adversarial training improves robustness to several types of image corruptions.Finally, we present a model-independent upper bound on the distance from a corrupted image to its nearest error given test performance and show that in practice we already come close to achieving the bound, so that improving robustness further for the corrupted image distribution requires significantly reducing test error.All of this suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions.This yields a computationally tractable evaluation metric for defenses to consider: test error in noisy image distributions.", "target": ["In Anbetracht der beobachteten Fehlerquoten von Modellen außerhalb der natürlichen Datenverteilung sollte man mit kleinen negativen Störungen rechnen.", "In diesem Beitrag wird eine alternative Betrachtungsweise für adversarial Beispiele in hochdimensionalen Räumen vorgeschlagen, indem die \"Fehlerrate\" in einer Gauß-Verteilung betrachtet wird, die an jedem Testpunkt zentriert ist."]} {"source": "Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019; Sanh, 2019).However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked.In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work.Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation.The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements.Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data.One surprising observation is that they have a compound effect even when sequentially applied on the same data.To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.", "target": ["Untersucht, wie selbstüberwachtes Lernen und Wissensdestillation im Zusammenhang mit der Erstellung kompakter 
Modelle zusammenwirken.", "Untersucht das Training von kompakten, vortrainierten Sprachmodellen durch Destillation und zeigt, dass die Verwendung eines Lehrers zur Destillation eines kompakten Schülermodells besser funktioniert als das direkte Vortraining des Modells.", "Dieser Beitrag zeigt, dass das Vortraining eines Schülers direkt auf die maskierte Sprachmodellierung besser ist als die Destillation, und dass es am besten ist, beides zu kombinieren und von diesem vortrainierten Schülermodell zu destillieren."]} {"source": "In this paper, we investigate lossy compression of deep neural networks (DNNs) by weight quantization and lossless source coding for memory-efficient deployment.Whereas the previous work addressed non-universal scalar quantization and entropy coding of DNN weights, we for the first time introduce universal DNN compression by universal vector quantization and universal source coding.In particular, we examine universal randomized lattice quantization of DNNs, which randomizes DNN weights by uniform random dithering before lattice quantization and can perform near-optimally on any source without relying on knowledge of its probability distribution.Moreover, we present a method of fine-tuning vector quantized DNNs to recover the performance loss after quantization.Our experimental results show that the proposed universal DNN compression scheme compresses the 32-layer ResNet (trained on CIFAR-10) and the AlexNet (trained on ImageNet) with compression ratios of $47.1$ and $42.5$, respectively.", "target": ["Wir stellen das universelle Komprimierungsschema für tiefe neuronale Netze vor, das universell für die Komprimierung beliebiger Modelle anwendbar ist und unabhängig von deren Gewichtsverteilung nahezu optimal funktioniert.", "Es wird eine Pipeline für die Netzwerkkomprimierung eingeführt, die der tiefen Komprimierung ähnelt und randomisierte Gitterquantisierung anstelle der klassischen Vektorquantisierung sowie universelle Quellcodierung (bzip2) anstelle der Huffman-Codierung verwendet."]} {"source": "What would be learned by variational autoencoder(VAE) and what influence the disentanglement of VAE?This paper tries to preliminarily address VAE's intrinsic dimension, real factor, disentanglement and indicator issues theoretically in the idealistic situation and implementation issue practically through noise modeling perspective in the realistic case. On intrinsic dimension issue, due to information conservation, the idealistic VAE learns and only learns intrinsic factor dimension.Besides, suggested by mutual information separation property, the constraint induced by Gaussian prior to the VAE objective encourages the information sparsity in dimension.On disentanglement issue, subsequently, inspired by information conservation theorem the clarification on disentanglement in this paper is made.On real factor issue, due to factor equivalence, the idealistic VAE possibly learns any factor set in the equivalence class. 
On indicator issue, the behavior of current disentanglement metric is discussed, and several performance indicators regarding the disentanglement and generating influence are subsequently raised to evaluate the performance of VAE model and to supervise the used factors.On implementation issue, the experiments under noise modeling and constraints empirically testify the theoretical analysis and also show their own characteristic in pursuing disentanglement.", "target": ["In diesem Beitrag wird versucht, die Entflechtung theoretisch in einer idealistischen Situation und praktisch durch die Modellierung von Noise in einem realistischen Fall zu untersuchen.", "Untersucht die Bedeutung der Modellierung mit Störungen in der Gaußschen VAE und schlägt vor, die Störungen mit Hilfe der Empirical-Bayes-Methode zu trainieren.", "Änderung der Behandlung von Störfaktoren bei der Entwicklung von VAE-Modellen"]} {"source": "Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of $L_2$ regularization.Literal weight decay has been shown to outperform $L_2$ regularization for optimizers for which they differ. We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures.We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input-output Jacobian norm, and (3) reducing the effective damping coefficient for second-order optimization. Our results provide insight into how to improve the regularization of neural networks.", "target": ["Wir untersuchen die Gewichtsabnahme-Regularisierung für verschiedene Optimierer und identifizieren drei verschiedene Mechanismen, durch die die Gewichtsabnahme die Generalisierung verbessert.", "Diskutiert die Auswirkung der Gewichtsabnahme auf das Training von Deep-Network-Modellen mit und ohne Batch Normalisierung und bei Verwendung von Optimierungsmethoden erster/zweiter Ordnung und stellt die Hypothese auf, dass eine größere Lernrate einen Regularisierungseffekt hat."]} {"source": "In this paper we present the first freely available dataset for the development and evaluation of domain adaptation methods, for the sound event detection task.The dataset contains 40 log mel-band energies extracted from $100$ different synthetic sound event tracks, with additive noise from nine different acoustic scenes (from indoor, outdoor, and vehicle environments), mixed at six different sound-to-noise ratios, SNRs, (from -12 to -27 dB with a step of -3 dB), and totaling to 5400 (9 * 100 * 6) sound files and a total length of 30 564 minutes.We provide the dataset as is, the code to re-create the dataset and remix the sound event tracks and the acoustic scenes with different SNRs, and a baseline method that tests the adaptation performance with the proposed dataset and establishes some first results.", "target": ["Der erste frei verfügbare Domänenanpassungs Datensatz für die Erkennung von Schallereignissen."]} {"source": "This paper aims to address the limitations of mutual information estimators based on variational optimization.By redefining the cost using generalized functions from nonextensive statistical mechanics we raise the upper bound of previous estimators and enable 
the control of the bias variance trade off.Variational based estimators outperform previous methods especially in high dependence high dimensional scenarios found in machine learning setups.Despite their performance, these estimators either exhibit a high variance or are upper bounded by log(batch size).Our approach inspired by nonextensive statistical mechanics uses different generalizations for the logarithm and the exponential in the partition function.This enables the estimator to capture changes in mutual information over a wider range of dimensions and correlations of the input variables whereas previous estimators saturate them.", "target": ["Schätzer der gegenseitigen Information auf der Grundlage der nicht-extensiven statistischen Mechanik.", "In diesem Beitrag wird versucht, neuartige Variationsschranken für die gegenseitige Information aufzustellen, indem der Parameter q eingeführt und die q-Algebra definiert wird. Es wird gezeigt, dass die Schranken eine geringere Varianz haben und hohe Werte erreichen."]} {"source": "Generative adversarial networks (GANs) are a widely used framework for learning generative models.Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax problem to global optimality, but in practice, are successfully trained with stochastic gradient descent-ascent.In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution in polynomial time and sample complexity.", "target": ["Wir zeigen, dass der stochastische Gradientenabstieg zu einem globalen Optimum für WGAN mit einem einschichtigen Generatoren-Netzwerk konvergiert.", "Es wird versucht zu beweisen, dass der Stochastic Gradient Descent-Ascent zu einer globalen Lösung für das Min-Max Problem des WGAN konvergieren kann."]} {"source": "Classifiers such as deep neural networks have been shown to be vulnerable against adversarial perturbations on problems with high-dimensional input space.While adversarial training improves the robustness of classifiers against such adversarial perturbations, it leaves classifiers sensitive to them on a non-negligible fraction of the inputs.We argue that there are two different kinds of adversarial perturbations: shared perturbations which fool a classifier on many inputs and singular perturbations which only fool the classifier on a small fraction of the data.We find that adversarial training increases the robustness of classifiers against shared perturbations.Moreover, it is particularly effective in removing universal perturbations, which can be seen as an extreme form of shared perturbations.Unfortunately, adversarial training does not consistently increase the robustness against singular perturbations on unseen inputs.However, we find that adversarial training decreases robustness of the remaining perturbations against image transformations such as changes to contrast and brightness or Gaussian blurring.It thus makes successful attacks on the classifier in the physical world less likely.Finally, we show that even singular perturbations can be easily detected and must thus exhibit generalizable patterns even though the perturbations are specific for certain inputs.", "target": ["Wir zeigen empirisch, dass das adversarische Training universelle Störungen wirksam beseitigt, die adversarischen Beispiele weniger robust gegenüber Bildtransformationen macht und sie für einen Erkennungsansatz erkennbar bleiben.", "Analysiert adversariales Training und seine 
Auswirkung auf universelle adversarische Beispiele sowie standardmäßige (Basisiteration) adversarische Beispiele und wie adversariales Training die Erkennung beeinflusst. ", "Die Autoren zeigen, dass adversarial Training wirksam gegen \"gemeinsame\" adversarial Störungen schützt, insbesondere gegen universelle Störungen, aber weniger wirksam gegen singuläre Störungen."]} {"source": "We address the challenging problem of efficient deep learning model deployment, where the goal is to design neural network architectures that can fit different hardware platform constraints.Most of the traditional approaches either manually design or use Neural Architecture Search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally expensive and unscalable.Our key idea is to decouple model training from architecture search to save the cost.To this end, we propose to train a once-for-all network (OFA) that supports diverse architectural settings (depth, width, kernel size, and resolution).Given a deployment scenario, we can then quickly get a specialized sub-network by selecting from the OFA network without additional training.To prevent interference between many sub-networks during training, we also propose a novel progressive shrinking algorithm, which can train a surprisingly large number of sub-networks ($> 10^{19}$) simultaneously.Extensive experiments on various hardware platforms (CPU, GPU, mCPU, mGPU, FPGA accelerator) show that OFA consistently outperforms SOTA NAS methods (up to 4.0% ImageNet top1 accuracy improvement over MobileNetV3) while reducing orders of magnitude GPU hours and $CO_2$ emission.In particular, OFA achieves a new SOTA 80.0% ImageNet top1 accuracy under the mobile setting ($<$600M FLOPs).Code and pre-trained models are released at https://github.com/mit-han-lab/once-for-all.", "target": ["Wir stellen Techniken vor, um ein einziges Netzwerk zu trainieren, das für viele Hardware-Plattformen geeignet ist.", "Die Methode führt zu einem Netz, aus dem man Teilnetze für verschiedene Ressourcenbeschränkungen (Latenz, Speicher) extrahieren kann, die gute Leistungen erbringen, ohne dass ein erneutes Training erforderlich ist.", "In diesem Beitrag wird versucht, das Problem der Suche nach den besten Architekturen für spezielle ressourcenbeschränkte Einsatzszenarien mit einer auf Vorhersagen basierenden NAS-Methode anzugehen."]} {"source": "A deep generative model is a powerful method of learning a data distribution, which has achieved tremendous success in numerous scenarios.However, it is nontrivial for a single generative model to faithfully capture the distributions of the complex data such as images with complicate structures.In this paper, we propose a novel approach of cascaded boosting for boosting generative models, where meta-models (i.e., weak learners) are cascaded together to produce a stronger model.Any hidden variable meta-model can be leveraged as long as it can support the likelihood evaluation.We derive a decomposable variational lower bound of the boosted model, which allows each meta-model to be trained separately and greedily.We can further improve the learning power of the generative models by combing our cascaded boosting framework with the multiplicative boosting framework.", "target": ["Vorschlag für einen Ansatz zur Verstärkung generativer Modelle durch Kaskadierung von Modellen mit verborgenen Variablen", "In diesem Papier wird ein neuartiger Ansatz des kaskadierten Boostings für generative Modelle vorgeschlagen, 
bei dem jedes Meta-Modell separat und gierig trainiert werden kann."]} {"source": "Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks.Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline.We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena.We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.", "target": ["Wir untersuchen die Satzstruktur in ELMo und verwandten kontextuellen Einbettungsmodellen. Wir stellen fest, dass bestehende Modelle die Syntax effizient kodieren und Hinweise auf weitreichende Abhängigkeiten zeigen, aber nur geringe Verbesserungen bei semantischen Aufgaben bieten.", "Die von den Autoren vorgeschlagene \"Edge Probing\"-Methode konzentriert sich auf die Beziehungen zwischen den Bereichen und nicht auf die einzelnen Wörter, was es den Autoren ermöglicht, syntaktische Konstituenten, Abhängigkeiten, Entitätskennzeichnungen und semantische Rollenbezeichnungen zu untersuchen.", "Bietet neue Einblicke in die Erfassung kontextualisierter Worteinbettungen durch die Zusammenstellung einer Reihe von Edge Probing Aufgaben. "]} {"source": "Deep reinforcement learning has succeeded in sophisticated games such as Atari, Go, etc.Real-world decision making, however, often requires reasoning with partial information extracted from complex visual observations.This paper presents Discriminative Particle Filter Reinforcement Learning (DPFRL), a new reinforcement learning framework for partial and complex observations.DPFRL encodes a differentiable particle filter with learned transition and observation models in a neural network, which allows for reasoning with partial observations over multiple time steps.While a standard particle filter relies on a generative observation model, DPFRL learns a discriminatively parameterized model that is trained directly for decision making.We show that the discriminative parameterization results in significantly improved performance, especially for tasks with complex visual observations, because it circumvents the difficulty of modelling observations explicitly.In most cases, DPFRL outperforms state-of-the-art POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark that we introduce.We further show that DPFRL performs well for visual navigation with real-world data.", "target": ["Wir stellen DPFRL vor, ein Rahmenwerk für Reinforcement Learning unter partiellen und komplexen Beobachtungen mit einem voll differenzierbaren diskriminativen Partikelfilter.", "Es werden Ideen für das Training von DRL-Agenten mit latenten Zustandsvariablen vorgestellt, die als Glaubensverteilung modelliert sind, so dass sie mit teilweise beobachteten Umgebungen umgehen können.", "Diese Arbeit stellt eine prinzipielle Methode für POMDP RL vor: Diskriminatives Partikelfilter Reinforcement Learning, das es erlaubt, mit partiellen Beobachtungen über mehrere Zeitschritte zu argumentieren und dabei 
den Stand der Technik in Benchmarks zu erreichen."]} {"source": "Extending models with auxiliary latent variables is a well-known technique to increase model expressivity.Bachman & Precup (2015); Naesseth et al. (2018); Cremer et al. (2017); Domke & Sheldon (2018) show that Importance Weighted Autoencoders (IWAE) (Burda et al., 2015) can be viewed as extending the variational family with auxiliary latent variables.Similarly, we show that this view encompasses many of the recent developments in variational bounds (Maddison et al., 2017; Naesseth et al., 2018; Le et al., 2017; Yin & Zhou, 2018; Molchanov et al., 2018; Sobolev & Vetrov, 2018).The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model.We develop a generative model analogous to the IWAE bound and empirically show that it outperforms the recently proposed Learned Accept/Reject Sampling algorithm (Bauer & Mnih, 2018), while being substantially easier to implement.Furthermore, we show that this generative process provides new insights on ranking Noise Contrastive Estimation (Jozefowicz et al., 2016; Ma & Collins, 2018) and Contrastive Predictive Coding (Oord et al., 2018).", "target": ["Monte Carlo Ziele werden mit Hilfe der Variationsinferenz für Hilfsvariablen analysiert, was zu einer neuen Analyse von CPC und NCE sowie zu einem neuen generativen Modell führt.", "Vorschlag einer anderen Sichtweise zur Verbesserung der Variationsschranken mit latenten Hilfsvariablenmodellen und Untersuchung der Verwendung dieser Modelle im generativen Modell."]} {"source": "Stochastic Gradient Descent or SGD is the most popular optimization algorithm for large-scale problems.SGD estimates the gradient by uniform sampling with sample size one.There have been several other works that suggest faster epoch wise convergence by using weighted non-uniform sampling for better gradient estimates.Unfortunately, the per-iteration cost of maintaining this adaptive distribution for gradient estimation is more than calculating the full gradient.As a result, the false impression of faster convergence in iterations leads to slower convergence in time, which we call a chicken-and-egg loop.In this paper, we break this barrier by providing the first demonstration of a sampling scheme, which leads to superior gradient estimation, while keeping the sampling cost per iteration similar to that of the uniform sampling.Such an algorithm is possible due to the sampling view of Locality Sensitive Hashing (LSH), which came to light recently.As a consequence of superior and fast estimation, we reduce the running time of all existing gradient descent algorithms.We demonstrate the benefits of our proposal on both SGD and AdaGrad.", "target": ["Wir verbessern den Ablauf aller bestehenden Gradientenabstiegsalgorithmen.", "Die Autoren schlagen vor, stochastische Gradienten aus einer monotonen Funktion, die proportional zu den Gradientenstärken ist, mit Hilfe von LSH abzutasten. 
", "Betrachtet SGD über ein Ziel in der Form einer Summe über Beispiele eines quadratischen Verlustes."]} {"source": "In recent years we have made significant progress identifying computational principles that underlie neural function.While not yet complete, we have sufficient evidence that a synthesis of these ideas could result in an understanding of how neural computation emerges from a combination of innate dynamics and plasticity, and which could potentially be used to construct new AI technologies with unique capabilities.I discuss the relevant principles, the advantages they have for computation, and how they can benefit AI.Limitations of current AI are generally recognized, but fewer people are aware that we understand enough about the brain to immediately offer novel AI formulations.", "target": ["Die Grenzen der gegenwärtigen KI sind allgemein anerkannt, aber weniger Menschen sind sich bewusst, dass wir genug über das Gehirn wissen, um sofort neue KI-Formulierungen anzubieten."]} {"source": "Recent work has demonstrated how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments.We propose question-answering as a general paradigm to decode and understand the representations that such agents develop, applying our method to two recent approaches to predictive modeling – action-conditional CPC (Guo et al., 2018) and SimCore (Gregor et al., 2019).After training agents with these predictive objectives in a visually-rich, 3D environment with an assortment of objects, colors, shapes, and spatial configurations, we probe their internal state representations with a host of synthetic (English) questions, without backpropagating gradients from the question-answering decoder into the agent.The performance of different agents when probed in this way reveals that they learn to encode detailed, and seemingly compositional, information about objects, properties and spatial relations from their physical environment.Our approach is intuitive, i.e. humans can easily interpret the responses of the model as opposed to inspecting continuous vectors, and model-agnostic, i.e. applicable to any modeling approach.By revealing the implicit knowledge of objects, quantities, properties and relations acquired by agents as they learn, question-conditional agent probing can stimulate the design and development of stronger predictive learning objectives.", "target": ["Wir verwenden die Beantwortung von Fragen, um zu bewerten, wie viel Wissen über die Umwelt Agenten durch selbstüberwachte Vorhersage lernen können.", "Er schlägt QA als ein Werkzeug vor, um zu untersuchen, was Agenten in der Welt lernen, und argumentiert, dass dies eine intuitive Methode für Menschen ist, die beliebige Komplexität zulässt.", "Die Autoren schlagen ein Framework vor, um die von prädiktiven Modellen erstellten Repräsentationen zu bewerten, die genügend Informationen enthalten, um Fragen über die Umgebung zu beantworten, auf die sie trainiert wurden. 
Sie zeigen, dass die von SimCore erstellten Repräsentationen genügend Informationen enthielten, damit das LSTM die Fragen genau beantworten konnte."]} {"source": "In most real-world scenarios, training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion.In this paper, we explore a novel yet simple way to alleviate this issue via synthesizing less-frequent classes with adversarial examples of other classes.Surprisingly, we found this counter-intuitive method can effectively learn generalizable features of minority classes by transferring and leveraging the diversity of the majority information.Our experimental results on various types of class-imbalanced datasets in image classification and natural language processing show that the proposed method not only improves the generalization of minority classes significantly compared to other re-sampling or re-weighting methods, but also surpasses other methods of state-of-art level for the class-imbalanced classification.", "target": ["Wir entwickeln eine neue Methode zur unausgewogenen Klassifizierung unter Verwendung von adversarial Beispielen.", "Schlägt ein neues Optimierungsziel vor, das synthetische Stichproben erzeugt, indem es die Mehrheitsklassen anstelle der Minderheitsklassen überabtasten lässt und so das Problem der Überanpassung der Minderheitsklassen löst.", "Die Autoren schlagen vor, die Ungleichgewichtsklassifizierung mit Hilfe von Re-Sampling Methoden anzugehen. Sie zeigen, dass Gegenbeispiele in der Minderheitenklasse helfen würden, ein neues Modell zu trainieren, das besser verallgemeinert."]} {"source": "Active matter consists of active agents which transform energy extracted from surroundings into momentum, producing a variety of collective phenomena.A model, synthetic active system composed of microtubule polymers driven by protein motors spontaneously forms a liquid-crystalline nematic phase.Extensile stress created by the protein motors precipitates continuous buckling and folding of the microtubules creating motile topological defects and turbulent fluid flows.Defect motion is determined by the rheological properties of the material; however, these remain largely unquantified.Measuring defects dynamics can yield fundamental insights into active nematics, a class of materials that include bacterial films and animal cells.Current methods for defect detection lack robustness and precision, and require fine-tuning for datasets with different visual quality. In this study, we applied Deep Learning to train a defect detector to automatically analyze microscopy videos of the microtubule active nematic. Experimental results indicate that our method is robust and accurate.It is expected to significantly increase the amount of video data that can be processed.", "target": ["Eine interessante Anwendung von CNN in Experimenten zur Physik der weichen kondensierten Materie.", "Die Autoren zeigen, dass ein Deep-Learning Ansatz sowohl die Erkennungsgenauigkeit als auch die Erkennungsrate von Defekten in nematischen Flüssigkristallen verbessern kann.", "Anwendung eines bekannten neuronalen Modells (YOLO) zur Erkennung von Bounding Boxes von Objekten in Bildern."]} {"source": "In this work we study locality and compositionality in the context of learning representations for Zero Shot Learning (ZSL). 
In order to well-isolate the importance of these properties in learned representations, we impose the additional constraint that, differently from most recent work in ZSL, no pre-training on different datasets (e.g. ImageNet) is performed.The results of our experiment show how locality, in terms of small parts of the input, and compositionality, i.e. how well can the learned representations be expressed as a function of a smaller vocabulary, are both deeply related to generalization and motivate the focus on more local-aware models in future research directions for representation learning.", "target": ["Eine Analyse der Auswirkungen von Kompositionalität und Lokalität auf das Repräsentationslernen beim Zero-Shot Lernen.", "Schlägt einen Evaluierungsrahmen für ZSL vor, bei dem das Modell nicht vortrainiert werden darf und stattdessen die Modellparameter zufällig initialisiert werden, um besser zu verstehen, was in ZSL passiert."]} {"source": "It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples.In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear.Here we show that distributions of logit differences have a universal functional form.This functional form is independent of architecture, dataset, and training protocol; nor does it change during training.This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation.We show that this universality holds for a broad range of datasets (MNIST, CIFAR10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD).Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness.Finally, we study the effect of network architectures on adversarial sensitivity.To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10.Our resulting architecture is more robust to white \\emph{and} black box attacks compared to previous attempts.", "target": ["Bei allen untersuchten Datensätzen und Modellen weisen die Fehler des Gegners eine ähnliche Potenzgesetzform auf, und die Architektur spielt eine Rolle."]} {"source": "Reinforcement learning (RL) has led to increasingly complex looking behavior in recent years.However, such complexity can be misleading and hides over-fitting.We find that visual representations may be a useful metric of complexity, and both correlates well objective optimization and causally effects reward optimization.We then propose curious representation learning (CRL) which allows us to use better visual representation learning algorithms to correspondingly increase visual representation in policy through an intrinsic objective on both simulated environments and transfer to real images.Finally, we show better visual representations induced by CRL allows us to obtain better performance on Atari without any reward than other curiosity objectives.", "target": ["Wir stellen eine Formulierung von Neugier als ein visuelles Repräsentationslernproblem vor und zeigen, dass sie gute visuelle Repräsentationen in Agenten ermöglicht.", "In diesem Beitrag wird neugierbasiertes RL-Training als Lernen eines visuellen 
Repräsentationsmodells formuliert, wobei argumentiert wird, dass die Konzentration auf bessere LR und die Maximierung des Modellverlusts für neuartige Szenen eine bessere Gesamtleistung ergibt."]} {"source": "This paper introduces the task of semantic instance completion: from an incomplete RGB-D scan of a scene, we aim to detect the individual object instances comprising the scene and infer their complete object geometry.This enables a semantically meaningful decomposition of a scanned scene into individual, complete 3D objects, including hidden and unobserved object parts.This will open up new possibilities for interactions with object in a scene, for instance for virtual or robotic agents.To address this task, we propose 3D-SIC, a new data-driven approach that jointly detects object instances and predicts their completed geometry.The core idea of 3D-SIC is a novel end-to-end 3D neural network architecture that leverages joint color and geometry feature learning.The fully-convolutional nature of our 3D network enables efficient inference of semantic instance completion for 3D scans at scale of large indoor environments in a single forward pass.In a series evaluation, we evaluate on both real and synthetic scan benchmark data, where we outperform state-of-the-art approaches by over 15 in mAP@0.5 on ScanNet, and over 18 in mAP@0.5 on SUNCG.", "target": ["Aus einem unvollständigen RGB-D-Scan einer Szene wollen wir die einzelnen Objektinstanzen, aus denen die Szene besteht, erkennen und ihre vollständige Objektgeometrie ableiten.", "Vorschlagen einer durchgängigen 3D CNN-Struktur, die Farbmerkmale und 3D-Merkmale kombiniert, um die fehlende 3D-Struktur einer Szene aus RGB-D-Scans vorherzusagen.", "Die Autoren schlagen ein neuartiges durchgängiges 3D Convolutional Network vor, das die semantische 3D Instanzvervollständigung in Form von Objektbegrenzungsrahmen, Klassenbezeichnungen und vollständiger Objektgeometrie vorhersagt."]} {"source": "Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter.Here we tackle the more generic problem of semantic style transfer: given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains.We introduce XGAN (\"Cross-GAN\"), a dual adversarial autoencoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. 
We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the model to preserve semantics in the learned embedding space.We report promising qualitative results for the task of face-to-cartoon translation.The cartoon dataset we collected for this purpose will also be released as a new benchmark for semantic style transfer.", "target": ["XGAN ist ein unüberwachtes Modell für die Bild-zu-Bild-Übersetzung auf Merkmalsebene, das auf semantische Stilübertragungsprobleme wie die Gesicht-zu-Karikatur-Aufgabe angewendet wird, für die wir einen neuen Datensatz vorstellen.", "In diesem Beitrag wird ein neues GAN-basiertes Modell für die ungepaarte Bild-zu-Bild-Übersetzung vorgeschlagen, das dem DTN ähnelt."]} {"source": "Training neural networks on large datasets can be accelerated by distributing the workload over a network of machines.As datasets grow ever larger, networks of hundreds or thousands of machines become economically viable.The time cost of communicating gradients limits the effectiveness of using such large machine counts, as may the increased chance of network faults.We explore a particularly simple algorithm for robust, communication-efficient learning---signSGD.Workers transmit only the sign of their gradient vector to a server, and the overall update is decided by a majority vote.This algorithm uses 32x less communication per iteration than full-precision, distributed SGD.Under natural conditions verified by experiment, we prove that signSGD converges in the large and mini-batch settings, establishing convergence for a parameter regime of Adam as a byproduct.Aggregating sign gradients by majority vote means that no individual worker has too much power.We prove that unlike SGD, majority vote is robust when up to 50% of workers behave adversarially.The class of adversaries we consider includes as special cases those that invert or randomise their gradient estimate.On the practical side, we built our distributed training system in Pytorch.Benchmarking against the state of the art collective communications library (NCCL), our framework---with the parameter server housed entirely on one machine---led to a 25% reduction in time for training resnet50 on Imagenet when using 15 AWS p3.2xlarge machines.", "target": ["Die Arbeiter senden Gradientenzeichen an den Server, und die Aktualisierung wird durch Mehrheitsabstimmung entschieden. 
Wir zeigen, dass dieser Algorithmus konvergent, kommunikationseffizient und fehlertolerant ist, sowohl in der Theorie als auch in der Praxis.", "Stellt eine verteilte Implementierung von signSGD mit Mehrheitsentscheidung als Aggregation vor."]} {"source": "Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells.One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drugs applied at different dosages can be quantified.The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images.An important known issue for such methods is separating relevant biological signal from nuisance variation.For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for cells from a different week, despite having identical drug compounds applied in both cases.In this case, the particular batch a set of experiments were conducted in constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information (e.g. drug effects).We develop a general framework for adjusting the image embeddings in order to `forget' domain-specific information while preserving relevant biological information.To do this, we minimize a loss function based on distances between marginal distributions (such as the Wasserstein distance) of embeddings across domains for each replicated treatment.For the dataset presented, the replicated treatment is the negative control.We find that for our transformed embeddings (1) the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal (2) less domain-specific information is present.", "target": ["Wir korrigieren unerwünschte Abweichungen bei der Einbettung von Bildern in verschiedenen Bereichen, wobei nur relevante Informationen erhalten bleiben.", "Erörtert eine Methode zur Anpassung von Bildeinbettungen, um technische Variationen von biologischen Signalen zu trennen.", "Die Autoren stellen eine Methode vor, um domänenspezifische Informationen zu entfernen und gleichzeitig die relevanten biologischen Informationen zu erhalten, indem sie ein Netzwerk trainieren, das die Wasserstein-Distanz zwischen den Distributionen minimiert."]} {"source": "This paper presents a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size.MINE is back-propable and we prove that it is strongly consistent.We illustrate a handful of applications in which MINE is successfully applied to enhance the property of generative models in both unsupervised and supervised settings.We apply our framework to estimate the information bottleneck, and apply it in tasks related to supervised classification problems.Our results demonstrate substantial added flexibility and improvement in these settings.", "target": ["Ein in Stichprobengröße und Dimensionen skalierbarer Schätzer der gegenseitigen Information."]} {"source": "Reinforcement learning methods have recently achieved impressive results on a wide range of control problems.However, especially with complex inputs, they still require an extensive amount of training data in order to converge to a meaningful solution.This limitation largely prohibits their usage for 
complex input spaces such as video signals, and it is still impossible to use it for a number of complex problems in a real world environments, including many of those for video based control.Supervised learning, on the contrary, is capable of learning on a relatively small number of samples, however it does not take into account reward-based control policies and is not capable to provide independent control policies. In this article we propose a model-free control method, which uses a combination of reinforcement and supervised learning for autonomous control and paves the way towards policy based control in real world environments.We use SpeedDreams/TORCS video game to demonstrate that our approach requires much less samples (hundreds of thousands against millions or tens of millions) comparing to the state-of-the-art reinforcement learning techniques on similar data, and at the same time overcomes both supervised and reinforcement learning approaches in terms of quality.Additionally, we demonstrate the applicability of the method to MuJoCo control problems.", "target": ["Die neue Kombination aus verstärktem und überwachtem Lernen, die die Anzahl der erforderlichen Proben für das Training auf Videos drastisch reduziert.", "In dieser Arbeit wird vorgeschlagen, gelabelte kontrollierte Daten zu nutzen, um das verstärkungsbasierte Lernen einer Kontrollpolitik zu beschleunigen."]} {"source": "A typical experiment to study cognitive function is to train animals to perform tasks, while the researcher records the electrical activity of the animals neurons.The main obstacle faced, when using this type of electrophysiological experiment to uncover the circuit mechanisms underlying complex behaviors, is our incomplete access to relevant circuits in the brain.One promising approach is to model neural circuits using an artificial neural network (ANN), which can provide complete access to the “neural circuits” responsible for a behavior.More recently, reinforcement learning models have been adopted to understand the functions of cortico-basal ganglia circuits as reward-based learning has been found in mammalian brain.In this paper, we propose a Biologically-plausible Actor-Critic with Episodic Memory (B-ACEM) framework to model a prefrontal cortex-basal ganglia-hippocampus (PFC-BG) circuit, which is verified to capture the behavioral findings from a well-known perceptual decision-making task, i.e., random dots motion discrimination.This B-ACEM framework links neural computation to behaviors, on which we can explore how episodic memory should be considered to govern future decision.Experiments are conducted using different settings of the episodic memory and results show that all patterns of episodic memories can speed up learning.In particular, salient events are prioritized to propagate reward information and guide decisions.Our B-ACEM framework and the built-on experiments give inspirations to both designs for more standard decision-making models in biological system and a more biologically-plausible ANN.", "target": ["Schnelles Lernen über das episodische Gedächtnis, verifiziert durch einen biologisch plausiblen Rahmen für den präfrontalen Kortex-Basalganglien-Hippocampus-Schaltkreis (PFC-BG)."]} {"source": "Understanding the representational power of Deep Neural Networks (DNNs) and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute, has been an important yet challenging question in deep learning and approximation theory.In a seminal 
paper, Telgarsky highlighted the benefits of depth by presenting a family of functions (based on simple triangular waves) for which DNNs achieve zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error.Even though Telgarsky’s work reveals the limitations of shallow neural networks, it doesn’t inform us on why these functions are difficult to represent and in fact he states it as a tantalizing open question to characterize those functions that cannot be well-approximated by smaller depths.In this work, we point to a new connection between DNNs expressivity and Sharkovsky’s Theorem from dynamical systems, that enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points (a fixed point is a point of period 1).Motivated by our observation that the triangle waves used in Telgarsky’s work contain points of period 3 – a period that is special in that it implies chaotic behaviour based on the celebrated result by Li-Yorke – we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth.Technically, the crux of our approach is based on an eigenvalue analysis of the dynamical systems associated with such functions.", "target": ["In dieser Arbeit zeigen wir eine neue Verbindung zwischen der Ausdruckskraft von DNNs und Sharkovskys Theorem aus dynamischen Systemen auf, die es uns ermöglicht, den Tiefen-Breiten Kompromiss von ReLU-Netzen zu charakterisieren.", "Zeigt, wie die Ausdruckskraft von NN von ihrer Tiefe und Breite abhängt, und fördert das Verständnis für den Nutzen tiefer Netze zur Darstellung bestimmter Funktionsklassen.", "Die Autoren leiten mit Hilfe der Analyse dynamischer Systeme Bedingungen für einen Tiefen-Breiten Kompromiss ab, wenn ReLu-Netze in der Lage sind, periodische Funktionen darzustellen."]} {"source": "We investigate low-bit quantization to reduce computational cost of deep neural network (DNN) based keyword spotting (KWS).We propose approaches to further reduce quantization bits via integrating quantization into keyword spotting model training, which we refer to as quantization-aware training.Our experimental results on large dataset indicate that quantization-aware training can recover performance models quantized to lower bits representations.By combining quantization-aware training and weight matrix factorization, we are able to significantly reduce model size and computation for small-footprint keyword spotting, while maintaining performance.", "target": ["Wir untersuchen quantisierungsbewusstes Training in sehr niedrig quantisierten Keyword-Spottern, um die Kosten für das On-Device Keyword-Spotting zu reduzieren.", "In diesem Beitrag wird eine Kombination aus Low-Rank-Decomposition und Quantization-Ansatz zur Komprimierung von DNN-Modellen für das Keyword-Spotting vorgeschlagen."]} {"source": "Single-cell RNA-sequencing (scRNA-seq) is a powerful tool for analyzing biological systems.However, due to biological and technical noise, quantifying the effects of multiple experimental conditions presents an analytical challenge.To overcome this challenge, we developed MELD: Manifold Enhancement of Latent Dimensions.MELD leverages tools from graph signal processing to learn a latent dimension within the data scoring the prototypicality of each datapoint with respect to experimental or control conditions.We call this dimension the Enhanced 
Experimental Signal (EES).MELD learns the EES by filtering the noisy categorical experimental label in the graph frequency domain to recover a smooth signal with continuous values.This method can be used to identify signature genes that vary between conditions and identify which cell types are most affected by a given perturbation.We demonstrate the advantages of MELD analysis in two biological datasets, including T-cell activation in response to antibody-coated beads and treatment of human pancreatic islet cells with interferon gamma.", "target": ["Ein neuartiger Rahmen für die Graphsignalverarbeitung zur Quantifizierung der Auswirkungen von experimentellen Störungen in biomedizinischen Einzelzelldaten.", "In diesem Beitrag werden mehrere Methoden zur Verarbeitung von Versuchsergebnissen zu biologischen Zellen vorgestellt und ein MELD-Algorithmus vorgeschlagen, der harte Gruppenzuordnungen auf weiche Zuordnungen abbildet, so dass relevante Gruppen von Zellen geclustert werden können."]} {"source": "Models of user behavior are critical inputs in many prescriptive settings and can be viewed as decision rules that transform state information available to the user into actions.Gaussian processes (GPs), as well as nonlinear extensions thereof, provide a flexible framework to learn user models in conjunction with approximate Bayesian inference.However, the resulting models may not be interpretable in general.We propose decision-rule GPs (DRGPs) that apply GPs in a transformed space defined by decision rules that have immediate interpretability to practitioners.We illustrate this modeling tool on a real application and show that structural variational inference techniques can be used with DRGPs.We find that DRGPs outperform the direct use of GPs in terms of out-of-sample performance.", "target": ["Wir schlagen eine Klasse von Benutzermodellen vor, die auf der Anwendung von Gaußschen Prozessen auf einen durch Entscheidungsregeln definierten transformierten Raum basieren."]} {"source": "While Bayesian optimization (BO) has achieved great success in optimizing expensive-to-evaluate black-box functions, especially tuning hyperparameters of neural networks, methods such as random search (Li et al., 2016) and multi-fidelity BO (e.g. Klein et al. (2017)) that exploit cheap approximations, e.g. training on a smaller training data or with fewer iterations, can outperform standard BO approaches that use only full-fidelity observations.In this paper, we propose a novel Bayesian optimization algorithm, the continuous-fidelity knowledge gradient (cfKG) method, that can be used when fidelity is controlled by one or more continuous settings such as training data size and the number of training iterations.cfKG characterizes the value of the information gained by sampling a point at a given fidelity, choosing to sample at the point and fidelity with the largest value per unit cost.Furthermore, cfKG can be generalized, following Wu et al. (2017), to settings where derivatives are available in the optimization process, e.g. 
large-scale kernel learning, and where more than one point can be evaluated simultaneously.Numerical experiments show that cfKG outperforms state-of-art algorithms when optimizing synthetic functions, tuning convolutional neural networks (CNNs) on CIFAR-10 and SVHN, and in large-scale kernel learning.", "target": ["Wir schlagen einen Bayes-optimalen Bayes'schen Optimierungsalgorithmus für das Tuning von Hyperparametern vor, der billige Approximationen ausnutzt.", "Untersucht die Optimierung von Hyperparametern durch Bayes'sche Optimierung unter Verwendung des Knowledge Gradient Frameworks und ermöglicht dem Bayes'schen Optimierer die Abstimmung von Treue und Kosten."]} {"source": "Neural networks trained only to optimize for training accuracy can often be fooled by adversarial examples --- slightly perturbed inputs misclassified with high confidence.Verification of networks enables us to gauge their vulnerability to such adversarial examples.We formulate verification of piecewise-linear neural networks as a mixed integer program.On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art.We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available.The computational speedup allows us to verify properties on convolutional and residual networks with over 100,000 ReLUs --- several orders of magnitude more than networks previously verified by any complete verifier.In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded l-∞ norm ε=0.1: for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness to norm-bounded perturbations for the remainder.Across all robust training procedures and network architectures considered, and for both the MNIST and CIFAR-10 datasets, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.", "target": ["Wir verifizieren die Robustheit von tiefen neuronalen Modellen mit über 100.000 ReLUs, wobei wir mehr Beispiele als der Stand der Technik zertifizieren und mehr negative Beispiele finden als ein starker Angriff erster Ordnung.", "Führt eine sorgfältige Studie über gemischt-ganzzahlige lineare Programmieransätze zur Überprüfung der Robustheit neuronaler Netze gegenüber Störungen durch Gegner durch und schlägt drei Verbesserungen der MILP-Formulierungen zur Überprüfung neuronaler Netze vor."]} {"source": "Uncertainty estimation is an essential step in the evaluation of the robustness for deep learning models in computer vision, especially when applied in risk-sensitive areas.However, most state-of-the-art deep learning models either fail to obtain uncertainty estimation or need significant modification (e.g., formulating a proper Bayesian treatment) to obtain it.None of the previous methods are able to take an arbitrary model off the shelf and generate uncertainty estimation without retraining or redesigning it.To address this gap, we perform the first systematic exploration into training-free uncertainty estimation. 
We propose three simple and scalable methods to analyze the variance of output from a trained network under tolerable perturbations: infer-transformation, infer-noise, and infer-dropout.They operate solely during inference, without the need to re-train, re-design, or fine-tune the model, as typically required by other state-of-the-art uncertainty estimation methods.Surprisingly, even without involving such perturbations in training, our methods produce comparable or even better uncertainty estimation when compared to other training-required state-of-the-art methods.Last but not least, we demonstrate that the uncertainty from our proposed methods can be used to improve the neural network training.", "target": ["Eine Reihe von Methoden, um eine Unsicherheitsabschätzung eines beliebigen Modells zu erhalten, ohne es neu zu entwerfen, neu zu trainieren oder zu fine-tunen.", "Beschreibt mehrere Ansätze zur Messung der Unsicherheit in beliebigen neuronalen Netzen, wenn während des Trainings keine Verzerrung auftritt."]} {"source": "Capturing spatiotemporal dynamics is an essential topic in video recognition.In this paper, we present learnable higher-order operation as a generic family of building blocks for capturing higher-order correlations from high dimensional input video space.We prove that several successful architectures for visual classification tasks are in the family of higher-order neural networks, theoretical and experimental analysis demonstrates their underlying mechanism is higher-order. On the task of video recognition, even using RGB only without fine-tuning with other video datasets, our higher-order models can achieve results on par with or better than the existing state-of-the-art methods on both Something-Something (V1 and V2) and Charades datasets.", "target": ["Vorgeschlagene Operation höherer Ordnung für kontextbezogenes Lernen.", "Es wird ein neuer 3D Convolutional Block vorgeschlagen, der den Video-Input mit seinem Kontext verarbeitet, basierend auf der Annahme, dass relevanter Kontext um das Objekt des Bildes herum vorhanden ist."]} {"source": "Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters.To understand consistency regularization, we conceptually explore how loss geometry interacts with training procedures.The consistency loss dramatically improves generalization performance over supervised-only training; however, we show that SGD struggles to converge on the consistency loss and continues to make large steps that lead to changes in predictions on the test data.Motivated by these observations, we propose to train consistency-based methods with Stochastic Weight Averaging (SWA), a recent approach which averages weights along the trajectory of SGD with a modified learning rate schedule.We also propose fast-SWA, which further accelerates convergence by averaging multiple points within each cycle of a cyclical learning rate schedule.With weight averaging, we achieve the best known semi-supervised results on CIFAR-10 and CIFAR-100, over many different quantities of labeled training data.For example, we achieve 5.0% error on CIFAR-10 with only 4000 labels, compared to the previous best result in the literature of 6.3%.", "target": ["Konsistenzbasierte Modelle für halbüberwachtes Lernen konvergieren nicht zu einem einzigen Punkt, sondern erkunden weiterhin eine Reihe plausibler Lösungen am Rande eines flachen Bereichs. 
Die Mittelwertbildung trägt zur Verbesserung der Generalisierungsleistung bei.", "Der Artikel schlägt vor, Stochastic Weight Averaging auf den Kontext des halbüberwachten Lernens anzuwenden. Es wird argumentiert, dass die halbüberwachten MT/Pi-Modelle besonders gut für SWA geeignet sind und schlägt schnelles SWA vor, um das Training zu beschleunigen."]} {"source": "In this paper, we find that by designing a novel loss function entitled, ''tracking loss'', Convolutional Neural Network (CNN) based object detectors can be successfully converted to well-performed visual trackers without any extra computational cost.This property is preferable to visual tracking where annotated video sequences for training are always absent, because rich features learned by detectors from still images could be utilized by dynamic trackers.It also avoids extra machinery such as feature engineering and feature aggregation proposed in previous studies.Tracking loss achieves this property by exploiting the internal structure of feature maps within the detection network and treating different feature points discriminatively.Such structure allows us to simultaneously consider discrimination quality and bounding box accuracy which is found to be crucial to the success.We also propose a network compression method to accelerate tracking speed without performance reduction.That also verifies tracking loss will remain highly effective even if the network is drastically compressed.Furthermore, if we employ a carefully designed tracking loss ensemble, the tracker would be much more robust and accurate.Evaluation results show that our trackers (including the ensemble tracker and two baseline trackers), outperform all state-of-the-art methods on VOT 2016 Challenge in terms of Expected Average Overlap (EAO) and robustness.We will make the code publicly available.", "target": ["Wir konvertieren erfolgreich einen populären Detektor RPN in einen gut funktionierenden Tracker aus der Sicht der Verlustfunktion."]} {"source": "We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code.The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated.In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program.Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs.Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete.Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space.We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs.Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.", "target": ["Eine 
neuronale Architektur zur Bewertung und Einstufung von Programmreparaturkandidaten, um semantische Programmreparaturen statisch ohne Zugriff auf Unit-Tests durchzuführen.", "Stellt eine neuronale Netzarchitektur vor, die aus den Teilen share, specialize und compete besteht, um Code in vier Fällen zu reparieren."]} {"source": "Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019).Such a dilemma is shown to be rooted in the inherently higher sample complexity (Schmidt et al., 2018) and/or model capacity (Nakkiran, 2019), for learning a high-accuracy and robust classifier.In view of that, given a classification task, growing the model capacity appears to help draw a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications.Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins?This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a “sweet point\" in co-optimizing model accuracy, robustness, and efficiency.Our proposed solution, dubbed Robust Dynamic Inference Networks (RDI-Nets), allows for each input (either clean or adversarial) to adaptively choose one of the multiple output layers (early branches or the final one) to output its prediction.That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematical investigation.We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over 30% computational savings, compared to the defended original models.", "target": ["Ist es möglich, die Genauigkeit, Robustheit und Effizienz von Modellen mitzugestalten, um ihre dreifachen Ziele zu erreichen? 
Ja!", "Nutzt input-adaptive Mehrfach-Frühausgänge für den Bereich des adversarial Angriffs und der Verteidigung, indem es die durchschnittliche Inferenzkomplexität reduziert, ohne der Annahme einer größeren Kapazität zu widersprechen."]} {"source": "Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret.Especially, little is known about how they represent language in their intermediate layers.In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns.In order to quantitatively analyze such intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text.We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.", "target": ["Wir zeigen, dass einzelne Einheiten in CNN Repräsentationen, die in NLP Aufgaben erlernt werden, selektiv auf bestimmte natürliche Sprachkonzepte reagieren.", "Verwendet grammatikalische Einheiten natürlicher Sprache, die Bedeutungen bewahren, um zu zeigen, dass die Einheiten von tiefen CNNs, die in NLP Aufgaben gelernt wurden, als Konzeptdetektor für natürliche Sprache fungieren können."]} {"source": "We study the problem of building models that disentangle independent factors of variation.Such models encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis.As data we use a weakly labeled training set, where labels indicate what single factor has changed between two data samples, although the relative value of the change is unknown.This labeling is of particular interest as it may be readily available without annotation costs.We introduce an autoencoder model and train it through constraints on image pairs and triplets.We show the role of feature dimensionality and adversarial training theoretically and experimentally.We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used.The numerical value of a factor has different meaning in different reference frames.When the reference depends on other factors, transferring that factor becomes ambiguous.We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but show also cases when the reference ambiguity occurs.", "target": ["Es handelt sich dabei um eine überwiegend theoretische Arbeit, die die Herausforderungen bei der Entflechtung von Variationsfaktoren unter Verwendung von Autoencodern und GAN beschreibt.", "Dieser Beitrag befasst sich mit der Entflechtung von Variationsfaktoren in Bildern, zeigt, dass man im Allgemeinen ohne weitere Annahmen nicht zwischen zwei verschiedenen Variationsfaktoren unterscheiden kann, und schlägt eine neuartige AE+GAN-Architektur vor, um zu versuchen, die Variationsfaktoren zu entflechten.", "Diese Arbeit untersucht die Herausforderungen bei der Entflechtung unabhängiger Variationsfaktoren bei schwach markierten Daten und führt den Begriff der Referenzmehrdeutigkeit für die Zuordnung von Datenpunkten ein."]} {"source": "In information retrieval, learning to rank constructs a 
machine-based ranking model which given a query, sorts the search results by their degree of relevance or importance to the query.Neural networks have been successfully applied to this problem, and in this paper, we propose an attention-based deep neural network which better incorporates different embeddings of the queries and search results with an attention-based mechanism.This model also applies a decoder mechanism to learn the ranks of the search results in a listwise fashion.The embeddings are trained with convolutional neural networks or the word2vec model.We demonstrate the performance of this model with image retrieval and text querying data sets.", "target": ["Lernen, mit mehreren Einbettungen und Aufmerksamkeiten zu rangieren.", "Es wird vorgeschlagen, die Aufmerksamkeit zu nutzen, um mehrere Eingabedarstellungen sowohl für die Suchanfrage als auch für die Suchergebnisse in der Aufgabe \"Lernen zu ranken\" zu kombinieren."]} {"source": "Computational neuroscience aims to fit reliable models of in vivo neural activity and interpret them as abstract computations.Recent work has shown that functional diversity of neurons may be limited to that of relatively few cell types; other work has shown that incorporating constraints into artificial neural networks (ANNs) can improve their ability to mimic neural data.Here we develop an algorithm that takes as input recordings of neural activity and returns clusters of neurons by cell type and models of neural activity constrained by these clusters.The resulting models are both more predictive and more interpretable, revealing the contributions of functional cell types to neural computation and ultimately informing the design of future ANNs.", "target": ["Wir haben einen Algorithmus entwickelt, der als Eingabe Aufzeichnungen neuronaler Aktivität nimmt und Cluster von Neuronen nach Zelltyp und Modelle neuronaler Aktivität liefert, die durch diese Cluster eingeschränkt werden."]} {"source": "Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs.In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving.Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm).As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best-suited for such objectives, and validate this claim empirically.We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks---showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.", "target": ["Wir überwachen neuronale Graphen-Netzwerke, um Zwischenergebnisse und schrittweise Ausgaben klassischer Graphen-Algorithmen zu imitieren, und gewinnen dabei sehr günstige Erkenntnisse.", "Schlägt vor, neuronale Netze so zu trainieren, dass sie Graphenalgorithmen imitieren, indem sie Primitive und Unterroutinen lernen und nicht die endgültige Ausgabe."]} {"source": "Prospection is an important part of how humans come up with new task plans, but has not been explored in depth in robotics.Predicting multiple task-level 
is a challenging problem that involves capturing both task semantics and continuous variability over the state of the world.Ideally, we would combine the ability of machine learning to leverage big data for learning the semantics of a task, while using techniques from task planning to reliably generalize to new environment.In this work, we propose a method for learning a model encoding just such a representation for task planning.We learn a neural net that encodes the k most likely outcomes from high level actions from a given world.Our approach creates comprehensible task plans that allow us to predict changes to the environment many time steps into the future.We demonstrate this approach via application to a stacking task in a cluttered environment, where the robot must select between different colored blocks while avoiding obstacles, in order to perform a task.We also show results on a simple navigation task.Our algorithm generates realistic image and pose predictions at multiple points in a given task.", "target": ["Wir beschreiben eine Architektur zur Generierung verschiedener Hypothesen für Zwischenziele bei Robotermanipulationsaufgaben.", "Bewertet die Qualität eines vorgeschlagenen generativen Vorhersagemodells zur Erstellung von Plänen für die Roboterausführung.", "In dieser Arbeit wird eine Methode zum Erlernen einer hochrangigen Übergangsfunktion vorgeschlagen, die für die Aufgabenplanung nützlich ist."]} {"source": "Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks.While adaptive gradient methods theory is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear.In this paper, we aim at bridging this gap from both theoretical and empirical perspectives.First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed in~\\citep{daskalakis2017training} for solving a class of non-convex non-concave min-max problem and establish $O(\\epsilon^{-4})$ complexity for finding $\\epsilon$-first-order stationary point, in which the algorithm only requires invoking one stochastic first-order oracle while enjoying state-of-the-art iteration complexity achieved by stochastic extragradient method by~\\citep{iusem2017extragradient}.Then we propose an adaptive variant of OSG named Optimistic Adagrad (OAdagrad) and reveal an \\emph{improved} adaptive complexity $\\widetilde{O}\\left(\\epsilon^{-\\frac{2}{1-\\alpha}}\\right)$~\\footnote{Here $\\widetilde{O}(\\cdot)$ compresses a logarithmic factor of $\\epsilon$.}, where $\\alpha$ characterizes the growth rate of the cumulative stochastic gradient and $0\\leq \\alpha\\leq 1/2$.To the best of our knowledge, this is the first work for establishing adaptive complexity in non-convex non-concave min-max optimization.Empirically, our experiments show that indeed adaptive gradient algorithms outperform their non-adaptive counterparts in GAN training.Moreover, this observation can be explained by the slow growth rate of the cumulative stochastic gradient, as observed empirically.", "target": ["Dieser Artikel bietet eine neuartige Analyse von adaptiven Gradientenalgorithmen zur Lösung von nicht-konvexen, nicht-konkaven Min-Max-Problemen als GANs und erklärt den Grund, warum adaptive Gradientenmethoden ihre nicht-adaptiven Gegenstücke durch empirische Studien übertreffen.", "Entwickelt Algorithmen für die Lösung von Variationsungleichungen im stochastischen Umfeld 
und schlägt eine Variante der Extragradientenmethode vor."]} {"source": "We consider the problem of unsupervised learning of a low dimensional, interpretable, latent state of a video containing a moving object.The problem of distilling dynamics from pixels has been extensively considered through the lens of graphical/state space models that exploit Markov structure for cheap computation and structured graphical model priors for enforcing interpretability on latent representations.We take a step towards extending these approaches by discarding the Markov structure; instead, repurposing the recently proposed Gaussian Process Prior Variational Autoencoder for learning sophisticated latent trajectories.We describe the model and perform experiments on a synthetic dataset and see that the model reliably reconstructs smooth dynamics exhibiting U-turns and loops.We also observe that this model may be trained without any beta-annealing or freeze-thaw of training parameters.Training is performed purely end-to-end on the unmodified evidence lower bound objective.This is in contrast to previous works, albeit for slightly different use cases, where application specific training tricks are often required.", "target": ["Wir lernen sophisticated Trajektorien eines Objekts rein aus Pixeln mit einem Spielzeug Videodatensatz durch die Verwendung einer VAE-Struktur mit einem Gauß Prozess Prior."]} {"source": "Dreams and our ability to recall them are among the most puzzling questions in sleep research.Specifically, putative differences in brain network dynamics between individuals with high versus low dream recall rates, are still poorly understood.In this study, we addressed this question as a classification problem where we applied deep convolutional networks (CNN) to sleep EEG recordings to predict whether subjects belonged to the high or low dream recall group (HDR and LDR resp.).Our model achieves significant accuracy levels across all the sleep stages, thereby indicating subtle signatures of dream recall in the sleep microstructure.We also visualized the feature space to inspect the subject-specificity of the learned features, thus ensuring that the network captured population level differences.Beyond being the first study to apply deep learning to sleep EEG in order to classify HDR and LDR, guided backpropagation allowed us to visualize the most discriminant features in each sleep stage.The significance of these findings and future directions are discussed.", "target": ["Wir untersuchen die neuronalen Grundlagen der Traumerinnerung mit Hilfe von Convolutional Neural Networks und Techniken zur Visualisierung von Merkmalen, wie tSNE und Guided Backpropagation."]} {"source": "This paper considers multi-agent reinforcement learning (MARL) in networked system control.Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors.We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent.Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL.Based on experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control, an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols
in both learning efficiency and control performance.", "target": ["In diesem Papier werden eine neue Formulierung und ein neues Kommunikationsprotokoll für vernetzte Multi-Agenten Kontrollprobleme vorgeschlagen", "Befasst sich mit N-MARLs, bei denen die Agenten ihre Politik nur auf der Grundlage von Nachrichten von benachbarten Knoten aktualisieren, und zeigt, dass die Einführung eines räumlichen Diskontierungsfaktors das Lernen stabilisiert."]} {"source": "Variational Bayesian Inference is a popular methodology for approximating posterior distributions over Bayesian neural network weights.Recent work developing this class of methods has explored ever richer parameterizations of the approximate posterior in the hope of improving performance.In contrast, here we share a curious experimental finding that suggests instead restricting the variational distribution to a more compact parameterization.For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibit strong low-rank structure after convergence.This means that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the models' performance.Furthermore, we find that such factorized parameterizations improve the signal-to-noise ratio of stochastic gradient estimates of the variational lower bound, resulting in faster convergence.", "target": ["Die VB mit mittlerem Feld verwendet doppelt so viele Parameter; wir binden die Varianzparameter in der VB mit mittlerem Feld ohne Verlust an ELBO, wodurch wir an Geschwindigkeit und geringeren Varianzgradienten gewinnen."]} {"source": "Aspect extraction in online product reviews is a key task in sentiment analysis and opinion mining.Training supervised neural networks for aspect extraction is not possible when ground truth aspect labels are not available, while the unsupervised neural topic models fail to capture the particular aspects of interest.In this work, we propose a weakly supervised approach for training neural networks for aspect extraction in cases where only a small set of seed words, i.e., keywords that describe an aspect, are available.Our main contributions are as follows.First, we show that current weakly supervised networks fail to leverage the predictive power of the available seed words by comparing them to a simple bag-of-words classifier. 
Second, we propose a distillation approach for aspect extraction where the seed words are considered by the bag-of-words classifier (teacher) and distilled to the parameters of a neural network (student).Third, we show that regularization encourages the student to consider non-seed words for classification and, as a result, the student outperforms the teacher, which only considers the seed words.Finally, we empirically show that our proposed distillation approach outperforms (by up to 34.4% in F1 score) previous weakly supervised approaches for aspect extraction in six domains of Amazon product reviews.", "target": ["Wir nutzen einige wenige Schlüsselwörter als schwache Überwachung für das Training neuronaler Netze zur Extraktion von Aspekten.", "Erörtert eine Variante der Wissensdestillation, bei der ein \"Lehrer\" auf der Grundlage eines Bag-of-Words-Klassifikators mit Startwörtern und ein \"Schüler\", der ein auf Einbettung basierendes neuronales Netz ist, verwendet werden."]} {"source": "Forming perceptual groups and individuating objects in visual scenes is an essential step towards visual intelligence.This ability is thought to arise in the brain from computations implemented by bottom-up, horizontal, and top-down connections between neurons.However, the relative contributions of these connections to perceptual grouping are poorly understood.We address this question by systematically evaluating neural network architectures featuring combinations of these connections on two synthetic visual tasks, which stress low-level \"Gestalt\" vs. high-level object cues for perceptual grouping.We show that increasing the difficulty of either task strains learning for networks that rely solely on bottom-up processing.Horizontal connections resolve this limitation on tasks with Gestalt cues by supporting incremental spatial propagation of activities, whereas top-down connections rescue learning on tasks with high-level object cues by modifying coarse predictions about the position of the target object.Our findings dissociate the computational roles of bottom-up, horizontal and top-down connectivity, and demonstrate how a model featuring all of these interactions can more flexibly learn to form perceptual groups.", "target": ["Horizontale und von oben nach unten gerichtete Rückkopplungsverbindungen sind für komplementäre Wahrnehmungsgruppierungsstrategien in biologischen und rekurrenten Sehsystemen verantwortlich.", "Unter Verwendung neuronaler Netze als Computermodell des Gehirns wird die Effizienz verschiedener Strategien zur Lösung von zwei visuellen Aufgaben untersucht."]} {"source": "Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images.However, their application in the audio domain has received limited attention,and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech.To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech.Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes.The discriminators analyse the audio both in terms of general realism, as well as how well the audio corresponds to the utterance that should be pronounced. 
To measure the performance of GAN-TTS, we employ both subjective human evaluation (MOS - Mean Opinion Score), as well as novel quantitative metrics (Fréchet DeepSpeech Distance and Kernel DeepSpeech Distance), which we find to be well correlated with MOS.We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to the state-of-the-art models, and unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator.Listen to GAN-TTS reading this abstract at http://tiny.cc/gantts.", "target": ["Wir stellen GAN-TTS vor, ein Generatives Adversariales Netzwerk für Text-to-Speech, das einen Mean Opinion Score (MOS) von 4,2 erreicht.", "Löst die GAN-Herausforderung bei der Synthese von Rohwellenformen und beginnt, die bestehende Leistungslücke zwischen autoregressiven Modellen und GANs für Rohaudios zu schließen."]} {"source": "This paper proposes a Pruning in Training (PiT) framework for learning to reduce the parameter size of networks.Different from existing works, our PiT framework employs sparse penalties to train networks and thus helps rank the importance of weights and filters.Our PiT algorithms can directly prune the network without any fine-tuning.The pruned networks can still achieve comparable performance to the original networks.In particular, we introduce the (Group) Lasso-type Penalty (L-P / GL-P), and (Group) Split LBI Penalty (S-P / GS-P) to regularize the networks, and a proposed pruning strategy is used to help prune the network.We conduct extensive experiments on MNIST, Cifar-10, and miniImageNet.The results validate the efficacy of our proposed methods.Remarkably, on the MNIST dataset, our PiT framework can save 17.5% of the parameter size of LeNet-5, which achieves 98.47% recognition accuracy.", "target": ["Wir schlagen einen Lernalgorithmus für Netzwerk Pruning vor, indem wir Strukturspärlichkeitsstrafen durchsetzen.", "In diesem Beitrag wird ein Ansatz für das Pruning beim Training eines Netzes unter Verwendung von Lasso- und Split-LBI-Penalties vorgestellt."]} {"source": "We first pose the Unsupervised Continual Learning (UCL) problem: learning salient representations from a non-stationary stream of unlabeled data in which the number of object classes varies with time.Given limited labeled data just before inference, those representations can also be associated with specific object types to perform classification.To solve the UCL problem, we propose an architecture that involves a single module, called Self-Taught Associative Memory (STAM), which loosely models the function of a cortical column in the mammalian brain.Hierarchies of STAM modules learn based on a combination of Hebbian learning, online clustering, detection of novel patterns and forgetting outliers, and top-down predictions.We illustrate the operation of STAMs in the context of learning handwritten digits in a continual manner with only 3-12 labeled examples per class.STAMs suggest a promising direction to solve the UCL problem without catastrophic forgetting.", "target": ["Wir stellen unüberwachtes kontinuierliches Lernen (UCL) und eine neuroinspirierte Architektur vor, die das UCL-Problem löst.", "Schlägt vor, Hierarchien von STAM-Modulen zu verwenden, um das UCL-Problem zu lösen, und erbringt den Nachweis, dass die von den Modulen erlernten Repräsentationen für die Few-Shot Klassifizierung gut geeignet sind."]} {"source": "Recent advances have made it possible to create deep complex-valued neural networks.Despite this progress, the
potential power of fully complex intermediate computations and representations has not yet been explored for many challenging learning problems.Building on recent advances, we propose a novel mechanism for extracting signals in the frequency domain.As a case study, we perform audio source separation in the Fourier domain.Our extraction mechanism could be regarded as a local ensembling method that combines a complex-valued convolutional version of Feature-Wise Linear Modulation (FiLM) and a signal averaging operation.We also introduce a new explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram.Using the Wall Street Journal Dataset, we compare our phase-aware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and proposed loss.When operating in the complex-valued frequency domain, our deep complex-valued network substantially outperforms its real-valued counterparts even with half the depth and a third of the parameters.Our proposed mechanism improves significantly deep complex-valued networks' performance and we demonstrate the usefulness of its regularizing effect.", "target": ["Neue Methode zur Signalextraktion im Fourier-Bereich.", "Trägt eine komplexwertige Convolutional Version der merkmalsweisen linearen Modulation bei, die eine Parameteroptimierung ermöglicht und einen Verlust entwirft, bei dem Betrag und Phase berücksichtigt werden."]} {"source": "It is challenging to disentangle an object into two orthogonal spaces of content and style since each can influence the visual observation in a different and unpredictable way.It is rare for one to have access to a large number of data to help separate the influences.In this paper, we present a novel framework to learn this disentangled representation in a completely unsupervised manner.We address this problem in a two-branch Autoencoder framework.For the structural content branch, we project the latent factor into a soft structured point tensor and constrain it with losses derived from prior knowledge.This encourages the branch to distill geometry information.Another branch learns the complementary style information.The two branches form an effective framework that can disentangle object's content-style representation without any human annotation.We evaluate our approach on four image datasets, on which we demonstrate the superior disentanglement and visual analogy quality both in synthesized and real-world data.We are able to generate photo-realistic images with 256x256 resolution that are clearly disentangled in content and style.", "target": ["Wir stellen ein neuartiges Framework vor, mit dem sich die getrennte Darstellung von Inhalt und Stil auf völlig unüberwachte Weise erlernen lässt. 
", "Vorschlag eines Modells auf der Grundlage eines Autoencoders zur Entflechtung der Darstellung eines Objekts; die Ergebnisse zeigen, dass das Modell Darstellungen erzeugen kann, die Inhalt und Stil erfassen."]} {"source": "We develop the Y-learner for estimating heterogeneous treatment effects in experimental and observational studies.The Y-learner is designed to leverage the abilities of neural networks to optimize multiple objectives and continually update, which allows for better pooling of underlying feature information between treatment and control groups.We evaluate the Y-learner on three test problems: (1) A set of six simulated data benchmarks from the literature.(2) A real-world large-scale experiment on voter persuasion.(3) A task from the literature that estimates artificially generated treatment effects on MNIST didgits.The Y-learner achieves state of the art results on two of the three tasks.On the MNIST task, it gets the second best results.", "target": ["Wir entwickeln eine CATE-Schätzungsstrategie, die sich einige der faszinierenden Eigenschaften neuronaler Netze zunutze macht. ", "Zeigt Verbesserungen von X-learner durch die Modellierung der Behandlungsreaktionsfunktion, der Kontrollreaktionsfunktion und der Abbildung des unterstellten Behandlungseffekts auf den bedingten durchschnittlichen Behandlungseffekt als neuronale Netze.", "Die Autoren schlagen den Y-Learner zur Schätzung des bedingten durchschnittlichen Behandlungseffekts (CATE) vor, der gleichzeitig die Parameter der Ergebnisfunktionen und den CATE-Schätzer aktualisiert."]} {"source": "With the rapid proliferation of IoT devices, our cyberspace is nowadays dominated by billions of low-cost computing nodes, which expose an unprecedented heterogeneity to our computing systems.Dynamic analysis, one of the most effective approaches to finding software bugs, has become paralyzed due to the lack of a generic emulator capable of running diverse previously-unseen firmware.In recent years, we have witnessed devastating security breaches targeting IoT devices.These security concerns have significantly hamstrung further evolution of IoT technology.In this work, we present Laelaps, a device emulator specifically designed to run diverse software on low-cost IoT devices.We do not encode into our emulator any specific information about a device.Instead, Laelaps infers the expected behavior of firmware via symbolic-execution-assisted peripheral emulation and generates proper inputs to steer concrete execution on the fly.This unique design feature makes Laelaps the first generic device emulator capable of running diverse firmware with no a priori knowledge about the target device.To demonstrate the capabilities of Laelaps, we deployed two popular dynamic analysis techniques---fuzzing testing and dynamic symbolic execution---on top of our emulator.We successfully identified both self-injected and real-world vulnerabilities.", "target": ["Geräteunabhängige Firmware-Ausführung."]} {"source": "Deep neural models, such as convolutional and recurrent networks, achieve phenomenal results over spatial data such as images and text.However, when considering tabular data, gradient boosting of decision trees (GBDT) remains the method of choice.Aiming to bridge this gap, we propose \\emph{deep neural forests} (DNF)-- a novel architecture that combines elements from decision trees as well as dense residual connections. 
We present the results of extensive empirical study in which we examine the performance of GBDTs, DNFs and (deep) fully-connected networks. These results indicate that DNFs achieve comparable results to GBDTs on tabular data, and open the door to end-to-end neural modeling of multi-modal data.To this end, we present a successful application of DNFs as part of a hybrid architecture for a multi-modal driving scene understanding classification task.", "target": ["Eine Architektur für tabellarische Daten, die Verzweigungen von Entscheidungsbäumen nachbildet und eine dichte Restkonnektivität verwendet. ", "In dieser Arbeit wird ein tiefer neuronaler Wald vorgeschlagen, ein Algorithmus, der auf tabellarische Daten abzielt und die Stärken des Gradient Boosting von Entscheidungsbäumen integriert.", "Eine neuartige neuronale Netzwerkarchitektur, die die Funktionsweise von Entscheidungswäldern nachahmt, um das allgemeine Problem des Trainings von tiefen Modellen für tabellarische Daten zu lösen, und die eine mit GBDT vergleichbare Effektivität aufweist."]} {"source": "Hyperparameter tuning is one of the most time-consuming workloads in deep learning.State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam, reduce this labor by adaptively tuning an individual learning rate for each variable.Recently researchers have shown renewed interest in simpler methods like momentum SGD as they may yield better results.Motivated by this trend, we ask: can simple adaptive methods, based on SGD perform as well or better?We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam.We then analyze its robustness to learning rate misspecification and objective curvature variation.Based on these insights, we design YellowFin, an automatic tuner for momentum and learning rate in SGD.YellowFin optionally uses a negative-feedback loop to compensate for the momentum dynamics in asynchronous settings on the fly.We empirically show YellowFin can converge in fewer iterations than Adam on ResNets and LSTMs for image recognition, language modeling and constituency parsing, with a speedup of up to $3.28$x in synchronous and up to $2.69$x in asynchronous settings.", "target": ["YellowFin ist ein SGD-basierter Optimierer, der sowohl die Dynamik als auch die Lernrate anpassen kann.", "Schlägt eine Methode zur automatischen Abstimmung des Momentum-Parameters in Momentum-SGD-Methoden vor, die bessere Ergebnisse und eine schnellere Konvergenzgeschwindigkeit als der moderne Adam-Algorithmus erzielt."]} {"source": "Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) 
can be subject to attacks using a wide variety of exploits.With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms.Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack.This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.", "target": ["Angriffe auf den latenten Raum von variationalen Autoencodern zur Veränderung der semantischen Bedeutung von Eingaben.", "Dieses Papier befasst sich mit Sicherheit und maschinellem Lernen und schlägt einen Man-in-Middle-Angriff vor, der die VAE-Kodierung der Eingabedaten so verändert, dass die dekodierte Ausgabe falsch klassifiziert wird."]} {"source": "Graph-based dependency parsing consists of two steps: first, an encoder produces a feature representation for each parsing substructure of the input sentence, which is then used to compute a score for the substructure; and second, a decoder finds the parse tree whose substructures have the largest total score.Over the past few years, powerful neural techniques have been introduced into the encoding step which substantially increases parsing accuracies.However, advanced decoding techniques, in particular high-order decoding, have seen a decline in usage.It is widely believed that contextualized features produced by neural encoders can help capture high-order decoding information and hence diminish the need for a high-order decoder.In this paper, we empirically evaluate the combinations of different neural and non-neural encoders with first- and second-order decoders and provide a comprehensive analysis about the effectiveness of these combinations with varied training data sizes.We find that: first, when there is large training data, a strong neural encoder with first-order decoding is sufficient to achieve high parsing accuracy and only slightly lags behind the combination of neural encoding and second-order decoding; second, with small training data, a non-neural encoder with a second-order decoder outperforms the other combinations in most cases.", "target": ["Eine empirische Studie, die die Wirksamkeit verschiedener Encoder-Decoder-Kombinationen für die Aufgabe des Dependency Parsing untersucht.", "Empirische Analyse verschiedener Kodierer, Dekodierer und deren Abhängigkeiten für graphbasiertes Dependency Parsing."]} {"source": "Meta-learning will be crucial to creating lifelong, generalizable AI.In practice, however, it is hard to define the meta-training task distribution that is used to train meta-learners.If made too small, tasks are too similar for a model to meaningfully generalize.If made too large, generalization becomes incredibly difficult.We argue that both problems can be alleviated by introducing a teacher model that controls the sequence of tasks that a meta-learner is trained on.This teacher model is incentivized to start the student meta-learner on simple tasks then adaptively increase task difficulty in response to student progress.While this approach has been previously studied in curriculum generation, our main contribution is in extending it to meta-learning.", "target": ["Lehrer, der Meta-Lernende wie Menschen ausbildet."]} {"source": "Using higher order knowledge to reduce
training data has become a popular research topic.However, the ability of available methods to draw effective decision boundaries is still limited: when the training set is small, neural networks will be biased toward certain labels.Based on this observation, we consider constraining the output probability distribution as higher-order domain knowledge.We design a novel algorithm that jointly optimizes the output probability distribution on a clustered embedding space to make neural networks draw effective decision boundaries. Since directly applying the probability constraint is not effective, users need to provide additional, very weak supervision: marking batches whose output distribution differs greatly from the target probability distribution.We use experiments to empirically show that our model can converge to an accuracy higher than other state-of-the-art semi-supervised learning models with fewer high-quality labeled training examples.", "target": ["Wir führen einen Ansatz zur Einbettung des Raums ein, um die Wahrscheinlichkeitsverteilung der Ausgabe eines neuronalen Netzes einzuschränken.", "In diesem Beitrag wird eine Methode zum halbüberwachten Lernen mit tiefen neuronalen Netzen vorgestellt, und das Modell erreicht bei einem geringen Trainingsumfang eine relativ hohe Genauigkeit.", "In dieser Arbeit wird die Label-Verteilung in das Modelllernen integriert, wenn nur eine begrenzte Anzahl von Trainingsinstanzen zur Verfügung steht, und es werden zwei Techniken vorgeschlagen, um das Problem der falsch verzerrten Output-Label Verteilung zu lösen."]} {"source": "We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus.We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis.
We also present an analysis showing that exposing the deep internals of the pretrained network is crucial, allowing downstream models to mix different types of semi-supervision signals.", "target": ["Wir stellen eine neue Art von tiefer kontextualisierter Wortrepräsentation vor, die den Stand der Technik für eine Reihe von anspruchsvollen NLP-Aufgaben deutlich verbessert."]} {"source": "This work addresses the long-standing problem of robust event localization in the presence of temporally misaligned labels in the training data.We propose a novel versatile loss function that generalizes a number of training regimes from standard fully-supervised cross-entropy to count-based weakly-supervised learning.Unlike classical models which are constrained to strictly fit the annotations during training, our soft localization learning approach relaxes the reliance on the exact position of labels instead.Training with this new loss function exhibits strong robustness to temporal misalignment of labels, thus alleviating the burden of precise annotation of temporal sequences.We demonstrate state-of-the-art performance against standard benchmarks in a number of challenging experiments and further show that robustness to label noise is not achieved at the expense of raw performance.", "target": ["In dieser Arbeit wird eine neuartige Verlustfunktion für das robuste Training von DNN zur zeitlichen Lokalisierung in Gegenwart von falsch ausgerichteten Etiketten eingeführt.", "Ein neuer Verlust für Trainingsmodelle, die vorhersagen, wo Ereignisse in einer Trainingssequenz mit störhaften Beschriftungen auftreten, indem sie geglättete Beschriftung und Vorhersagesequenz vergleichen."]} {"source": "The driving force behind deep networks is their ability to compactly represent rich classes of functions.The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to replicate functions of another.To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones.In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways.We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks.By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency.In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not.Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy.This leads us to believe that expressive efficiency may serve a key role in developing new tools for deep network design.", "target": ["Wir führen den Begriff der gemischten Tensorzerlegungen ein und beweisen damit, dass die Verbindung von erweiterten Convolutional Networks deren Ausdruckskraft erhöht.", "In diesem Beitrag wird theoretisch bestätigt, dass die Verbindung von Netzen mit unterschiedlichen Dilatationen zu einer ausdrucksstarken Effizienz bei der gemischten Tensorzerlegung führen kann.", "Die
Autoren untersuchen erweiterte Convolutional Networks und zeigen, dass die Verflechtung von zwei erweiterten Convolutional Networks A und B in verschiedenen Stadien ausdrucksstärker ist als eine Nichtverflechtung.", "Es zeigt sich, dass die strukturelle Annahme eines einzigen perfekten Binärbaums im WaveNet seine Leistung beeinträchtigt und dass WaveNet-ähnliche Architekturen mit komplexeren gemischten Baumstrukturen besser abschneiden."]} {"source": "We apply multi-task learning to image classification tasks on MNIST-like datasets.The MNIST dataset has been referred to as the {\\em drosophila} of machine learning and has been the testbed of many learning theories.The NotMNIST dataset and the FashionMNIST dataset have been created with the MNIST dataset as reference.In this work, we exploit these MNIST-like datasets for multi-task learning.The datasets are pooled together for learning the parameters of joint classification networks.Then the learned parameters are used as the initial parameters to retrain disjoint classification networks.The baseline recognition models are all-convolutional neural networks.Without multi-task learning, the recognition accuracies for MNIST, NotMNIST and FashionMNIST are 99.56\\%, 97.22\\% and 94.32\\% respectively.With multi-task learning to pre-train the networks, the recognition accuracies are respectively 99.70\\%, 97.46\\% and 95.25\\%.The results re-affirm that the multi-task learning framework, even with data from different genres, does lead to significant improvement.", "target": ["Multi-Task-Lernen funktioniert.", "In diesem Beitrag wird ein neuronales Multitasking-Netzwerk zur Klassifizierung von MNIST-ähnlichen Datensätzen vorgestellt."]} {"source": "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network.To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization.This approach provides us with a broad and unifying view on much prior work on this topic.Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries.These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks.They also suggest robustness against a first-order adversary as a natural security guarantee.We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.", "target": ["Wir bieten eine prinzipielle, optimierungsbasierte Neubetrachtung des Begriffs der adversarial Beispiele und entwickeln Methoden, die Modelle hervorbringen, die gegen eine Vielzahl von Gegnern robust sind.", "Untersucht eine Minimax Formulierung für das Lernen von tiefen Netzwerken, um deren Robustheit zu erhöhen, wobei der projizierte Gradientenabstieg als Hauptgegner verwendet wird. ", "In dieser Arbeit wird vorgeschlagen, neuronale Netze durch den Rahmen von Sattelpunktproblemen widerstandsfähig gegen gegnerische Verluste zu machen. 
"]} {"source": "In recent years there has been a rapid increase in classification methods on graph structured data.Both in graph kernels and graph neural networks, one of the implicit assumptions of successful state-of-the-art models was that incorporating graph isomorphism features into the architecture leads to better empirical performance.However, as we discover in this work, commonly used data sets for graph classification have repeating instances which cause the problem of isomorphism bias, i.e. artificially increasing the accuracy of the models by memorizing target information from the training set.This prevents fair competition of the algorithms and raises a question of the validity of the obtained results.We analyze 54 data sets, previously extensively used for graph-related tasks, on the existence of isomorphism bias, give a set of recommendations to machine learning practitioners to properly set up their models, and open source new data sets for the future experiments.", "target": ["Viele Graphklassifizierungsdatensätze weisen Duplikate auf, was Fragen hinsichtlich der Generalisierungsfähigkeiten und des fairen Vergleichs der Modelle aufwirft. ", "Die Autoren diskutieren den Isomorphismus-Bias in Graphen-Datensätzen, den Overfitting-Effekt beim Lernen von Netzwerken, wenn Graphen-Isomorphismus-Merkmale in das Modell aufgenommen werden, theoretisch analog zu Datenleckeffekten."]} {"source": "Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently.However, learning from demonstrations often suffers from the covariate shift problem, which resultsin cascading errors of the learned policy.We introduce a notion of conservatively extrapolated value functions, which provably lead to policies with self-correction.We design an algorithm Value Iteration with Negative Sampling (VINS) that practically learns such value functions with conservative extrapolation.We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks.We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior works in sample efficiency.", "target": ["Wir führen einen Begriff von konservativ-extrapolierten Wertfunktionen ein, die nachweislich zu Strategien führen, die sich selbst korrigieren können, um nahe an den Demonstrationszuständen zu bleiben, und lernen sie mit einer neuartigen negativen Sampling-Technik.", "Ein Algorithmus namens Wertiteration mit negativem Sampling, um das Problem der Kovariatenverschiebung beim Imitationslernen zu lösen."]} {"source": "A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition.Learning such a structured world model from raw sensory data remains a challenge.As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs).C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure.We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network.This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process.We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation.Our 
experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.", "target": ["Kontrastiv trainierte strukturierte Weltmodelle (C-SWMs) lernen objektorientierte Zustandsdarstellungen und ein relationales Modell einer Umgebung aus rohen Pixeleingaben.", "Die Autoren überwinden das Problem der Verwendung von pixelbasierten Verlusten bei der Konstruktion und dem Lernen von strukturierten Weltmodellen, indem sie einen kontrastiven latenten Raum verwenden."]} {"source": "Neural machine translation (NMT) models learn representations containing substantial linguistic information.However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons.We develop unsupervised methods for discovering important neurons in NMT models.Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision.We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena.Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.", "target": ["Unüberwachte Methoden zum Auffinden, Analysieren und Kontrollieren wichtiger Neuronen in der NMT", "In dieser Arbeit wird vorgeschlagen, \"sinnvolle\" Neuronen in neuronalen maschinellen Übersetzungsmodellen zu finden, indem ein Ranking auf der Grundlage der Korrelation zwischen Modellpaaren, verschiedenen Epochen oder verschiedenen Datensätzen erstellt wird, und es wird ein Kontrollmechanismus für die Modelle vorgeschlagen."]} {"source": "Computations for the softmax function in neural network models are expensive when the number of output classes is large.This can become a significant issue in both training and inference for such models.In this paper, we present Doubly Sparse Softmax (DS-Softmax), Sparse Mixture of Sparse Experts, to improve the efficiency for softmax inference.During training, our method learns a two-level class hierarchy by dividing the entire output class space into several partially overlapping experts.Each expert is responsible for a learned subset of the output class space and each output class only belongs to a small number of those experts.During inference, our method quickly locates the most probable expert to compute small-scale softmax.Our method is learning-based and requires no knowledge of the output class partition space a priori.We empirically evaluate our method on several real-world tasks and demonstrate that we can achieve significant computation reductions without loss of accuracy.", "target": ["Wir präsentieren doppelt spärlichen Softmax, die spärliche Mischung aus spärlichen Experten, um die Effizienz der Softmax-Inferenz durch Ausnutzung der zweistufigen überlappenden Hierarchie zu verbessern. ", "Die Arbeit schlägt die neue Softmax-Algorithmus-Implementierung mit zwei hierarchischen Ebenen der Spärlichkeit, die den Betrieb in der Sprach Modellierung beschleunigt."]} {"source": "Our work presents empirical evidence that layer rotation, i.e.
the evolution across training of the cosine distance between each layer's weight vector and its initialization, constitutes an impressively consistent indicator of generalization performance.Compared to previously studied indicators of generalization, we show that layer rotation has the additional benefit of being easily monitored and controlled, as well as having a network-independent optimum: the training procedures during which all layers' weights reach a cosine distance of 1 from their initialization consistently outperform other configurations -by up to 20% test accuracy.Finally, our results also suggest that the study of layer rotation can provide a unified framework to explain the impact of weight decay and adaptive gradient methods on generalization.", "target": ["In diesem Beitrag werden empirische Belege für die Entdeckung eines Generalisierungsindikators vorgestellt: die Entwicklung des Kosinusabstands zwischen dem Gewichtsvektor jeder Schicht und ihrer Initialisierung während des Trainings."]} {"source": "Models of code can learn distributed representations of a program's syntax and semantics to predict many non-trivial properties of a program.Recent state-of-the-art models leverage highly structured representations of programs, such as trees, graphs and paths therein (e.g. data-flow relations), which are precise and abundantly available for code.This provides a strong inductive bias towards semantically meaningful relations, yielding more generalizable representations than classical sequence-based models.Unfortunately, these models primarily rely on graph-based message passing to represent relations in code, which makes them de facto local due to the high cost of message-passing steps, quite in contrast to modern, global sequence-based models, such as the Transformer.In this work, we bridge this divide between global and structured models by introducing two new hybrid model families that are both global and incorporate structural bias: Graph Sandwiches, which wrap traditional (gated) graph message-passing layers in sequential message-passing layers; and Graph Relational Embedding Attention Transformers (GREAT for short), which bias traditional Transformers with relational information from graph edge types.By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.Starting with a graph-based model that already improves upon the prior state-of-the-art for this task by 20%, we show that our proposed hybrid models improve an additional 10-15%, while training both faster and using fewer parameters.", "target": ["Modelle von Quellcode, die globale und strukturelle Merkmale kombinieren, lernen leistungsfähigere Darstellungen von Programmen.", "Eine neue Methode zur Modellierung des Quellcodes für die Fehlerbehebung unter Verwendung eines Sandwich-Modells wie [RNN GNN RNN], das die Lokalisierungs- und Reparaturgenauigkeit erheblich verbessert."]} {"source": "Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate.While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such 
approximate state-vector increments of Rosenblatt's (1962) continuous-time RNNs.iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD).We show that our method is computationally efficient, overcoming the overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation.We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks.", "target": ["Inkrementelle RNNs lösen das Problem der Explosion/des verschwindenden Gradienten, indem sie die Zustandsvektoren auf der Grundlage der Differenz zwischen dem vorherigen Zustand und dem durch eine ODE vorhergesagten Zustand aktualisieren.", "Die Autoren befassen sich mit dem Problem der Signalausbreitung in rekurrenten neuronalen Netzen, indem sie ein Attraktorsystem für den Signalübergang aufbauen und prüfen, ob es zu einem Gleichgewicht konvergiert. "]} {"source": "Recent empirical results on over-parameterized deep networks are marked by a striking absence of the classic U-shaped test error curve: test error keeps decreasing in wider networks.Researchers are actively working on bridging this discrepancy by proposing better complexity measures.Instead, we directly measure prediction bias and variance for four classification and regression tasks on modern deep networks.We find that both bias and variance can decrease as the number of parameters grows.Qualitatively, the phenomenon persists over a number of gradient-based optimizers.To better understand the role of optimization, we decompose the total variance into variance due to training set sampling and variance due to initialization.Variance due to initialization is significant in the under-parameterized regime.In the over-parameterized regime, total variance is much lower and dominated by variance due to sampling.We provide theoretical analysis in a simplified setting that is consistent with our empirical findings.", "target": ["Wir liefern Beweise gegen klassische Behauptungen über den Kompromiss zwischen Verzerrung und Varianz und schlagen eine neuartige Zerlegung der Varianz vor."]} {"source": "Real world images often contain large amounts of private / sensitive information that should be carefully protected without reducing their utilities.In this paper, we propose a privacy-preserving deep learning framework with a learnable obfuscator for the image classification task.Our framework consists of three models: learnable obfuscator, classifier and reconstructor.The learnable obfuscator is used to remove the sensitive information in the images and extract the feature maps from them.The reconstructor plays the role of an attacker, which tries to recover the image from the feature maps extracted by the obfuscator.In order to best protect users’ privacy in images, we design an adversarial training methodology for our framework to optimize the obfuscator.Through extensive evaluations on real world datasets, both the numerical metrics and the visualization results demonstrate that our framework is qualified to protect users’ privacy and achieve a relatively high accuracy on the image classification task.", "target": ["Wir haben ein neuartiges Deep Learning System zur Bildklassifizierung vorgeschlagen, das sowohl Bilder genau klassifizieren als auch die Privatsphäre der Nutzer schützen kann.", "In diesem Beitrag wird ein Verfahren vorgeschlagen, das die privaten Informationen im Bild bewahrt und die Nutzbarkeit des Bildes
nicht beeinträchtigt.", "In dieser Arbeit wird vorgeschlagen, adversarische Netzwerke zu verwenden, um Bilder zu verschleiern und sie so ohne Bedenken hinsichtlich des Datenschutzes zu sammeln, um sie für das Training von maschinellen Lernmodellen zu verwenden."]} {"source": "Bitcoin is a virtual coinage system that enables users to trade virtually free of a central trusted authority.All transactions on the Bitcoin blockchain are publicly available for viewing, yet as Bitcoin is built mainly for security, its original structure does not allow for direct analysis of address transactions. Existing analysis methods of the Bitcoin blockchain can be complicated, computationally expensive or inaccurate.We propose a computationally efficient model to analyze bitcoin blockchain addresses and allow for their use with existing machine learning algorithms.We compare our approach against Multi Level Sequence Learners (MLSLs), one of the best performing models on bitcoin address data.", "target": ["Ein 2vec-Modell für Transaktionsgraphen von Kryptowährungen.", "Die Arbeit schlägt vor, einen Autoencoder, networkX, und node2Vec zu verwenden, um vorherzusagen, ob eine Bitcoin-Adresse nach einem Jahr leer sein wird, aber die Ergebnisse sind schlechter als eine bestehende Basislinie."]} {"source": "Despite remarkable empirical success, the training dynamics of generative adversarial networks (GAN), which involves solving a minimax game using stochastic gradients, is still poorly understood.In this work, we analyze last-iterate convergence of simultaneous gradient descent (simGD) and its variants under the assumption of convex-concavity, guided by a continuous-time analysis with differential equations.First, we show that simGD, as is, converges with stochastic sub-gradients under strict convexity in the primal variable.Second, we generalize optimistic simGD to accommodate an optimism rate separate from the learning rate and show its convergence with full gradients.Finally, we present anchored simGD, a new method, and show convergence with stochastic subgradients.", "target": ["Konvergenzbeweis der stochastischen Subgradientenmethode und Variationen bei konvex-konkaven Minimax-Problemen", "Eine Analyse des simultanen stochastischen Subgradienten, des simultanen Gradienten mit Optimismus und des simultanen Gradienten mit Verankerung im Kontext von konvex-konkaven Minmax Spielen.", "Dieses Papier analysiert die Dynamik des stochastischen Gradientenabstiegs, wenn er auf konvex-konkave Spiele angewendet wird, sowie GD mit Optimismus und einen neuen verankerten GD-Algorithmus, der unter schwächeren Annahmen konvergiert als SGD oder SGD mit Optimismus."]} {"source": "Small spacecraft now have precise attitude control systems available commercially, allowing them to slew in 3 degrees of freedom, and capture images within short notice.When combined with appropriate software, this agility can significantly increase response rate, revisit time and coverage.In prior work, we have demonstrated an algorithmic framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile, small spacecraft in a constellation.The proposed schedule optimization would run at the ground station autonomously, and the resultant schedules uplinked to the spacecraft for execution.The algorithm is generalizable over small steerable spacecraft, control capability, sensor specs, imaging requirements, and regions of interest.In this article, we modify the algorithm to
run onboard small spacecraft, such that the constellation can make time-sensitive decisions to slew and capture images autonomously, without ground control.We have developed a communication module based on Delay/Disruption Tolerant Networking (DTN) for onboard data management and routing among the satellites, which will work in conjunction with the other modules to optimize the schedule of agile communication and steering.We then apply this preliminary framework on representative constellations to simulate targeted measurements of episodic precipitation events and subsequent urban floods.The command and control efficiency of our agile algorithm is compared to non-agile (11.3x improvement) and non-DTN (21% improvement) constellations.", "target": ["Wir schlagen einen algorithmischen Rahmen vor, um Konstellationen von kleinen Raumfahrzeugen mit 3-DOF-Neuorientierungsfähigkeiten zu planen, die mit Inter-Sat-Verbindungen vernetzt sind.", "In diesem Beitrag wird ein Kommunikationsmodul zur Optimierung des Kommunikationsplans für das Problem von Raumfahrzeugkonstellationen vorgeschlagen und der Algorithmus in verteilten und zentralisierten Einstellungen verglichen."]} {"source": "Importance sampling (IS) is a standard Monte Carlo (MC) tool to compute information about random variables such as moments or quantiles with unknown distributions. IS is asymptotically consistent as the number of MC samples, and hence deltas (particles) that parameterize the density estimate, go to infinity.However, retaining infinitely many particles is intractable.We propose a scheme for only keeping a \\emph{finite representative subset} of particles and their augmented importance weights that is \\emph{nearly consistent}. To do so in {an online manner}, we approximate importance sampling in two ways. First, we replace the deltas by kernels, yielding kernel density estimates (KDEs). Second, we sequentially project KDEs onto nearby lower-dimensional subspaces.We characterize the asymptotic bias of this scheme as determined by a compression parameter and kernel bandwidth, which yields a tunable tradeoff between consistency and memory.In experiments, we observe a favorable tradeoff between memory and accuracy, providing for the first time near-consistent compressions of arbitrary posterior distributions.", "target": ["Wir haben einen neuartigen komprimierten Kernelized Importance Sampling Algorithmus vorgeschlagen."]} {"source": "We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator?(2) how to correctly use cross-validation to choose the regularization parameter?and (3) how to accelerate computation without losing too much accuracy?We consider the three problems in a unified large-data linear model.We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise. 
We study the bias of $K$-fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction.We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate.Our results are illustrated by simulations and by analyzing empirical data.", "target": ["Wir untersuchen die Struktur der Ridge-Regression in einem hochdimensionalen asymptotischen Framework und gewinnen Erkenntnisse über Cross-Validation und Sketching.", "Eine theoretische Untersuchung der Ridge-Regression unter Ausnutzung einer neuen asymptotischen Charakterisierung des Ridge-Regressionsschätzers."]} {"source": "Attention mechanisms have advanced the state of the art in several machine learning tasks.Despite significant empirical gains, there is a lack of theoretical analyses on understanding their effectiveness.In this paper, we address this problem by studying the landscape of population and empirical loss functions of attention-based neural networks.Our results show that, under mild assumptions, every local minimum of a two-layer global attention model has low prediction error, and attention models require lower sample complexity than models not employing attention.We then extend our analyses to the popular self-attention model, proving that they deliver consistent predictions with a more expressive class of functions.Additionally, our theoretical results provide several guidelines for designing attention mechanisms.Our findings are validated with satisfactory experimental results on MNIST and IMDB reviews dataset.", "target": ["Wir analysieren die Verlustlandschaft von neuronalen Netzen mit Aufmerksamkeit und erklären, warum Aufmerksamkeit beim Training neuronaler Netze hilfreich ist, um eine gute Leistung zu erzielen.", "Diese Arbeit beweist aus theoretischer Sicht, dass Aufmerksamkeitsnetzwerke besser generalisieren können als Nicht-Aufmerksamkeits-Baselines für feste Aufmerksamkeit (ein- und mehrschichtig) und Selbstaufmerksamkeit in der einschichtigen Umgebung."]} {"source": "Recent advances in deep learning techniques have shown the usefulness of deep neural networks in extracting features required to perform the task at hand.However, these learnt features are helpful only for the initial task.This is due to the fact that the features learnt are very task specific and do not capture the most general and task agnostic features of the input.In fact the way humans are seen to learn is by disentangling features which are task agnostic.This suggests learning task agnostic features by disentangling only the most informative features from the input data.Recently Variational Auto-Encoders (VAEs) have been shown to be the de-facto models to capture the latent variables in a generative sense.As these latent features can be represented as continuous and/or discrete variables, this suggests using a VAE with a mixture of continuous and discrete variables for the latent space.We achieve this by performing our experiments using a modified version of joint-vae to learn the disentangled features.", "target": ["Mischungsmodell für neuronale Entflechtung."]} {"source": "To improve how neural networks function it is crucial to understand their learning process.The information bottleneck theory of deep learning proposes that neural networks achieve good generalization by compressing their representations to disregard information that is not relevant to the task.However, empirical evidence for this theory is conflicting, as compression was only
observed when networks used saturating activation functions.In contrast, networks with non-saturating activation functions achieved comparable levels of task performance but did not show compression.In this paper we developed more robust mutual information estimation techniques that adapt to the hidden activity of neural networks and produce more sensitive measurements of activations from all functions, especially unbounded functions.Using these adaptive estimation techniques, we explored compression in networks with a range of different activation functions.With two improved methods of estimation, we first show that saturation of the activation function is not required for compression, and the amount of compression varies between different activation functions.We also find that there is a large amount of variation in compression between different network initializations.Secondly, we see that L2 regularization leads to significantly increased compression, while preventing overfitting.Finally, we show that only compression of the last layer is positively correlated with generalization.", "target": ["Wir haben robuste Schätzungen der gegenseitigen Information für DNNs entwickelt und sie verwendet, um die Kompression in Netzwerken mit nicht sättigenden Aktivierungsfunktionen zu beobachten.", "In dieser Arbeit wurde die weit verbreitete Annahme untersucht, dass tiefe neuronale Netze bei überwachten Aufgaben eine Informationskompression durchführen.", "In diesem Papier wird eine Methode zur Schätzung der gegenseitigen Information für Netze mit unbeschränkten Aktivierungsfunktionen und die Verwendung der L2-Regularisierung vorgeschlagen, um eine stärkere Kompression zu erreichen."]} {"source": "In this work, we address the problem of musical timbre transfer, where the goal is to manipulate the timbre of a sound sample from one instrument to match another instrument while preserving other musical content, such as pitch, rhythm, and loudness.In principle, one could apply image-based style transfer techniques to a time-frequency representation of an audio signal, but this depends on having a representation that allows independent manipulation of timbre as well as high-quality waveform generation.We introduce TimbreTron, a method for musical timbre transfer which applies “image” domain style transfer to a time-frequency representation of the audio signal, and then produces a high-quality waveform using a conditional WaveNet synthesizer.We show that the Constant Q Transform (CQT) representation is particularly well-suited to convolutional architectures due to its approximate pitch equivariance.Based on human perceptual evaluations, we confirmed that TimbreTron recognizably transferred the timbre while otherwise preserving the musical content, for both monophonic and polyphonic samples.We made an accompanying demo video here: https://www.cs.toronto.edu/~huang/TimbreTron/index.html which we strongly encourage you to watch before reading the paper.", "target": ["Wir stellen das TimbreTron vor, eine Pipeline zur Durchführung von qualitativ hochwertigem Timbre-Transfer auf musikalische Wellenformen unter Verwendung von CQT-Domain-Transfer.", "Eine Methode zur Konvertierung von Aufnahmen eines bestimmten Musikinstruments in ein anderes durch Anwendung von CycleGAN, das für die Übertragung von Bildern entwickelt wurde, um Spektrogramme zu übertragen.", "Die Autoren verwenden mehrere Techniken/Werkzeuge, um eine neuronale Klangfarbenübertragung (Umwandlung von Musik von einem Instrument in ein anderes) ohne
gepaarte Trainingsbeispiele zu ermöglichen. ", "Beschreibt ein Modell für die Übertragung von musikalischen Klangfarben. Die Ergebnisse zeigen, dass das vorgeschlagene System sowohl für die Übertragung von Tonhöhe und Tempo als auch für die Anpassung der Klangfarbe effektiv ist."]} {"source": "Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them.But also generic hardware and software implementations of deep learning run more efficiently for sparse networks.Several methods exist for pruning connections of a neural network after it was trained without connectivity constraints.We present an algorithm, DEEP R, that enables us to train directly a sparsely connected neural network.DEEP R automatically rewires the network during supervised training so that connections are there where they are most needed for the task, while its total number is all the time strictly bounded.We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance.DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior.", "target": ["In diesem Beitrag wird Deep Rewiring vorgestellt, ein Algorithmus, mit dem tiefe neuronale Netze trainiert werden können, wenn die Konnektivität des Netzes während des Trainings stark eingeschränkt ist.", "Ein Ansatz zur Implementierung von Deep Learning direkt auf spärlich verbundenen Graphen, der ein effizientes Online-Training von Netzwerken und schnelles und flexibles Lernen ermöglicht.", "Die Autoren stellen einen einfachen Algorithmus vor, der mit begrenztem Speicherplatz trainieren kann"]} {"source": "Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters.These large models are compute- and memory-intensive, which makes it a challenge to deploy them with minimized latency, throughput, and storage requirements.Some model compression methods have been successfully applied on image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks.In this paper, we show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods.We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator.We show that this framework has a compelling performance to high degrees of sparsity, generalizes well to new tasks and models, and enables meaningful comparisons between different pruning granularities.", "target": ["Bestehende Pruning-Methoden versagen, wenn sie auf GANs angewendet werden, die komplexe Aufgaben bewältigen. 
Daher stellen wir eine einfache und robuste Methode zum Pruning von Generatoren vor, die für eine Vielzahl von Netzwerken und Aufgaben gut funktioniert.", "Die Autoren schlagen eine Modifikation der klassischen Destillationsmethode für die Aufgabe der Komprimierung eines Netzwerks vor, um das Versagen früherer Lösungen bei der Anwendung auf generative adversarische Netzwerke zu beheben."]} {"source": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure.The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections.In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth.To preserve accuracy during compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training.We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus.On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB.Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.", "target": ["Wir stellen fest, dass 99,9 % des Gradientenaustauschs bei verteiltem SGD überflüssig sind; wir reduzieren die Kommunikationsbandbreite um zwei Größenordnungen, ohne an Genauigkeit zu verlieren. 
", "In dieser Arbeit wird eine zusätzliche Verbesserung gegenüber dem Gradient Dropping vorgeschlagen, um die Kommunikationseffizienz zu steigern."]} {"source": "Image-to-image translation has recently received significant attention due to advances in deep learning.Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way.However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations.To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain.We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain.Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain.To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels.Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process.", "target": ["Wir schlagen das Netzwerk Exemplar Guided & Semantically Consistent Image-to-Image Translation (EGSC-IT) vor, das den Übersetzungsprozess an ein Beispielbild in der Zieldomäne koppelt.", "Erörtert ein zentrales Versagen und die Notwendigkeit von I2I-Übersetzungsmodellen.", "Die Arbeit untersucht die Idee, dass ein Bild zwei Komponenten hat, und wendet ein Aufmerksamkeitsmodell an, bei dem die Merkmalsmasken, die den Übersetzungsprozess steuern, keine semantischen Bezeichnungen benötigen."]} {"source": "Deep neural networks can learn meaningful representations of data.However, these representations are hard to interpret.For example, visualizing a latent layer is generally only possible for at most three dimensions.Neural networks are able to learn and benefit from much higher dimensional representations but these are not visually interpretable because nodes have arbitrary ordering within a layer.Here, we utilize the ability of the human observer to identify patterns in structured representations to visualize higher dimensions.To do so, we propose a class of regularizations we call \\textit{Graph Spectral Regularizations} that impose graph-structure on latent layers.This is achieved by treating activations as signals on a predefined graph and constraining those activations using graph filters, such as low pass and wavelet-like filters.This framework allows for any kind of graph as well as filter to achieve a wide range of structured regularizations depending on the inference needs of the data.First, we show a synthetic example that the graph-structured layer can reveal topological features of the data.Next, we show that a smoothing regularization can impose semantically consistent ordering of nodes when applied to capsule nets.Further, we show that the graph-structured layer, using wavelet-like spatially localized filters, can form localized receptive fields for improved image and biomedical data 
interpretation.In other words, the mapping between latent layer, neurons and the output space becomes clear due to the localization of the activations.Finally, we show that when structured as a grid, the representations create coherent images that allow for image-processing techniques such as convolutions.", "target": ["Einfügen von Graphenstrukturen in neuronale Netzschichten zur Verbesserung der visuellen Interpretierbarkeit.", "Ein neuartiger Regularisierer, der den verborgenen Schichten eines Neuronalen Netzes eine Graphenstruktur auferlegt, um die Interpretierbarkeit der verborgenen Repräsentationen zu verbessern.", "Hervorheben des Beitrags des graphischen spektralen Regularisierers zur Interpretierbarkeit neuronaler Netze."]} {"source": "Text generation is ubiquitous in many NLP tasks, from summarization, to dialogue and machine translation.The dominant parametric approach is based on locally normalized models which predict one word at a time.While these work remarkably well, they are plagued by exposure bias due to the greedy nature of the generation process.In this work, we investigate un-normalized energy-based models (EBMs) which operate not at the token but at the sequence level.In order to make training tractable, we first work in the residual of a pretrained locally normalized language model and second we train using noise contrastive estimation.Furthermore, since the EBM works at the sequence level, we can leverage pretrained bi-directional contextual representations, such as BERT and RoBERTa.Our experiments on two large language modeling datasets show that residual EBMs yield lower perplexity compared to locally normalized baselines.Moreover, generation via importance sampling is very efficient and of higher quality than the baseline models according to human evaluation.", "target": ["Wir zeigen, dass energiebasierte Modelle, die auf den Residuen eines autoregressiven Sprachmodells trainiert werden, effektiv und effizient zur Texterzeugung eingesetzt werden können. 
", "Ein vorgeschlagenes, auf Restenergie basierendes Modell (EBM) für die Texterstellung, das auf Satzebene arbeitet und daher BERT nutzen kann, erreicht eine geringere Komplexität und wird bei der menschlichen Bewertung bevorzugt."]} {"source": "We investigate the robustness properties of image recognition models equipped with two features inspired by human vision, an explicit episodic memory and a shape bias, at the ImageNet scale.As reported in previous work, we show that an explicit episodic memory improves the robustness of image recognition models against small-norm adversarial perturbations under some threat models.It does not, however, improve the robustness against more natural, and typically larger, perturbations.Learning more robust features during training appears to be necessary for robustness in this second sense.We show that features derived from a model that was encouraged to learn global, shape-based representations (Geirhos et al., 2019) do not only improve the robustness against natural perturbations, but when used in conjunction with an episodic memory, they also provide additional robustness against adversarial perturbations.Finally, we address three important design choices for the episodic memory: memory size, dimensionality of the memories and the retrieval method.We show that to make the episodic memory more compact, it is preferable to reduce the number of memories by clustering them, instead of reducing their dimensionality.", "target": ["Systematische Untersuchung großer cachebasierter Bilderkennungsmodelle mit besonderem Augenmerk auf deren Robustheitseigenschaften.", "In dieser Arbeit wurde vorgeschlagen, den Cache-Speicher zu nutzen, um die Robustheit gegenüber ungünstigen Bildbeispielen zu verbessern, und man kam zu dem Schluss, dass die Verwendung eines großen kontinuierlichen Cachespeichers der harten Aufmerksamkeit nicht überlegen ist."]} {"source": "Group convolutional neural networks (G-CNNs) can be used to improve classical CNNs by equipping them with the geometric structure of groups.Central in the success of G-CNNs is the lifting of feature maps to higher dimensional disentangled representations, in which data characteristics are effectively learned, geometric data-augmentations are made obsolete, and predictable behavior under geometric transformations (equivariance) is guaranteed via group theory.Currently, however, the practical implementations of G-CNNs are limited to either discrete groups (that leave the grid intact) or continuous compact groups such as rotations (that enable the use of Fourier theory).In this paper we lift these limitations and propose a modular framework for the design and implementation of G-CNNs for arbitrary Lie groups.In our approach the differential structure of Lie groups is used to expand convolution kernels in a generic basis of B-splines that is defined on the Lie algebra.This leads to a flexible framework that enables localized, atrous, and deformable convolutions in G-CNNs by means of respectively localized, sparse and non-uniform B-spline expansions.The impact and potential of our approach is studied on two benchmark datasets: cancer detection in histopathology slides (PCam dataset) in which rotation equivariance plays a key role and facial landmark localization (CelebA dataset) in which scale equivariance is important.In both cases, G-CNN architectures outperform their classical 2D counterparts and the added value of atrous and localized group convolutions is studied in detail.", "target": ["Das Papier beschreibt 
einen flexiblen Rahmen für den Aufbau von CNNs, die äquivariant zu einer großen Klasse von Transformationsgruppen sind.", "Ein Rahmenwerk für den Aufbau von Gruppen-CNN mit einer beliebigen Lie Gruppe G, das bei der Klassifizierung von Tumoren und der Lokalisierung von Landmarken eine Überlegenheit gegenüber einem CNN zeigt. "]} {"source": "Global feature pooling is a modern variant of feature pooling providing better interpretability and regularization.Although alternative pooling methods exist (e.g. max, lp norm, stochastic), the averaging operation is still the dominating global pooling scheme in popular models.As fine-grained recognition requires learning subtle, discriminative features, we consider the question: is average pooling the optimal strategy?We first ask: ``is there a difference between features learned by global average and max pooling?'' Visualization and quantitative analysis show that max pooling encourages learning features of different spatial scales.We then ask ``is there a single global feature pooling variant that's most suitable for fine-grained recognition?'' A thorough evaluation of nine representative pooling algorithms finds that: max pooling outperforms average pooling consistently across models, datasets, and image resolutions; it does so by reducing the generalization gap; and generalized pooling's performance increases almost monotonically as it changes from average to max.We finally ask: ``what's the best way to combine two heterogeneous pooling schemes?'' Common strategies struggle because of potential gradient conflict but the ``freeze-and-train'' trick works best.We also find that post-global batch normalization helps with faster convergence and improves model performance consistently.", "target": ["Ein Benchmarking von neun repräsentativen globalen Pooling-Systemen zeigt einige interessante Ergebnisse.", "Für feinkörnige Klassifizierungsaufgaben wurde in dieser Arbeit bestätigt, dass maxpooling spärlichere Merkmalskarten als avgpooling fördert und diese übertrifft.
"]} {"source": "We present a technique to improve the generalization of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions.Although recent research has shown benefits of self-supervised learning (SSL) on large unlabeled datasets, its utility on small datasets is unknown.We find that SSL reduces the relative error rate of few-shot meta-learners by 4%-27%, even when the datasets are small and only utilizing images within the datasets.The improvements are greater when the training set is smaller or the task is more challenging.Though the benefits of SSL may increase with larger training sets, we observe that SSL can have a negative impact on performance when there is a domain shift between distribution of images used for meta-learning and SSL.Based on this analysis we present a technique that automatically select images for SSL from a large, generic pool of unlabeled images for a given dataset using a domain classifier that provides further improvements.We present results using several meta-learners and self-supervised tasks across datasets with varying degrees of domain shifts and label sizes to characterize the effectiveness of SSL for few-shot learning.", "target": ["Die Selbstüberwachung verbessert die Few-Shot Erkennung in kleinen und schwierigen Datensätzen, ohne auf zusätzliche Daten angewiesen zu sein; zusätzliche Daten helfen nur, wenn sie aus demselben oder einem ähnlichen Bereich stammen.", "Eine empirische Studie verschiedener Methoden des selbstüberwachten Lernens (SSL), die zeigt, dass SSL mehr hilft, wenn der Datensatz schwieriger ist, dass der Bereich für das Training wichtig ist und eine Methode zur Auswahl von Proben aus einem unbeschrifteten Datensatz. "]} {"source": "Abstraction of Markov Decision Processes is a useful tool for solving complex problems, as it can ignore unimportant aspects of an environment, simplifying the process of learning an optimal policy.In this paper, we propose a new algorithm for finding abstract MDPs in environments with continuous state spaces.It is based on MDP homomorphisms, a structure-preserving mapping between MDPs.We demonstrate our algorithm's ability to learns abstractions from collected experience and show how to reuse the abstractions to guide exploration in new tasks the agent encounters.Our novel task transfer method beats a baseline based on a deep Q-network.", "target": ["Wir erstellen abstrakte Modelle von Umgebungen aus Erfahrung und nutzen sie, um neue Aufgaben schneller zu lernen.", "Eine Methode, die die Idee der MDP-Homomorphismen nutzt, um ein komplexes MDP mit einem kontinuierlichen Zustandsraum in ein einfacheres zu transformieren."]} {"source": "A number of recent methods to understand neural networks have focused on quantifying the role of individual features. One such method, NetDissect identifies interpretable features of a model using the Broden dataset of visual semantic labels (colors, materials, textures, objects and scenes). Given the recent rise of a number of action recognition datasets, we propose extending the Broden dataset to include actions to better analyze learned action models. 
We describe the annotation process, results from interpreting action recognition models on the extended Broden dataset and examine interpretable feature paths to help us understand the conceptual hierarchy used to classify an action.", "target": ["Wir erweitern die Netzwerk-Dissektion um die Interpretation von Handlungen und untersuchen interpretierbare Merkmalspfade, um die konzeptuelle Hierarchie zu verstehen, die zur Klassifizierung einer Handlung verwendet wird."]} {"source": "Automatic melody generation for pop music has been a long-time aspiration for both AI researchers and musicians.However, learning to generate euphonious melody has turned out to be highly challenging due to a number of factors.Representation of the multivariate properties of notes has been one of the primary challenges.It is also difficult to remain in the permissible spectrum of musical variety, outside of which would be perceived as plain random play without auditory pleasantness.Observing the conventional structure of pop music poses further challenges.In this paper, we propose to represent each note and its properties as a unique ‘word,’ thus lessening the prospect of misalignments between the properties, as well as reducing the complexity of learning.We also enforce regularization policies on the range of notes, thus encouraging the generated melody to stay close to what humans would find easy to follow.Furthermore, we generate melody conditioned on song part information, thus replicating the overall structure of a full song.Experimental results demonstrate that our model can generate auditorily pleasant songs that are more indistinguishable from human-written ones than previous models.", "target": ["Wir schlagen ein neues Modell zur Darstellung von Noten und ihren Eigenschaften vor, das die automatische Melodieerzeugung verbessern kann.", "In diesem Beitrag wird ein generatives Modell der symbolischen (MIDI-)Melodie in der westlichen Populärmusik vorgeschlagen, das Notensymbole zusammen mit Zeit- und Dauerinformationen kodiert, um musikalische \"Worte\" zu bilden.", "In der Arbeit wird vorgeschlagen, die Erzeugung von Melodien zu erleichtern, indem Noten als \"Wörter\" dargestellt werden, die alle Eigenschaften der Note repräsentieren und so die Erzeugung von musikalischen \"Sätzen\" ermöglichen."]} {"source": "Depth is a key component of Deep Neural Networks (DNNs); however, designing depth is heuristic and requires much human effort.We propose AutoGrow to automate depth discovery in DNNs: starting from a shallow seed architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, stops growing and thus discovers the depth.We propose robust growing and stopping policies to generalize to different network architectures and datasets.Our experiments show that by applying the same policy to different network architectures, AutoGrow can always discover near-optimal depth on various datasets of MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet.For example, in terms of accuracy-computation trade-off, AutoGrow discovers a better depth combination in ResNets than human experts.Our AutoGrow is efficient.It discovers depth within similar time of training a single DNN.", "target": ["Eine Methode, die automatisch Schichten in neuronalen Netzen wachsen lässt, um die optimale Tiefe zu ermitteln.", "Ein Rahmen für das Training eines flachen Netzes und das Hinzufügen neuer Schichten, der Einblicke in das Paradigma der \"wachsenden Netze\" bietet."]} {"source": "Given the importance of remote sensing,
surprisingly little attention has been paid to it by the representation learning community.To address it and to speed up innovation in this domain, we provide simplified access to 5 diverse remote sensing datasets in a standardized form.We specifically explore in-domain representation learning and address the question of \"what characteristics should a dataset have to be a good source for remote sensing representation learning\".The established baselines achieve state-of-the-art performance on these datasets.", "target": ["Erforschung des bereichsinternen Repräsentationslernens für Fernerkundung von Datensätze.", "In dieser Arbeit wurden mehrere standardisierte Fernerkundungsdatensätze bereitgestellt und es wurde gezeigt, dass die bereichsinterne Repräsentation bessere Basisergebnisse für die Fernerkundung liefern kann als das Fine-Tuning mit ImageNet oder das Lernen von Grund auf."]} {"source": "Generative seq2seq dialogue systems are trained to predict the next word in dialogues that have already occurred.They can learn from large unlabeled conversation datasets, build a deep understanding of conversational context, and generate a wide variety of responses.This flexibility comes at the cost of control.Undesirable responses in the training data will be reproduced by the model at inference time, and longer generations often don’t make sense.Instead of generating responses one word at a time, we train a classifier to choose from a predefined list of full responses.The classifier is trained on (conversation context, response class) pairs, where each response class is a noisily labeled group of interchangeable responses.At inference, we generate the exemplar response associated with the predicted response class.Experts can edit and improve these exemplar responses over time without retraining the classifier or invalidating old training data.Human evaluation of 775 unseen doctor/patient conversations shows that this tradeoff improves responses.Only 12% of our discriminative approach’s responses are worse than the doctor’s response in the same conversational context, compared to 18% for the generative model.A discriminative model trained without any manual labeling of response classes achieves equal performance to the generative model.", "target": ["Vermeiden Sie es, Antworten Wort für Wort zu generieren, indem Sie eine schwache Überwachung verwenden, um einen Klassifikator zu trainieren, der eine vollständige Antwort auswählt.", "Eine Möglichkeit zur Generierung von Antworten für medizinische Dialoge unter Verwendung eines Klassifikators zur Auswahl aus von Experten kuratierten Antworten auf der Grundlage des Gesprächskontextes."]} {"source": "There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs).This equivalence enables, for instance, test set predictions that would have resulted from a fully Bayesian, infinitely wide trained FCN to be computed without ever instantiating the FCN, but by instead evaluating the corresponding GP.In this work, we derive an analogous equivalence for multi-layer convolutional neural networks (CNNs) both with and without pooling layers, and achieve state of the art results on CIFAR10 for GPs without trainable kernels.We also introduce a Monte Carlo method to estimate the GP corresponding to a given neural network architecture, even in cases where the analytic form has too many terms to be computationally feasible. 
Surprisingly, in the absence of pooling layers, the GPs corresponding to CNNs with and without weight sharing are identical.As a consequence, translation equivariance, beneficial in finite channel CNNs trained with stochastic gradient descent (SGD), is guaranteed to play no role in the Bayesian treatment of the infinite channel limit - a qualitative difference between the two regimes that is not present in the FCN case.We confirm experimentally, that while in some scenarios the performance of SGD-trained finite CNNs approaches that of the corresponding GPs as the channel count increases, with careful tuning SGD-trained CNNs can significantly outperform their corresponding GPs, suggesting advantages from SGD training compared to fully Bayesian parameter estimation.", "target": ["SGD-trainierte CNNs mit endlicher Breite vs. unendlich breite, vollständig Bayes'sche CNNs. Wer gewinnt?", "Die Arbeit stellt eine Verbindung zwischen einem Bayes'schen Convolutional Neural Network mit unendlich vielen Kanälen und Gauß'schen Prozessen her."]} {"source": "Bayesian inference promises to ground and improve the performance of deep neural networks.It promises to be robust to overfitting, to simplify the training procedure and the space of hyperparameters, and to provide a calibrated measure of uncertainty that can enhance decision making, agent exploration and prediction fairness.Markov Chain Monte Carlo (MCMC) methods enable Bayesian inference by generating samples from the posterior distribution over model parameters.Despite the theoretical advantages of Bayesian inference and the similarity between MCMC and optimization methods, the performance of sampling methods has so far lagged behind optimization methods for large scale deep learning tasks.We aim to fill this gap and introduce ATMC, an adaptive noise MCMC algorithm that estimates and is able to sample from the posterior of a neural network.ATMC dynamically adjusts the amount of momentum and noise applied to each parameter update in order to compensate for the use of stochastic gradients.We use a ResNet architecture without batch normalization to test ATMC on the Cifar10 benchmark and the large scale ImageNet benchmark and show that, despite the absence of batch normalization, ATMC outperforms a strong optimization baseline in terms of both classification accuracy and test log-likelihood.We show that ATMC is intrinsically robust to overfitting on the training data and that ATMC provides a better calibrated measure of uncertainty compared to the optimization baseline.", "target": ["Wir skalieren die Bayes'sche Inferenz auf die ImageNet-Klassifikation und erzielen wettbewerbsfähige Ergebnisse hinsichtlich Genauigkeit und Unsicherheitskalibrierung.", "Ein adaptiver Noise-MCMC Algorithmus für die Bildklassifikation, der den Impuls und die Störungen, die auf jede Parameteraktualisierung angewendet werden, dynamisch anpasst, robust gegenüber Overfitting ist und ein Unsicherheitsmaß mit Vorhersagen liefert. "]} {"source": "Now GANs can generate more and more realistic face images that can easily fool human beings. In contrast, a common convolutional neural network(CNN), e.g. 
ResNet-18, can achieve more than 99.9% accuracy in discerning fake/real faces if training and testing faces are from the same source.In this paper, we performed both human studies and CNN experiments, which led us to two important findings.One finding is that the textures of fake faces are substantially different from real ones.CNNs can capture local image texture information for recognizing fake/real face, while such cues are easily overlooked by humans.The other finding is that global image texture information is more robust to image editing and generalizable to fake faces from different GANs and datasets.Based on the above findings, we propose a novel architecture coined as Gram-Net, which incorporates “Gram Block” in multiple semantic levels to extract global image texture representations.Experimental results demonstrate that our Gram-Net performs better than existing approaches for fake face detection. Especially, our Gram-Net is more robust to image editing, e.g. downsampling, JPEG compression, blur, and noise. More importantly, our Gram-Net generalizes significantly better in detecting fake faces from GAN models not seen in the training phase.", "target": ["Eine empirische Studie über gefälschte Bilder zeigt, dass die Textur ein wichtiger Hinweis darauf ist, dass sich gefälschte Bilder von echten Bildern unterscheiden. Unser verbessertes Modell, das globale Texturstatistiken erfasst, zeigt eine bessere GAN-übergreifende Erkennungsleistung für gefälschte Bilder.", "Die Arbeit schlägt einen Weg vor, um die Leistung des Modells für die Erkennung gefälschter Gesichter in Bildern, die von einem GAN generiert wurden, zu verbessern, damit es auf der Grundlage von Texturinformationen verallgemeinerbar ist."]} {"source": "The Wasserstein probability metric has received much attention from the machine learning community.Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes.The value of being sensitive to this geometry has been demonstrated, among others, in ordinal regression and generative modelling, and most recently in reinforcement learning.In this paper we describe three natural properties of probability divergences that we believe reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients.The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third.We provide empirical evidence suggesting this is a serious issue in practice.Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance.We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences.We give empirical results on a number of domains comparing these three divergences.To illustrate the practical relevance of the Cramér distance we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it has a number of desirable properties over the related Wasserstein GAN.", "target": ["Die Wasserstein-Distanz ist mit stochastischem Gradientenabstieg schwer zu minimieren, während die Cramer-Distanz leicht optimiert werden kann und genauso gut funktioniert.", "In dem Manuskript wird vorgeschlagen, die Cramer-Distanz als Verlust bei der Optimierung einer Zielfunktion mit stochastischem Gradientenabstieg zu verwenden, da sie unverzerrte 
Stichprobengradienten aufweist.", "Der Beitrag des Artikels bezieht sich auf Leistungskriterien, insbesondere auf die Wasserstein/Mallows-Metrik."]} {"source": "We humans have an innate understanding of the asymmetric progression of time, which we use to efficiently and safely perceive and manipulate our environment.Drawing inspiration from that, we approach the problem of learning an arrow of time in a Markov (Decision) Process.We illustrate how a learned arrow of time can capture salient information about the environment, which in turn can be used to measure reachability, detect side-effects and to obtain an intrinsic reward signal.Finally, we propose a simple yet effective algorithm to parameterize the problem at hand and learn an arrow of time with a function approximator (here, a deep neural network).Our empirical results span a selection of discrete and continuous environments, and demonstrate for a class of stochastic processes that the learned arrow of time agrees reasonably well with a well known notion of an arrow of time due to Jordan, Kinderlehrer and Otto (1998).", "target": ["Wir lernen den Pfeil der Zeit für MDPs und nutzen ihn, um die Erreichbarkeit zu messen, Nebenwirkungen zu erkennen und ein Neugier-Belohnungssignal zu erhalten. ", "In dieser Arbeit wird das h-Potenzial als Lösung für ein Ziel vorgeschlagen, das die Asymmetrie der Zustandsübergänge in einem MDP misst."]} {"source": "We formulate stochastic gradient descent (SGD) as a novel factorised Bayesian filtering problem, in which each parameter is inferred separately, conditioned on the corresponding backpropagated gradient. Inference in this setting naturally gives rise to BRMSprop and BAdam: Bayesian variants of RMSprop and Adam. Remarkably, the Bayesian approach recovers many features of state-of-the-art adaptive SGD methods, including amongst others root-mean-square normalization, Nesterov acceleration and AdamW. As such, the Bayesian approach provides one explanation for the empirical effectiveness of state-of-the-art adaptive SGD algorithms.
Empirically comparing BRMSprop and BAdam with naive RMSprop and Adam on MNIST, we find that Bayesian methods have the potential to considerably reduce test loss and classification error.", "target": ["Wir haben SGD als ein Bayes'sches Filterproblem formuliert und zeigen, dass dies zu RMSprop, Adam, AdamW, NAG und anderen Merkmalen moderner adaptiver Methoden führt.", "Das Papier analysiert den stochastischen Gradientenabstieg durch Bayes'sche Filterung als Rahmen für die Analyse adaptiver Methoden.", "Die Autoren versuchen, bestehende adaptive Gradientenmethoden im Rahmen der Bayes'schen Filterung mit dem dynamischen Prior zu vereinen."]} {"source": "Data augmentation (DA) has been widely utilized to improve generalization in training deep neural networks.Recently, human-designed data augmentation has been gradually replaced by automatically learned augmentation policy.Through finding the best policy in well-designed search space of data augmentation, AutoAugment (Cubuk et al., 2019) can significantly improve validation accuracy on image classification tasks.However, this approach is not computationally practical for large-scale problems.In this paper, we develop an adversarial method to arrive at a computationally-affordable solution called Adversarial AutoAugment, which can simultaneously optimize target related object and augmentation policy search loss.The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization.In contrast to prior work, we reuse the computation in target network training for policy evaluation, and dispense with the retraining of the target network.Compared to AutoAugment, this leads to about 12x reduction in computing cost and 11x shortening in time overhead on ImageNet.We show experimental results of our approach on CIFAR-10/CIFAR-100, ImageNet, and demonstrate significant performance improvements over state-of-the-art.On CIFAR-10, we achieve a top-1 test error of 1.36%, which is the currently best performing single model.On ImageNet, we achieve a leading performance of top-1 accuracy 79.40% on ResNet-50 and 80.00% on ResNet-50-D without extra data.", "target": ["Wir führen die Idee des kontradiktorischen Lernens in die automatische Datenerweiterung ein, um die Generalisierung eines Zielnetzes zu verbessern.", "Eine Technik namens Adversarial AutoAugment, die während des Trainings mit Hilfe eines adversariellen Ansatzes dynamisch gute Datenerweiterungsstrategien erlernt."]} {"source": "In this study we focus on first-order meta-learning algorithms that aim to learn a parameter initialization of a network which can quickly adapt to new concepts, given a few examples.We investigate two approaches to enhance generalization and speed of learning of such algorithms, particularly expanding on the Reptile (Nichol et al., 2018) algorithm.We introduce a novel regularization technique called meta-step gradient pruning and also investigate the effects of increasing the depth of network architectures in first-order meta-learning.We present an empirical evaluation of both approaches, where we match benchmark few-shot image classification results with 10 times fewer iterations using Mini-ImageNet dataset and with the use of deeper networks, we attain accuracies that surpass the current benchmarks of few-shot image classification using Omniglot dataset.", "target": ["Die Studie stellt zwei 
Ansätze zur Verbesserung der Generalisierung von Meta-Lernen erster Ordnung vor und präsentiert eine empirische Evaluierung zur Few-Shot Klassifizierung von Bildern.", "Die Arbeit präsentiert eine empirische Studie des Meta-Lernalgorithmus erster Ordnung Reptile Algorithmus, der eine vorgeschlagene Regularisierungstechnik und tiefere Netzwerke untersucht."]} {"source": "In this paper, we propose the use of in-training matrix factorization to reduce the model size for neural machine translation.Using in-training matrix factorization, parameter matrices may be decomposed into the products of smaller matrices, which can compress large machine translation architectures by vastly reducing the number of learnable parameters.We apply in-training matrix factorization to different layers of standard neural architectures and show that in-training factorization is capable of reducing nearly 50% of learnable parameters without any associated loss in BLEU score.Further, we find that in-training matrix factorization is especially powerful on embedding layers, providing a simple and effective method to curtail the number of parameters with minimal impact on model performance, and, at times, an increase in performance.", "target": ["In dieser Arbeit wird vorgeschlagen, die Matrixfaktorisierung zur Trainingszeit für die neuronale maschinelle Übersetzung zu verwenden, wodurch die Modellgröße und die Trainingszeit ohne Leistungseinbußen verringert werden können.", "In dieser Arbeit wird vorgeschlagen, Modelle mit Hilfe der Matrixfaktorisierung während des Trainings für tiefe neuronale Netze der maschinellen Übersetzung zu komprimieren."]} {"source": "Though state-of-the-art sentence representation models can perform tasks requiring significant knowledge of grammar, it is an open question how best to evaluate their grammatical knowledge.We explore five experimental methods inspired by prior work evaluating pretrained sentence representation models.We use a single linguistic phenomenon, negative polarity item (NPI) licensing, as a case study for our experiments.NPIs like 'any' are grammatical only if they appear in a licensing environment like negation ('Sue doesn't have any cats' vs. 
'*Sue has any cats').This phenomenon is challenging because of the variety of NPI licensing environments that exist.We introduce an artificially generated dataset that manipulates key features of NPI licensing for the experiments.We find that BERT has significant knowledge of these features, but its success varies widely across different experimental methods.We conclude that a variety of methods is necessary to reveal all relevant aspects of a model's grammatical knowledge in a given domain.", "target": ["Verschiedene Methoden zur Analyse des BERT führen zu unterschiedlichen (aber kompatiblen) Schlussfolgerungen in einer Fallstudie über NPIs."]} {"source": "The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes.Such representations are required to be invariant to the sensory inconsistencies caused by dynamically varying lighting, local texture distortion, etc.A key architectural feature combating such environmental irregularities is ‘long-range horizontal connections’ that aid the perception of the global form of objects.In this work, we explore the introduction of such horizontal connections into standard deep convolutional networks; we present V1Net -- a novel convolutional-recurrent unit that models linear and nonlinear horizontal inhibitory and excitatory connections inspired by primate visual cortical connectivity.We introduce the Texturized Challenge -- a new benchmark to evaluate object recognition performance under perceptual noise -- which we use to evaluate V1Net against an array of carefully selected control models with/without recurrent processing.Additionally, we present results from an ablation study of V1Net demonstrating the utility of diverse neurally inspired horizontal connections for state-of-the-art AI systems on the task of object boundary detection from natural images.We also present the emergence of several biologically plausible horizontal connectivity patterns, namely center-on surround-off, association fields and border-ownership connectivity patterns in a V1Net model trained to perform boundary detection on natural images from the Berkeley Segmentation Dataset 500 (BSDS500).Our findings suggest an increased representational similarity between V1Net and biological visual systems, and highlight the importance of neurally inspired recurrent contextual processing principles for learning visual representations that are robust to perceptual noise and furthering the state-of-the-art in computer vision.", "target": ["In dieser Arbeit stellen wir V1Net vor - ein neuartiges rekurrentes neuronales Netzwerk, das kortikale horizontale Verbindungen modelliert, die zu robusten visuellen Repräsentationen durch Wahrnehmungsgruppierung führen.", "Die Autoren schlagen vor, eine Convolutional Variante des LSTM zu modifizieren, um horizontale Verbindungen einzubeziehen, die von bekannten Interaktionen im visuellen Kortex inspiriert sind."]} {"source": "Humans understand novel sentences by composing meanings and roles of core language components.In contrast, neural network models for natural language modeling fail when such compositional generalization is required.The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance.Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models.Throughout a variety of experiments on the SCAN tasks, we analyze the behavior of existing models 
under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type compositional generalization required in human language understanding.", "target": ["Wir schlagen eine Verbindung zwischen Permutationsäquivarianz und kompositioneller Verallgemeinerung vor und bieten äquivariante Sprachmodelle", "Diese Arbeit konzentriert sich auf das Lernen von lokal äquivarianten Repräsentationen und Funktionen über Eingabe-/Ausgabewörter für die Zwecke der SCAN-Aufgabe."]} {"source": "Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks. A key challenge of variational inference is to approximate the posterior over model parameters with a distribution that is simpler and tractable yet sufficiently expressive.In this work, we propose a method for training highly flexible variational distributions by starting with a coarse approximation and iteratively refining it.Each refinement step makes cheap, local adjustments and only requires optimization of simple variational families.We demonstrate theoretically that our method always improves a bound on the approximation (the Evidence Lower BOund) and observe this empirically across a variety of benchmark tasks. In experiments, our method consistently outperforms recent variational inference methods for deep learning in terms of log-likelihood and the ELBO. We see that the gains are further amplified on larger scale models, significantly outperforming standard VI and deep ensembles on residual networks on CIFAR10.", "target": ["Das Papier schlägt einen Algorithmus vor, um die Flexibilität des Variationsposteriores in Bayesschen Neuronalen Netzen durch iterative Optimierung zu erhöhen.", "Eine Methode zum Trainieren flexibler Variationsposteriori-Verteilungen, angewandt auf Bayes'sche neuronale Netze, um Variationsinferenz (VI) über die Gewichte durchzuführen."]} {"source": "In this paper, we propose a residual non-local attention network for high-quality image restoration.Without considering the uneven distribution of information in the corrupted images, previous methods are restricted by local convolutional operation and equal treatment of spatial- and channel-wise features.To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts.Specifically, we design trunk branch and (non-)local mask branch in each (non-)local attention block.The trunk branch is used to extract hierarchical features.Local and non-local mask branches aim to adaptively rescale these hierarchical features with mixed attentions.The local mask branch concentrates on more local structures with convolutional operations, while non-local attention considers more about long-range dependencies in the whole feature map.Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhance the representation ability of the network.Our proposed method can be generalized for various image restoration applications, such as image denoising, demosaicing, compression artifacts reduction, and super-resolution.Experiments demonstrate that our method obtains comparable or better results compared with recently leading methods quantitatively and visually.", "target": ["Neuer hochmoderner Rahmen für die Bildwiederherstellung.", "In dem Artikel wird eine Architektur 
für ein Convolutional Neural Network vorgeschlagen, das Blöcke für lokale und nicht-lokale Aufmerksamkeitsmechanismen enthält, die angeblich für die Erzielung hervorragender Ergebnisse in vier Bildwiederherstellungsanwendungen verantwortlich sind.", "In diesem Beitrag wird ein nicht-lokales Aufmerksamkeitsnetz für die Bildwiederherstellung vorgeschlagen."]} {"source": "Most approaches to learning action planning models heavily rely on a significantly large volume of training samples or plan observations.In this paper, we adopt a different approach based on deductive learning from domain-specific knowledge, specifically from logic formulae that specify constraints about the possible states of a given domain.The minimal input observability required by our approach is a single example composed of a full initial state and a partial goal state.We will show that exploiting specific domain knowledge enables to constrain the space of possible action models as well as to complete partial observations, both of which turn out helpful to learn good-quality action models.", "target": ["Hybrider Ansatz zur Modellerstellung, der einen Mangel an verfügbaren Daten durch bereichsspezifisches Wissen von Experten ausgleicht.", "Ein Domänenerfassungsansatz, der eine andere Darstellung des partiellen Domänenmodells in Betracht zieht, indem er schematische Mutex-Relationen anstelle von Vor- und Nachbedingungen verwendet."]} {"source": "We release the largest public ECG dataset of continuous raw signals for representation learning containing over 11k patients and 2 billion labelled beats.Our goal is to enable semi-supervised ECG models to be made as well as to discover unknown subtypes of arrhythmia and anomalous ECG signal events.To this end, we propose an unsupervised representation learning task, evaluated in a semi-supervised fashion. We provide a set of baselines for different feature extractors that can be built upon. 
Additionally, we perform qualitative evaluations on results from PCA embeddings, where we identify some clustering of known subtypes indicating the potential for representation learning in arrhythmia sub-type discovery.", "target": ["Wir veröffentlichen einen Datensatz, der aus Einleitungs-EKG-Daten von 11.000 Patienten besteht, denen das {DEVICENAME}(TM)-Gerät verschrieben wurde.", "Diese Arbeit beschreibt einen großen EKG-Datensatz, den die Autoren zu veröffentlichen beabsichtigen, und bietet eine unüberwachte Analyse und Visualisierung des Datensatzes."]} {"source": "As the basic building block of Convolutional Neural Networks (CNNs), the convolutional layer is designed to extract local patterns and lacks the ability to model global context in its nature.Many efforts have been recently made to complement CNNs with the global modeling ability, especially by a family of works on global feature interaction.In these works, the global context information is incorporated into local features before they are fed into convolutional layers.However, research on neuroscience reveals that, besides influences changing the inputs to our neurons, the neurons' ability of modifying their functions dynamically according to context is essential for perceptual tasks, which has been overlooked in most of CNNs.Motivated by this, we propose one novel Context-Gated Convolution (CGC) to explicitly modify the weights of convolutional layers adaptively under the guidance of global context.As such, being aware of the global context, the modulated convolution kernel of our proposed CGC can better extract representative local patterns and compose discriminative features.Moreover, our proposed CGC is lightweight, amenable to modern CNN architectures, and consistently improves the performance of CNNs according to extensive experiments on image classification, action recognition, and machine translation.", "target": ["Eine neuartige Context-Gated Convolution, die durch explizite Modulation von Convolutional Kernels globale Kontextinformationen in CNNs einbezieht und so repräsentativere lokale Muster erfasst und diskriminierende Merkmale extrahiert.", "In dieser Arbeit wird globaler Kontext verwendet, um die Gewichte von Convolutional Layers zu modulieren und CNNs dabei zu helfen, diskriminantere Merkmale mit hoher Leistung und weniger Parametern zu erfassen als bei der Modulation von Merkmalskarten."]} {"source": "We analyze the trade-off between quantization noise and clipping distortion in low precision networks.We identify the statistics of various tensors, and derive exact expressions for the mean-square-error degradation due to clipping.By optimizing these expressions, we show marked improvements over standard quantization schemes that normally avoid clipping.For example, just by choosing the accurate clipping values, more than 40\\% accuracy improvement is obtained for the quantization of VGG-16 to 4-bits of precision.Our results have many applications for the quantization of neural networks at both training and inference time.", "target": ["Wir analysieren den Kompromiss zwischen Quantisierungsstörungen und Clipping-Verzerrung in Netzwerken mit geringer Präzision und zeigen deutliche Verbesserungen gegenüber Standard-Quantisierungsschemata, die normalerweise Clipping vermeiden.", "Leitet eine Formel zur Ermittlung der minimalen und maximalen Clipping-Werte für eine gleichmäßige Quantisierung ab, die den quadratischen Fehler, der sich aus der Quantisierung ergibt, entweder für eine Laplace- oder Gauß-Verteilung 
über den vorquantisierten Wert minimiert."]} {"source": "Batch Normalization (BN) is one of the most widely used techniques in Deep Learning field.But its performance can awfully degrade with insufficient batch size.This weakness limits the usage of BN on many computer vision tasks like detection or segmentation, where batch size is usually small due to the constraint of memory consumption.Therefore many modified normalization techniques have been proposed, which either fail to restore the performance of BN completely, or have to introduce additional nonlinear operations in inference procedure and increase huge consumption.In this paper, we reveal that there are two extra batch statistics involved in backward propagation of BN, which have never been well discussed before.The extra batch statistics associated with gradients also can severely affect the training of deep neural network.Based on our analysis, we propose a novel normalization method, named Moving Average Batch Normalization (MABN).MABN can completely restore the performance of vanilla BN in small batch cases, without introducing any additional nonlinear operations in inference procedure.We prove the benefits of MABN by both theoretical analysis and experiments.Our experiments demonstrate the effectiveness of MABN in multiple computer vision tasks including ImageNet and COCO.The code has been released in https://github.com/megvii-model/MABN.", "target": ["Wir schlagen eine neuartige Normalisierungsmethode vor, um Fälle mit kleinen Batch Größen zu behandeln.", "Eine Methode zur Bewältigung des Problems der kleinen Batch Größe von BN, die den gleitenden Mittelwert ohne allzu großen Aufwand anwendet und die Anzahl der Statistiken von BN für eine bessere Stabilität reduziert."]} {"source": "We present a simple proof for the benefit of depth in multi-layer feedforward network with rectified activation (\"depth separation\").Specifically we present a sequence of classification problems f_i such that (a) for any fixed depth rectified network we can find an index m such that problems with index > m require exponential network width to fully represent the function f_m; and (b) for any problem f_m in the family, we present a concrete neural network with linear depth and bounded width that fully represents it.While there are several previous works showing similar results, our proof uses substantially simpler tools and techniques, and should be accessible to undergraduate students in computer science and people with similar backgrounds.", "target": ["ReLU MLP Tiefenseparationsbeweis mit geometrischen Argumenten.", "Ein Beweis dafür, dass tiefere Netze weniger Einheiten benötigen als flachere für eine Familie von Problemen. 
"]} {"source": "The rich and accessible labeled data fuel the revolutionary success of deep learning.Nonetheless, massive supervision remains a luxury for many real applications, boosting great interest in label-scarce techniques such as few-shot learning (FSL).An intuitively feasible approach to FSL is to conduct data augmentation via synthesizing additional training samples.The key to this approach is how to guarantee both discriminability and diversity of the synthesized samples.In this paper, we propose a novel FSL model, called $\\textrm{D}^2$GAN, which synthesizes Diverse and Discriminative features based on Generative Adversarial Networks (GAN).$\\textrm{D}^2$GAN secures discriminability of the synthesized features by constraining them to have high correlation with real features of the same classes while low correlation with those of different classes. Based on the observation that noise vectors that are closer in the latent code space are more likely to be collapsed into the same mode when mapped to feature space, $\\textrm{D}^2$GAN incorporates a novel anti-collapse regularization term, which encourages feature diversity by penalizing the ratio of the logarithmic similarity of two synthesized features and the logarithmic similarity of the latent codes generating them.Experiments on three common benchmark datasets verify the effectiveness of $\\textrm{D}^2$GAN by comparing with the state-of-the-art.", "target": ["Ein neuer GAN-basierter Few-Shot Lernalgorithmus durch Synthese verschiedener und diskriminierender Merkmale.", "Eine Meta-Lernmethode, die ein generatives Modell erlernt, das die Unterstützungsmenge eines Few-Shot Lerners, der eine Kombination von Verlusten optimiert, erweitern kann."]} {"source": "The lack of crisp mathematical models that capture the structure of real-worlddata sets is a major obstacle to the detailed theoretical understanding of deepneural networks.Here, we first demonstrate the effect of structured data setsby experimentally comparing the dynamics and the performance of two-layernetworks trained on two different data sets:(i) an unstructured synthetic dataset containing random i.i.d. 
inputs, and (ii) a simple canonical data set such as MNIST images.Our analysis reveals two phenomena related to the dynamics of the networks and their ability to generalise that only appear when training on structured data sets.Second, we introduce a generative model for data sets, where high-dimensional inputs lie on a lower-dimensional manifold and have labels that depend only on their position within this manifold.We call it the *hidden manifold model* and we experimentally demonstrate that training networks on data sets drawn from this model reproduces both the phenomena seen during training on MNIST.", "target": ["Wir zeigen, wie sich die Struktur von Datensätzen auf neuronale Netze auswirkt, und stellen ein generatives Modell für synthetische Datensätze vor, das diese Auswirkungen reproduziert.", "In dem Beitrag wird untersucht, wie sich verschiedene Einstellungen der Datenstruktur auf das Lernen neuronaler Netze auswirken und wie das Verhalten auf realen Datensätzen beim Lernen auf einem synthetischen Datensatz nachgeahmt werden kann."]} {"source": "In this paper, we study deep diagonal circulant neural networks, that is deep neural networks in which weight matrices are the product of diagonal and circulant ones.Besides making a theoretical analysis of their expressivity, we introduce principled techniques for training these models: we devise an initialization scheme and propose a smart use of non-linearity functions in order to train deep diagonal circulant networks. Furthermore, we show that these networks outperform recently introduced deep networks with other types of structured layers.We conduct a thorough experimental study to compare the performance of deep diagonal circulant networks with state of the art models based on structured matrices and with dense models.We show that our models achieve better accuracy than other structured approaches while requiring 2x fewer weights than the next best approach.Finally we train deep diagonal circulant networks to build compact and accurate models on a real world video classification dataset with over 3.8 million training examples.", "target": ["Wir trainieren tiefe neuronale Netze, die auf diagonalen und zirkulanten Matrizen basieren, und zeigen, dass diese Art von Netzen sowohl kompakt als auch genau in realen Anwendungen sind.", "Die Autoren liefern eine theoretische Analyse der Ausdruckskraft von diagonalen zirkulanten neuronalen Netzen (DCNN) und schlagen ein Initialisierungsschema für tiefe DCNNs vor."]} {"source": "Interpretability has largely focused on local explanations, i.e. 
explaining why a model made a particular prediction for a sample.These explanations are appealing due to their simplicity and local fidelity.However, they do not provide information about the general behavior of the model.We propose to leverage model distillation to learn global additive explanations that describe the relationship between input features and model predictions.These global explanations take the form of feature shapes, which are more expressive than feature attributions.Through careful experimentation, we show qualitatively and quantitatively that global additive explanations are able to describe model behavior and yield insights about models such as neural nets.A visualization of our approach applied to a neural net as it is trained is available at https://youtu.be/ErQYwNqzEdc", "target": ["Wir schlagen vor, die Modelldestillation zu nutzen, um globale additive Erklärungen in Form von Merkmalsformen (die aussagekräftiger sind als Merkmalszuweisungen) für Modelle wie neuronale Netze zu lernen, die auf tabellarischen Daten trainiert werden.", "In diesem Beitrag werden verallgemeinerte additive Modelle (GAMs) mit Modelldestillation eingesetzt, um globale Erklärungen für neuronale Netze zu liefern."]} {"source": "A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner.These representations are typically used as general purpose features for words across a range of NLP problems.However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem.Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations.In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. 
We train this model on several data sources with multiple training objectives on over 100 million sentences.Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods.We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.", "target": ["Ein groß angelegter Multi-Task Lernrahmen mit verschiedenen Trainingszielen zum Erlernen von Satzrepräsentationen fester Länge.", "In dieser Arbeit geht es um das Erlernen von Satzeinbettungen durch die Kombination verschiedener Trainingssignale: Überspringen von Gedanken, Vorhersage von Übersetzungen, Klassifizierung von Entailment-Beziehungen und Vorhersage der Konstituentenparse."]} {"source": "In a time where neural networks are increasingly adopted in sensitive applications, algorithmic bias has emerged as an issue with moral implications.While there are myriad ways that a system may be compromised by bias, systematically isolating and evaluating existing systems on such scenarios is non-trivial, i.e., bias may be subtle, natural and inherently difficult to quantify.To this end, this paper proposes the first systematic study of benchmarking state-of-the-art neural models against biased scenarios.More concretely, we postulate that the bias annotator problem can be approximated with neural models, i.e., we propose generative models of latent bias to deliberately and unfairly associate latent features to a specific class.All in all, our framework provides a new way for principled quantification and evaluation of models against biased datasets.Consequently, we find that state-of-the-art NLP models (e.g., BERT, RoBERTa, XLNET) are readily compromised by biased data.", "target": ["Wir schlagen einen neuronalen Bias Annotator vor, um Modelle auf ihre Robustheit gegenüber verzerrten Textdatensätzen zu testen.", "Eine Methode zur Erzeugung voreingenommener Datensätze für NLP, die auf einem bedingten, adversarial regularisierten Autoencoder (CARA) beruht."]} {"source": "We consider the problem of topic modeling in a weakly semi-supervised setting.In this scenario, we assume that the user knows a priori a subset of the topics she wants the model to learn and is able to provide a few exemplar documents for those topics.In addition, while each document may typically consist of multiple topics, we do not assume that the user will identify all its topics exhaustively. Recent state-of-the-art topic models such as NVDM, referred to herein as Neural Topic Models (NTMs), fall under the variational autoencoder framework.We extend NTMs to the weakly semi-supervised setting by using informative priors in the training objective.After analyzing the effect of informative priors, we propose a simple modification of the NVDM model using a logit-normal posterior that we show achieves better alignment to user-desired topics versus other NTM models.", "target": ["Wir schlagen vor, Themenmodelle im VAE-Stil zu überwachen, indem wir den Prior auf intelligenter Basis pro Dokument anpassen. 
Wir finden, dass ein Logit-Normal-Posterior die beste Leistung liefert.", "Eine flexible Methode zur schwachen Überwachung eines Themenmodells, um eine bessere Anpassung an die Intuition der Benutzer zu erreichen."]} {"source": "Analyzing deep neural networks (DNNs) via information plane (IP) theory has gained tremendous attention recently as a tool to gain insight into, among others, their generalization ability.However, it is by no means obvious how to estimate mutual information (MI) between each hidden layer and the input/desired output, to construct the IP.For instance, hidden layers with many neurons require MI estimators with robustness towards the high dimensionality associated with such layers.MI estimators should also be able to naturally handle convolutional layers, while at the same time being computationally tractable to scale to large networks.None of the existing IP methods to date have been able to study truly deep Convolutional Neural Networks (CNNs), such as the e.g.\\ VGG-16.In this paper, we propose an IP analysis using the new matrix--based R\\'enyi's entropy coupled with tensor kernels over convolutional layers, leveraging the power of kernel methods to represent properties of the probability distribution independently of the dimensionality of the data.The obtained results shed new light on the previous literature concerning small-scale DNNs, however using a completely new approach.Importantly, the new framework enables us to provide the first comprehensive IP analysis of contemporary large-scale DNNs and CNNs, investigating the different training phases and providing new insights into the training dynamics of large-scale neural networks.", "target": ["Erste umfassende Analyse der Informationsebene von großen tiefen neuronalen Netzen unter Verwendung von matrixbasierter Entropie und Tensorkernels.", "Die Autoren schlagen einen Tensor-Kernel-basierten Schätzer für die Schätzung der gegenseitigen Information zwischen hochdimensionalen Schichten in einem neuronalen Netz vor."]} {"source": "Developing agents that can learn to follow natural language instructions has been an emerging research direction.While being accessible and flexible, natural language instructions can sometimes be ambiguous even to humans.To address this, we propose to utilize programs, structured in a formal language, as a precise and expressive way to specify tasks.We then devise a modular framework that learns to perform a task specified by a program – as different circumstances give rise to diverse ways to accomplish the task, our framework can perceive which circumstance it is currently under, and instruct a multitask policy accordingly to fulfill each subtask of the overall task.Experimental results on a 2D Minecraft environment not only demonstrate that the proposed framework learns to reliably accomplish program instructions and achieves zero-shot generalization to more complex instructions but also verify the efficiency of the proposed modulation mechanism for learning the multitask policy.We also conduct an analysis comparing various models which learn from programs and natural language instructions in an end-to-end fashion.", "target": ["Wir schlagen einen modularen Rahmen vor, der durch Programme spezifizierte Aufgaben bewältigen kann und eine Zero-Shot Verallgemeinerung auf komplexere Aufgaben ermöglicht.", "Diese Arbeit untersucht das Training von RL-Agenten mit Anweisungen und Aufgabendekompositionen, die als Programme formalisiert sind, und schlägt ein Modell für einen 
programmgesteuerten Agenten vor, der ein Programm interpretiert und Teilziele für ein Aktionsmodul vorschlägt."]} {"source": "We analyze the convergence of (stochastic) gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit (ReLU) activation function.Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input.We show that (stochastic) gradient descent with random initialization can learn the convolutional filter in polynomial time and the convergence rate depends on the smoothness of the input distribution and the closeness of patches.To the best of our knowledge, this is the first recovery guarantee of gradient-based algorithms for convolutional filter on non-Gaussian input distributions.Our theory also justifies the two-stage learning rate strategy in deep neural networks.While our focus is theoretical, we also present experiments that justify our theoretical findings.", "target": ["Wir beweisen, dass ein zufällig initialisierter (stochastischer) Gradientenabstieg einen Convolutional Filter in polynomieller Zeit lernt.", "Untersucht das Problem des Lernens eines einzelnen Convolutional Filter mit SGD und zeigt, dass SGD unter bestimmten Bedingungen einen einzelnen Convolutional Filter lernt.", "In dieser Arbeit wird die Annahme der Gaußschen Verteilung auf eine allgemeinere Annahme der Winkelglätte erweitert, die eine größere Familie von Eingangsverteilungen abdeckt."]} {"source": "Deep neural networks (DNNs) are widely adopted in real-world cognitive applications because of their high accuracy.The robustness of DNN models, however, has been recently challenged by adversarial attacks where small disturbance on input samples may result in misclassification.State-of-the-art defending algorithms, such as adversarial training or robust optimization, improve DNNs' resilience to adversarial attacks by paying high computational costs.Moreover, these approaches are usually designed to defend one or a few known attacking techniques only.The effectiveness to defend other types of attacking methods, especially those that have not yet been discovered or explored, cannot be guaranteed.This work aims for a general approach of enhancing the robustness of DNN models under adversarial attacks.In particular, we propose Bamboo -- the first data augmentation method designed for improving the general robustness of DNN without any hypothesis on the attacking algorithms.Bamboo augments the training data set with a small amount of data uniformly sampled on a fixed radius ball around each training data and hence, effectively increase the distance between natural data points and decision boundary.Our experiments show that Bamboo substantially improve the general robustness against arbitrary types of attacks and noises, achieving better results comparing to previous adversarial training methods, robust optimization methods and other data augmentation methods with the same amount of data points.", "target": ["Die erste Datenerweiterungsmethode, die speziell zur Verbesserung der allgemeinen Robustheit von DNN entwickelt wurde, ohne Hypothesen über die angreifenden Algorithmen aufzustellen.", "Es wird eine Trainingsmethode zur Datenerweiterung vorgeschlagen, um die Robustheit des Modells gegenüber Störungen durch den Gegner zu erhöhen, indem gleichmäßige Zufallsstichproben aus einer Kugel mit festem Radius, die auf die Trainingsdaten zentriert 
ist, erweitert werden. "]} {"source": "The ability to synthesize realistic patterns of neural activity is crucial for studying neural information processing.Here we used the Generative Adversarial Networks (GANs) framework to simulate the concerted activity of a population of neurons.We adapted the Wasserstein-GAN variant to facilitate the generation of unconstrained neural population activity patterns while still benefiting from parameter sharing in the temporal domain.We demonstrate that our proposed GAN, which we termed Spike-GAN, generates spike trains that match accurately the first- and second-order statistics of datasets of tens of neurons and also approximates well their higher-order statistics.We applied Spike-GAN to a real dataset recorded from salamander retina and showed that it performs as well as state-of-the-art approaches based on the maximum entropy and the dichotomized Gaussian frameworks.Importantly, Spike-GAN does not require to specify a priori the statistics to be matched by the model, and so constitutes a more flexible method than these alternative approaches.Finally, we show how to exploit a trained Spike-GAN to construct 'importance maps' to detect the most relevant statistical structures present in a spike train. Spike-GAN provides a powerful, easy-to-use technique for generating realistic spiking neural activity and for describing the most relevant features of the large-scale neural population recordings studied in modern systems neuroscience.", "target": ["Verwendung von Wasserstein-GANs zur Erzeugung realistischer neuronaler Aktivität und zur Erkennung der wichtigsten Merkmale in neuronalen Populationsmustern.", "Eine Methode zur Simulation von Spike Trains von Neuronenpopulationen, die mit empirischen Daten übereinstimmen, unter Verwendung eines semi-convolutional GAN.", "In der Arbeit wird vorgeschlagen, GANs für die Synthese realistischer neuronaler Aktivitätsmuster zu verwenden."]} {"source": "Deep latent variable models have become a popular model choice due to the scalable learning algorithms introduced by (Kingma & Welling 2013, Rezende et al. 2014).These approaches maximize a variational lower bound on the intractable log likelihood of the observed data.Burda et al. (2015) introduced a multi-sample variational bound, IWAE, that is at least as tight as the standard variational lower bound and becomes increasingly tight as the number of samples increases.Counterintuitively, the typical inference network gradient estimator for the IWAE bound performs poorly as the number of samples increases (Rainforth et al. 2018, Le et al. 
2018).Roeder et al. (2017) propose an improved gradient estimator; however, they are unable to show it is unbiased.We show that it is in fact biased and that the bias can be estimated efficiently with a second application of the reparameterization trick.The doubly reparameterized gradient (DReG) estimator does not suffer as the number of samples increases, resolving the previously raised issues.The same idea can be used to improve many recently introduced training techniques for latent variable models.In particular, we show that this estimator reduces the variance of the IWAE gradient, the reweighted wake-sleep update (RWS) (Bornschein & Bengio 2014), and the jackknife variational inference (JVI) gradient (Nowozin 2018).Finally, we show that this computationally efficient, drop-in estimator translates to improved performance for all three objectives on several modeling tasks.", "target": ["Doppelt reparametrisierte Gradientenschätzer bieten eine unvoreingenommene Varianzreduktion, die zu einer verbesserten Leistung führt.", "Der Autor fand experimentell heraus, dass der Schätzer der bestehenden Arbeit (STL) voreingenommen ist und schlägt vor, die Voreingenommenheit zu reduzieren, um den Gradientenschätzer des ELBO zu verbessern."]} {"source": "Zeroth-order optimization is the process of minimizing an objective $f(x)$, given oracle access to evaluations at adaptively chosen inputs $x$.In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on an underlying gradient estimate and are numerically stable.We analyze our algorithm from a novel geometric perspective and we show that for {\\it any monotone transform} of a smooth and strongly convex objective with latent dimension $k \\ge n$, we present a novel analysis that shows convergence within an $\\epsilon$-ball of the optimum in $O(kQ\\log(n)\\log(R/\\epsilon))$ evaluations, where the input dimension is $n$, $R$ is the diameter of the input space and $Q$ is the condition number.Our rates are the first of their kind to be both 1) poly-logarithmically dependent on dimensionality and 2) invariant under monotone transformations.We further leverage our geometric perspective to show that our analysis is optimal.Both monotone invariance and its ability to utilize a low latent dimensionality are key to the empirical success of our algorithms, as demonstrated on synthetic and MuJoCo benchmarks.", "target": ["Gradientless Descent ist ein nachweislich effizienter gradientenfreier Algorithmus, der monoton-invariant und schnell für hochdimensionale Optimierung nullter Ordnung ist.", "Diese Arbeit schlägt stabile GradientLess Descent (GLD) Algorithmen vor, die sich nicht auf eine Gradientenschätzung verlassen."]} {"source": "Many processes can be concisely represented as a sequence of events leading from a starting state to an end state.Given raw ingredients, and a finished cake, an experienced chef can surmise the recipe.Building upon this intuition, we propose a new class of visual generative models: goal-conditioned predictors (GCP).Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video.GCP instead treats videos as start-goal transformations, making video generation easier by conditioning on the more informative context provided by the first and final frames. 
Not only do existing forward prediction approaches synthesize better and longer videos when modified to become goal-conditioned, but GCP models can also utilize structures that are not linear in time, to accomplish hierarchical prediction. To this end, we study both auto-regressive GCP models and novel tree-structured GCP models that generate frames recursively, splitting the video iteratively into finer and finer segments delineated by subgoals. In experiments across simulated and real datasets, our GCP methods generate high-quality sequences over long horizons. Tree-structured GCPs are also substantially easier to parallelize than auto-regressive GCPs, making training and inference very efficient, and allowing the model to train on sequences that are thousands of frames in length.Finally, we demonstrate the utility of GCP approaches for imitation learning in the setting without access to expert actions. Videos are on the supplementary website: https://sites.google.com/view/video-gcp", "target": ["Wir schlagen eine neue Klasse von visuellen generativen Modellen vor: zielkonditionierte Prädiktoren. Wir zeigen experimentell, dass die Konditionierung auf das Ziel es ermöglicht, die Unsicherheit zu reduzieren und Vorhersagen über viel längere Zeiträume zu erstellen.", "In dieser Arbeit wird das Problem der Videovorhersage als Interpolation statt als Extrapolation neu formuliert, indem die Vorhersage auf das Start- und Endbild (Zielbild) konditioniert wird, was zu einer höheren Qualität der Vorhersagen führt."]} {"source": "Recent advances in computing technology and sensor design have made it easier to collect longitudinal or time series data from patients, resulting in a gigantic amount of available medical data.Most of the medical time series lack annotations or even when the annotations are available they could be subjective and prone to human errors.Earlier works have developed natural language processing techniques to extract concept annotations and/or clinical narratives from doctor notes.However, these approaches are slow and do not use the accompanying medical time series data.To address this issue, we introduce the problem of concept annotation for the medical time series data, i.e., the task of predicting and localizing medical concepts by using the time series data as input.We propose Relational Multi-Instance Learning (RMIL) - a deep Multi Instance Learning framework based on recurrent neural networks, which uses pooling functions and attention mechanisms for the concept annotation tasks.Empirical results on medical datasets show that our proposed models outperform various multi-instance learning models.", "target": ["Wir schlagen ein tiefes Multiinstanz-Lernsystem vor, das auf rekurrenten neuronalen Netzen basiert und Pooling-Funktionen und Aufmerksamkeitsmechanismen für die Annotation von Konzepten verwendet.", "Das Papier befasst sich mit der Klassifizierung medizinischer Zeitreihendaten und schlägt vor, die zeitliche Beziehung zwischen den Instanzen in jeder Reihe mithilfe einer rekurrenten neuronalen Netzwerkarchitektur zu modellieren. ", "Vorschlagen einer neuartigen Multiple Instance Learning (MIL) Formulierung, die Relation MIL (RMIL) genannt wird, und erörtern eine Reihe ihrer Varianten mit LSTM, Bi-LSTM, S2S usw. und untersuchen der Integration von RMIL mit verschiedenen Aufmerksamkeitsmechanismen und demonstrieren ihrer Verwendung bei der Vorhersage medizinischer Konzepte aus Zeitreihendaten. 
"]} {"source": "The embedding layers transforming input words into real vectors are the key components of deep neural networks used in natural language processing.However, when the vocabulary is large, the corresponding weight matrices can be enormous, which precludes their deployment in a limited resource setting.We introduce a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance. We evaluate our method on a wide range of benchmarks in natural language processing and analyze the trade-off between performance and compression ratios for a wide range of architectures, from MLPs to LSTMs and Transformers.", "target": ["Die Einbettungsschichten werden mit der Tensor-Train-Zerlegung faktorisiert, um ihren Speicherbedarf zu reduzieren.", "Diese Arbeit schlägt ein Low-Rank-Tensor-Zerlegungsmodell zur Parametrisierung der Einbettungsmatrix in der natürlichen Sprachverarbeitung (NLP) vor, das das Netzwerk komprimiert und manchmal die Testgenauigkeit erhöht."]} {"source": "We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively (as would be expected for standard weight decay) but by an additive constant factor. We propose a simple way to resolve this issue by decoupling weight decay and the optimization steps taken w.r.t. the loss function.We provide empirical evidence that our proposed modification(i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and(ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter).We also demonstrate that longer optimization runs require smaller weight decay values for optimal results and introduce a normalized variant of weight decay to reduce this dependence.Finally, we propose a version of Adam with warm restarts (AdamWR) that has strong anytime performance while achieving state-of-the-art results on CIFAR-10 and ImageNet32x32. 
Our source code will become available after the review process.", "target": ["Regulierung des Gewichtsabfalls in adaptiven Gradientenmethoden wie Adam.", "Schlägt vor, den Gewichtsabfall von der Anzahl der Optimierungsschritte zu entkoppeln.", "In der Arbeit wird eine alternative Methode zur Umsetzung des Gewichtsverfalls in Adam vorgestellt, für die empirische Ergebnisse vorliegen.", "Untersucht die Probleme des Gewichtsverfalls in den SGD-Varianten und schlägt eine Entkopplungsmethode zwischen dem Gewichtsverfall und der gradientenbasierten Aktualisierung vor."]} {"source": "Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner where knowledge gained from previous tasks is retained and used for future learning.It is essential towards the development of intelligent machines that can adapt to their surroundings.In this work we focus on a lifelong learning approach to generative modeling where we continuously incorporate newly observed streaming distributions into our learnt model.We do so through a student-teacher architecture which allows us to learn and preserve all the distributions seen so far without the need to retain the past data nor the past models.Through the introduction of a novel cross-model regularizer, the student model leverages the information learnt by the teacher, which acts as a summary of everything seen till now.The regularizer has the additional benefit of reducing the effect of catastrophic interference that appears when we learn over streaming data.We demonstrate its efficacy on streaming distributions as well as its ability to learn a common latent representation across a complex transfer learning scenario.", "target": ["Lebenslanges verteilungsbasiertes Lernen durch eine Schüler-Lehrer Architektur in Verbindung mit einem modellübergreifenden Posterior-Regulierer."]} {"source": "Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling.In this paper, we look at geometric data represented as point clouds.We introduce a deep autoencoder (AE) network with excellent reconstruction quality and generalization ability.The learned representations outperform the state of the art in 3D recognition tasks and enable basic shape editing applications via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation.We also perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space our AEs and, Gaussian mixture models (GMM).Interestingly, GMMs trained in the latent space of our AEs produce samples of the best fidelity and diversity.To perform our quantitative evaluation of generative models, we propose simple measures of fidelity and diversity based on optimally matching between sets point clouds.", "target": ["Deep Autoencoders zum Erlernen einer guten Darstellung für geometrische 3D-Punktwolkendaten; Generative Modelle für Punktwolken.", "Ansätze zum Erlernen generativer Modelle vom Typ GAN unter Verwendung der PointNet-Architektur und des Latent-Space-GAN."]} {"source": "Despite the remarkable performance of deep neural networks (DNNs) on various tasks, they are susceptible to adversarial perturbations which makes it difficult to deploy them in real-world safety-critical applications.In this paper, we aim to obtain robust networks by sparsifying DNN's latent features sensitive to adversarial perturbation.Specifically, we define 
vulnerability at the latent feature space and then propose a Bayesian framework to prioritize/prune features based on their contribution to both the original and adversarial loss.We also suggest regularizing the features' vulnerability during training to improve robustness further.While such network sparsification has been primarily studied in the literature for computational efficiency and regularization effect of DNNs, we confirm that it is also useful to design a defense mechanism through quantitative evaluation and qualitative analysis.We validate our method, \\emph{Adversarial Neural Pruning (ANP)} on multiple benchmark datasets, which results in an improvement in test accuracy and leads to state-of-the-art robustness.ANP also tackles the practical problem of obtaining sparse and robust networks at the same time, which could be crucial to ensure adversarial robustness on lightweight networks deployed to computation and memory-limited devices.", "target": ["Wir schlagen eine neue Methode zur Unterdrückung der Anfälligkeit des latenten Merkmalsraums vor, um robuste und kompakte Netzwerke zu erhalten.", "In dieser Arbeit wird die Methode des \"adversen neuronalen Pruning\" vorgeschlagen, bei der eine Pruning-Maske und ein neuer Verlust zur Unterdrückung von Schwachstellen trainiert wird, um die Genauigkeit und die Robustheit gegenüber Angriffen zu verbessern."]} {"source": "In anomaly detection (AD), one seeks to identify whether a test sample is abnormal, given a data set of normal samples. A recent and promising approach to AD relies on deep generative models, such as variational autoencoders (VAEs),for unsupervised learning of the normal data distribution.In semi-supervised AD (SSAD), the data also includes a small sample of labeled anomalies.In this work,we propose two variational methods for training VAEs for SSAD.The intuitive idea in both methods is to train the encoder to ‘separate’ between latent vectors for normal and outlier data.We show that this idea can be derived from principled probabilistic formulations of the problem, and propose simple and effective algorithms. 
Our methods can be applied to various data types, as we demonstrate on SSAD datasets ranging from natural images to astronomy and medicine, and can be combined with any VAE model architecture.When comparing to state-of-the-art SSAD methods that are not specific to particular data types, we obtain marked improvement in outlier detection.", "target": ["Wir haben zwei VAE-Modifikationen vorgeschlagen, die negative Datenbeispiele berücksichtigen, und sie für die halbüberwachte Erkennung von Anomalien verwendet.", "In den Beiträgen werden zwei VAE-ähnliche Methoden für die halbüberwachte Neuheitserkennung vorgeschlagen, MML-VAE und DP-VAE."]} {"source": "We introduce dynamic instance hardness (DIH) to facilitate the training of machine learning models.DIH is a property of each training sample and is computed as the running mean of the sample's instantaneous hardness as measured over the training history.We use DIH to evaluate how well a model retains knowledge about each training sample over time.We find that for deep neural nets (DNNs), the DIH of a sample in relatively early training stages reflects its DIH in later stages and as a result, DIH can be effectively used to reduce the set of training samples in future epochs.Specifically, during each epoch, only samples with high DIH are trained (since they are historically hard) while samples with low DIH can be safely ignored.DIH is updated each epoch only for the selected samples, so it does not require additional computation.Hence, using DIH during training leads to an appreciable speedup.Also, since the model is focused on the historically more challenging samples, resultant models are more accurate.The above, when formulated as an algorithm, can be seen as a form of curriculum learning, so we call our framework DIH curriculum learning (or DIHCL).The advantages of DIHCL, compared to other curriculum learning approaches, are: (1) DIHCL does not require additional inference steps over the data not selected by DIHCL in each epoch, (2) the dynamic instance hardness, compared to static instance hardness (e.g., instantaneous loss), is more stable as it integrates information over the entire training history up to the present time.Making certain mathematical assumptions, we formulate the problem of DIHCL as finding a curriculum that maximizes a multi-set function $f(\\cdot)$, and derive an approximation bound for a DIH-produced curriculum relative to the optimal curriculum.Empirically, DIHCL-trained DNNs significantly outperform random mini-batch SGD and other recently developed curriculum learning methods in terms of efficiency, early-stage convergence, and final performance, and this is shown in training several state-of-the-art DNNs on 11 modern datasets.", "target": ["Ein neues Verständnis der Trainingsdynamik und Metriken der Auswendiglernhärte führen zu effizientem und nachweisbarem Lernen von Curriculums.", "Diese Arbeit formuliert DIH als ein Curriclum Learning Problem, das die Daten zum Trainieren von DNNs effektiver nutzen kann, und leitet die Theorie der Approximationsgrenze ab."]} {"source": "This paper explores many immediate connections between adaptive control and machine learning, both through common update laws as well as common concepts.Adaptive control as a field has focused on mathematical rigor and guaranteed convergence.The rapid advances in machine learning on the other hand have brought about a plethora of new techniques and problems for learning.This paper elucidates many of the numerous common connections between both 
fields such that results from both may be leveraged together to solve new problems.In particular, a specific problem related to higher order learning is solved through insights obtained from these intersections.", "target": ["Geschichte der parallelen Entwicklungen von Aktualisierungsgesetzen und Konzepten zwischen adaptiver Steuerung und Optimierung beim maschinellen Lernen."]} {"source": "Recurrent convolution (RC) shares the same convolutional kernels and unrolls them multiple times, which is originally proposed to model time-space signals.We suggest that RC can be viewed as a model compression strategy for deep convolutional neural networks.RC reduces the redundancy across layers and is complementary to most existing model compression approaches.However, the performance of an RC network can't match the performance of its corresponding standard one, i.e. with the same depth but independent convolutional kernels. This reduces the value of RC for model compression.In this paper, we propose a simple variant which improves RC networks: The batch normalization layers of an RC module are learned independently (not shared) for different unrolling steps.We provide insights on why this works.Experiments on CIFAR show that unrolling a convolutional layer several steps can improve the performance, thus indirectly playing a role in model compression.", "target": ["Rekurrente Convolution für die Modellkompression und ein Trick für das Training, d.h. das Lernen unabhängiger BN-Schichten über Schritte.", "Der Autor modifiziert das Recurrent Convolutional Neural Network (RCNN) mit unabhängiger Batch Normalisierung, wobei die experimentellen Ergebnisse des RCNN mit der Architektur des neuronalen Netzwerks ResNet kompatibel sind, wenn es die gleiche Anzahl von Schichten enthält."]} {"source": "The visual world is vast and varied, but its variations divide into structured and unstructured factors.Structured factors, such as scale and orientation, admit clear theories and efficient representation design.Unstructured factors, such as what it is that makes a cat look like a cat, are too complicated to model analytically, and so require free-form representation learning.We compose structured Gaussian filters and free-form filters, optimized end-to-end, to factorize the representation for efficient yet general learning.Our experiments on dynamic structure, in which the structured filters vary with the input, equal the accuracy of dynamic inference with more degrees of freedom while improving efficiency.(Please see https://arxiv.org/abs/1904.11487 for the full edition.)", "target": ["Dynamische rezeptive Felder mit räumlicher Gaußstruktur sind genau und effizient.", "In diesem Beitrag wird ein strukturierter Faltungsoperator zur Modellierung von Deformationen lokaler Bildregionen vorgeschlagen, der die Anzahl der Parameter erheblich reduziert."]} {"source": "It is widely known that well-designed perturbations can cause state-of-the-art machine learning classifiers to mis-label an image, with sufficiently small perturbations that are imperceptible to the human eyes.However, by detecting the inconsistency between the image and wrong label, the human observer would be alerted of the attack.In this paper, we aim to design attacks that not only make classifiers generate wrong labels, but also make the wrong labels imperceptible to human observers.To achieve this, we propose an algorithm called LabelFool which identifies a target label similar to the ground truth label and finds a perturbation of the image for this 
target label.We first find the target label for an input image by a probability model, then move the input in the feature space towards the target label.Subjective studies on ImageNet show that in the label space, our attack is much less recognizable by human observers, while objective experimental results on ImageNet show that we maintain similar performance in the image space as well as attack rates to state-of-the-art attack algorithms.", "target": ["Ein Trick für adversarial Beispiele, damit die falsch klassifizierten Labels im Bezeichnungsraum für menschliche Beobachter nicht wahrnehmbar sind.", "Eine Methode zur Konstruktion von Angriffen, die von Menschen weniger leicht erkannt werden können, indem die Zielklasse so verändert wird, dass sie der Originalklasse des Bildes ähnlich ist."]} {"source": "This paper presents noise type/position classification of various impact noises generated in a building which is a serious conflict issue in apartment complexes.For this study, a collection of floor impact noise dataset is recorded with a single microphone.Noise types/positions are selected based on a report by the Floor Management Center under Korea Environmental Corporation.Using a convolutional neural networks based classifier, the impact noise signals converted to log-scaled Mel-spectrograms are classified into noise types or positions.Also, our model is evaluated on a standard environmental sound dataset ESC-50 to show extensibility on environmental sound classification.", "target": ["In diesem Beitrag wird eine Klassifizierung der Lärmarten und -positionen verschiedener Trittschallgeräusche vorgestellt, die in einem Gebäude erzeugt werden und ein ernsthaftes Konfliktproblem in Wohnkomplexen darstellen.", "Diese Arbeit beschreibt den Einsatz von Convolutional Neural Networks in einem neuartigen Anwendungsbereich der Klassifizierung von Gebäudelärmarten und Lärmpositionen. "]} {"source": "Recordings of neural circuits in the brain reveal extraordinary dynamical richness and high variability.At the same time, dimensionality reduction techniques generally uncover low-dimensional structures underlying these dynamics.What determines the dimensionality of activity in neural circuits?What is the functional role of dimensionality in behavior and task learning?In this work we address these questions using recurrent neural network (RNN) models.We find that, depending on the dynamics of the initial network, RNNs learn to increase and reduce dimensionality in a way that matches task demands.These findings shed light on fundamental dynamical mechanisms by which neural networks solve tasks with robust representations that generalize to new cases.", "target": ["Rekurrente neuronale Netze lernen, um die Dimensionalität ihrer internen Repräsentation in Abhängigkeit von der Dynamik des Ausgangsnetzes aufgabengerecht zu erhöhen und zu verringern."]} {"source": "Domain adaptation addresses the common problem when the target distribution generating our test data drifts from the source (training) distribution.While absent assumptions, domain adaptation is impossible, strict conditions, e.g. covariate or label shift, enable principled algorithms.Recently-proposed domain-adversarial approaches consist of aligning source and target encodings, often motivating this approach as minimizing two (of three) terms in a theoretical bound on target error.Unfortunately, this minimization can cause arbitrary increases in the third term, e.g. 
they can break down under shifting label distributions.We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms.Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets.", "target": ["Anstelle von strikten Verteilungsanpassungen in traditionellen tiefen Domänenanpassungszielen, die versagen, wenn sich die Verteilung der Zielbeschriftungen ändert, schlagen wir vor, ein entspanntes Ziel mit neuen Analysen, neuen Algorithmen und experimenteller Validierung zu optimieren.", "In dieser Arbeit werden entspannte Metriken für die Bereichsanpassung vorgeschlagen, die neue theoretische Grenzen für den Zielfehler setzen."]} {"source": "In this paper, we explore \\textit{summary-to-article generation}: the task of generating long articles given a short summary, which provides finer-grained content control for the generated text.To prevent sequence-to-sequence (seq2seq) models from degenerating into language models and better controlling the long text to be generated, we propose a hierarchical generation approach which first generates a sketch of intermediate length based on the summary and then completes the article by enriching the generated sketch.To mitigate the discrepancy between the ``oracle'' sketch used during training and the noisy sketch generated during inference, we propose an end-to-end joint training framework based on multi-agent reinforcement learning.For evaluation, we use text summarization corpora by reversing their inputs and outputs, and introduce a novel evaluation method that employs a summarization system to summarize the generated article and test its match with the original input summary.Experiments show that our proposed hierarchical generation approach can generate a coherent and relevant article based on the given summary, yielding significant improvements upon conventional seq2seq models.", "target": ["Wir untersuchen die Aufgabe der Generierung von Zusammenfassungen zu Artikeln und schlagen ein hierarchisches Generierungsschema zusammen mit einem gemeinsamen Ende-zu-Ende Reinforcement Learning Framework zum Trainieren des hierarchischen Modells vor.", "Um das Problem der Degeneration bei der Generierung von Zusammenfassungen zu Artikeln zu lösen, wird in diesem Artikel ein hierarchischer Generierungsansatz vorgeschlagen, der zunächst eine Zwischenversion des Artikels und dann den vollständigen Artikel generiert."]} {"source": "When training a deep neural network for supervised image classification, one can broadly distinguish between two types of latent features of images that will drive the classification of class Y. Following the notation of Gong et al. 
(2016), we can divide features broadly into the classes of(i) “core” or “conditionally invariant” features X^ci whose distribution P(X^ci | Y) does not change substantially across domains and(ii) “style” or “orthogonal” features X^orth whose distribution P(X^orth | Y) can change substantially across domains.These latter orthogonal features would generally include features such as position, rotation, image quality or brightness but also more complex ones like hair color or posture for images of persons.We try to guard against future adversarial domain shifts by ideally just using the “conditionally invariant” features for classification.In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable.We can hence not directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called identifier or ID variable.We might know, for example, that two images show the same person, with ID referring to the identity of the person.In data augmentation, we generate several images from the same original image, with ID referring to the relevant original image.The method requires only a small fraction of images to have an ID variable.We provide a causal framework for the problem by adding the ID variable to the model of Gong et al. (2016).However, we are interested in settings where we cannot observe the domain directly and we treat domain as a latent variable.If two or more samples share the same class and identifier, (Y, ID)=(y,i), then we treat those samples as counterfactuals under different style interventions on the orthogonal or style features.Using this grouping-by-ID approach, we regularize the network to provide near constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian.This is shown to substantially improve performance in settings where domains change in terms of image quality, brightness, color changes, and more complex changes such as changes in movement and posture.We show links to questions of interpretability, fairness and transfer learning.", "target": ["Wir schlagen eine kontrafaktische Regularisierung vor, um uns vor nachteiligen Domänenverschiebungen zu schützen, die durch Verschiebungen in der Verteilung der latenten \"Stilmerkmale\" von Bildern entstehen.", "Die Arbeit erörtert Möglichkeiten zum Schutz vor nachteiligen Domänenverschiebungen mit kontrafaktischer Regularisierung durch das Erlernen eines Klassifikators, der gegenüber oberflächlichen Veränderungen (oder \"Stil\"-Merkmalen) in der Vorstellungswelt invariant ist.", "Diese Arbeit zielt auf eine robuste Bildklassifizierung gegen nachteilige Veränderungen im Bereich ab, und das Ziel wird erreicht, indem die Verwendung von sich ändernden Stilmerkmalen vermieden wird."]} {"source": "Gradient-based meta-learning algorithms require several steps of gradient descent to adapt to newly incoming tasks.This process becomes more costly as the number of samples increases. Moreover, the gradient updates suffer from several sources of noise leading to a degraded performance. In this work, we propose a meta-learning algorithm equipped with the GradiEnt Component COrrections, aGECCO cell for short, which generates a multiplicative corrective low-rank matrix which (after vectorization) corrects the estimated gradients. GECCO contains a simple decoder-like network with learnable parameters, an attention module and a so-called context input parameter. 
The context parameter of GECCO is updated to generate a low-rank corrective term for the network gradients. As a result, meta-learning requires only a few gradient updates to absorb a new task (often, a single update is sufficient in the few-shot scenario). While previous approaches address this problem by altering the learning rates, factorising network parameters or directly learning feature corrections from features and/or gradients, GECCO is an off-the-shelf generator-like unit that performs element-wise gradient corrections without the need to ‘observe’ the features and/or the gradients directly. We show that our GECCO (i) accelerates learning, (ii) performs robust corrections of the gradients corrupted by noise, and (iii) leads to notable improvements over existing gradient-based meta-learning algorithms.", "target": ["Wir schlagen einen Meta-Lerner vor, der sich schnell an mehrere Aufgaben anpassen kann, sogar in einem Schritt in einer Few-Shot Einstellung.", "In diesem Beitrag wird eine Methode zum Meta-Lernen eines Gradientenkorrekturmoduls vorgeschlagen, bei der die Vorkonditionierung durch ein neuronales Netz parametrisiert wird und ein zweistufiger Gradientenaktualisierungsprozess während der Anpassung eingebaut wird. "]} {"source": "Discriminative question answering models can overfit to superficial biases in datasets, because their loss function saturates when any clue makes the answer likely. We introduce generative models of the joint distribution of questions and answers, which are trained to explain the whole question, not just to answer it.Our question answering (QA) model is implemented by learning a prior over answers, and a conditional language model to generate the question given the answer—allowing scalable and interpretable many-hop reasoning as the question is generated word-by-word. Our model achieves competitive performance with specialised discriminative models on the SQUAD and CLEVR benchmarks, indicating that it is a more general architecture for language understanding and reasoning than previous work.The model greatly improves generalisation both from biased training data and to adversarial testing data, achieving a new state-of-the-art on ADVERSARIAL SQUAD.We will release our code.", "target": ["Modelle zur Beantwortung von Fragen, die die gemeinsame Verteilung von Fragen und Antworten modellieren, können mehr lernen als diskriminative Modelle.", "In diesem Beitrag wird ein generativer Ansatz für die textuelle und visuelle Beantwortung von Fragen vorgeschlagen, bei dem eine gemeinsame Verteilung über den Frage- und Antwortraum unter Berücksichtigung des Kontexts erlernt wird, wodurch komplexere Beziehungen erfasst werden.", "In diesem Beitrag wird ein generatives Modell für die Beantwortung von Fragen vorgestellt und vorgeschlagen, p(q,a|c) zu modellieren, faktorisiert als p(a|c) * p(q|a,c). ", "Die Autoren schlagen ein generatives QA-Modell vor, das die Verteilung von Fragen und Antworten in einem Dokument/Kontext gemeinsam optimiert. 
"]} {"source": "In this paper, we turn our attention to the interworking between the activation functions and the batch normalization, which is a virtually mandatory technique to train deep networks currently.We propose the activation function Displaced Rectifier Linear Unit (DReLU) by conjecturing that extending the identity function of ReLU to the third quadrant enhances compatibility with batch normalization.Moreover, we used statistical tests to compare the impact of using distinct activation functions (ReLU, LReLU, PReLU, ELU, and DReLU) on the learning speed and test accuracy performance of standardized VGG and Residual Networks state-of-the-art models.These convolutional neural networks were trained on CIFAR-100 and CIFAR-10, the most commonly used deep learning computer vision datasets.The results showed DReLU speeded up learning in all models and datasets.Besides, statistical significant performance assessments (p<0.05) showed DReLU enhanced the test accuracy presented by ReLU in all scenarios.Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments with one exception, in which case it presented the second best performance.Therefore, this work demonstrates that it is possible to increase performance replacing ReLU by an enhanced activation function.", "target": ["Es wird eine neue Aktivierungsfunktion namens Displaced Rectifier Linear Unit vorgeschlagen. Es wurde gezeigt, dass sie die Trainings- und Inferenzleistung von Batch normalisierten neuronalen Netzen verbessern kann.", "Die Arbeit vergleicht und rät ab von der Verwendung von Batch-Normalisierung nach der Verwendung von Rectifier Linear Units.", "In dieser Arbeit wird eine Aktivierungsfunktion, genannt \"displaced ReLU\", vorgeschlagen, um die Leistung von CNNs zu verbessern, die eine Batch-Normalisierung verwenden."]} {"source": "Encoding the input scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many vision tasks especially when dealing with multiscale input signals.We study, in this paper, a scale-equivariant CNN architecture with joint convolutions across the space and the scaling group, which is shown to be both sufficient and necessary to achieve scale-equivariant representations.To reduce the model complexity and computational burden, we decompose the convolutional filters under two pre-fixed separable bases and truncate the expansion to low-frequency components.A further benefit of the truncated filter expansion is the improved deformation robustness of the equivariant representation.Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.", "target": ["Wir konstruieren skalenäquivariante Convolutional Neural Networks in der allgemeinsten Form, die sowohl rechnerisch effizient als auch nachweislich deformationsresistent sind.", "Die Autoren schlagen eine CNN-Architektur vor, die theoretisch äquivariant zu isotropen Skalierungen und Translationen ist, indem eine zusätzliche Skalendimension zu den Aktivierungstensoren hinzugefügt wird."]} {"source": "In this paper, we diagnose deep neural networks for 3D point cloud processing to explore the utility of different network architectures.We propose a number of hypotheses on the effects of specific network architectures on the 
representation capacity of DNNs.In order to prove the hypotheses, we design five metrics to diagnose various types of DNNs from the following perspectives: information discarding, information concentration, rotation robustness, adversarial robustness, and neighborhood inconsistency.We conduct comparative studies based on such metrics to verify the hypotheses, which may shed new light on the architectural design of neural networks.Experiments demonstrated the effectiveness of our method.The code will be released when this paper is accepted.", "target": ["Wir diagnostizieren tiefe neuronale Netze für die 3D-Punktwolkenverarbeitung, um den Nutzen verschiedener Netzarchitekturen zu untersuchen. ", "Die Arbeit untersucht verschiedene neuronale Netzwerkarchitekturen für die Verarbeitung von 3D-Punktwolken und schlägt Metriken für die Robustheit gegenüber nachteiligen Einflüssen, die Rotationsrobustheit und die Nachbarschaftskonsistenz vor."]} {"source": "In this work we construct flexible joint distributions from low-dimensional conditional semi-implicit distributions.Explicitly defining the structure of the approximation allows us to make the variational lower bound tighter, resulting in more accurate inference.", "target": ["Die Nutzung der Struktur von Verteilungen verbessert die semi-implizite variationale Inferenz."]} {"source": "Imitation learning from human-expert demonstrations has been shown to be greatly helpful for challenging reinforcement learning problems with sparse environment rewards.However, it is very difficult to achieve similar success without relying on expert demonstrations.Recent works on self-imitation learning showed that imitating the agent's own past good experience could indirectly drive exploration in some environments, but these methods often lead to sub-optimal and myopic behavior.To address this issue, we argue that exploration in diverse directions by imitating diverse trajectories, instead of focusing on limited good trajectories, is more desirable for the hard-exploration tasks.We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards.Our method significantly outperforms existing self-imitation learning and count-based exploration methods on various hard-exploration tasks with local optima.In particular, we report a state-of-the-art score of more than 20,000 points on Montezuma's Revenge without using expert demonstrations or resetting to arbitrary states.", "target": ["Self-Imitation Learning von verschiedenen Trajektorien mit trajektorienbedingten Regeln.", "Diese Arbeit befasst sich mit schwierigen Explorationsaufgaben, indem die Selbstimitation auf eine vielfältige Auswahl von Trajektorien aus der Vergangenheit angewendet wird, um eine effizientere Exploration in spärlichen Belohnungsproblemen zu ermöglichen und SOTA-Ergebnisse zu erzielen."]} {"source": "We present a method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost.This is achieved by gating the deep-learning architecture on a fine-grained-level.Individual convolutional maps are turned on/off conditionally on features in the network.To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner.We also 
introduce a generally applicable tool batch-shaping that matches the marginal aggregate posteriors of features in a neural network to a pre-specified prior distribution.We use this novel technique to force gates to be more conditional on the data.We present results on CIFAR-10 and ImageNet datasets for image classification, and Cityscapes for semantic segmentation.Our results show that our method can slim down large architectures conditionally, such that the average computational cost on the data is on par with a smaller architecture, but with higher accuracy.In particular, on ImageNet, our ResNet50 and ResNet34 gated networks obtain 74.60% and 72.55% top-1 accuracy compared to the 69.76% accuracy of the baseline ResNet18 model, for similar complexity.We also show that the resulting networks automatically learn to use more features for difficult examples and fewer features for simple examples.", "target": ["Eine Methode, die neuronale Netze mit großer Kapazität mit deutlich verbesserter Genauigkeit und geringeren dynamischen Rechenkosten trainiert.", "Ein Verfahren zum Trainieren eines Netzes mit großer Kapazität, von dem nur Teile zum Zeitpunkt der Inferenz abhängig von der Eingabe verwendet werden, unter Verwendung einer feinkörnigen bedingten Auswahl und einer neuen Regularisierungsmethode, dem \"Batch Shaping\"."]} {"source": "With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data.In order to evaluate and analyse the architecture, we introduce a family of simple visual relational reasoning tasks of varying complexity.We show that the proposed architecture, when pre-trained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks when compared to a number of baseline architectures.The workings of a successfully trained model are visualised to shed some light on how the architecture functions.", "target": ["Wir stellen eine differenzierbare Ende-zu-Ende Architektur vor, die lernt, Pixel auf Prädikate abzubilden, und evaluieren sie anhand einer Reihe einfacher relationaler Argumentationsaufgaben.", "Eine Netzwerkarchitektur, die auf dem Multi-Head Self-Attention Modul basiert, um eine neue Form von relationalen Darstellungen zu erlernen, die die Dateneffizienz und die Generalisierungsfähigkeit beim Lernen von Curriculums verbessert."]} {"source": "In natural language inference, the semantics of some words do not affect the inference.Such information is considered superficial and brings overfitting.How can we represent and discard such superficial information?In this paper, we use first order logic (FOL) - a classic technique from meaning representation language – to explain what information is superficial for a given sentence pair.Such explanation also suggests two inductive biases according to its properties.We proposed a neural network-based approach that utilizes the two inductive biases.We obtain substantial improvements over extensive experiments.", "target": ["Wir verwenden neuronale Netze, um oberflächliche Informationen für die Inferenz natürlicher Sprache zu projizieren, indem wir die oberflächlichen Informationen aus der Perspektive der Logik erster Ordnung definieren und identifizieren.", "In diesem Beitrag wird versucht, oberflächliche Informationen in der natürlichsprachlichen Inferenz zu reduzieren, um eine 
Überanpassung zu verhindern, und es wird ein neuronales Graphennetz zur Modellierung der Beziehung zwischen Prämisse und Hypothese eingeführt. ", "Ein Ansatz zur Behandlung der Inferenz natürlicher Sprache unter Verwendung der Logik erster Ordnung und zur Ergänzung von NLI-Modellen mit logischen Informationen, um die Inferenz zu verbessern."]} {"source": "We propose an approach to training machine learning models that are fair in the sense that their performance is invariant under certain perturbations to the features.For example, the performance of a resume screening system should be invariant under changes to the name of the applicant.We formalize this intuitive notion of fairness by connecting it to the original notion of individual fairness put forth by Dwork et al. and show that the proposed approach achieves this notion of fairness.We also demonstrate the effectiveness of the approach on two machine learning tasks that are susceptible to gender and racial biases.", "target": ["Algorithmus für das Training eines individuell fairen Klassifizierers unter Verwendung von adversarialer Robustheit.", "In diesem Beitrag wird eine neue Definition von algorithmischer Fairness und ein Algorithmus vorgeschlagen, mit dem ein ML-Modell gefunden werden kann, das die Fairness-Bedingungen erfüllt."]} {"source": "In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets.This seed dataset of images is then augmented to create a purely synthetic training dataset, which is in turn used to train a deep neural network and test on held-out real world handwritten digits dataset spanning five Indic scripts, Kannada, Tamil, Gujarati, Malayalam, and Devanagari.We showcase the efficacy of this approach both qualitatively, by training a Boundary-seeking GAN (BGAN) that generates realistic digit images in the five languages, and also quantitatively by testing a CNN trained on the synthetic data on the real-world datasets.This establishes not only an interesting nexus between the font-datasets-world and transfer learning but also provides a recipe for universal-digit classification in any script.", "target": ["Sind Seeding und Augmentation alles, was Sie für die Klassifizierung von Ziffern in jeder Sprache brauchen?", "In diesem Beitrag werden neue Datensätze für fünf Sprachen vorgestellt und ein neuer Rahmen (SAT) für die Erstellung von Schriftbilddatensätzen für die universelle Ziffernklassifikation vorgeschlagen."]} {"source": "An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning.An especially successful algorithm has been Model Agnostic Meta-Learning (MAML), a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks.Despite MAML's popularity, a fundamental open question remains -- is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta initialization already containing high quality features?We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor.This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML where we remove the inner 
loop for all but the (task-specific) head of the underlying neural network.ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML.We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network (the NIL algorithm).We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly.", "target": ["Der Erfolg von MAML beruht auf der Wiederverwendung von Merkmalen aus der Meta-Initialisierung, die auch zu einer natürlichen Vereinfachung des Algorithmus führt, indem die innere Schleife für den Netzkörper sowie andere Erkenntnisse über den Kopf und den Körper entfernt werden.", "In der Arbeit wird festgestellt, dass die Wiederverwendung von Merkmalen der wichtigste Faktor für den Erfolg von MAML ist, und es werden neue Algorithmen vorgeschlagen, die wesentlich weniger Rechenzeit benötigen als MAML."]} {"source": "Model training remains a dominant financial cost and time investment in machine learning applications.Developing and debugging models often involve iterative training, further exacerbating this issue.With growing interest in increasingly complex models, there is a need for techniques that help to reduce overall training effort.While incremental training can save substantial time and cost by training an existing model on a small subset of data, little work has explored policies for determining when incremental training provides adequate model performance versus full retraining.We provide a method-agnostic algorithm for deciding when to incrementally train versus fully train.We call this setting of non-deterministic full- or incremental training ``Mixed Setting Training\".Upon evaluation in slot-filling tasks, we find that this algorithm provides a bounded error, avoids catastrophic forgetting, and results in a significant speedup over a policy of always fully training.", "target": ["Wir stellen einen methodenunabhängigen Algorithmus zur Verfügung, um zu entscheiden, wann ein inkrementelles Training und wann ein vollständiges Training durchgeführt werden soll. 
Dieser Algorithmus bietet eine signifikante Beschleunigung gegenüber dem vollständigen Training und verhindert katastrophales Vergessen.", "Diese Arbeit schlägt einen Ansatz vor, um zu entscheiden, wann ein Modell im Rahmen einer iterativen Modellentwicklung bei Slot-Filling-Aufgaben inkrementell oder vollständig neu trainiert werden soll."]} {"source": "Neural networks have succeeded in many reasoning tasks.Empirically, these tasks require specialized network structures, e.g., Graph Neural Networks (GNNs) perform well on many such tasks, while less structured networks fail.Theoretically, there is limited understanding of why and when a network structure generalizes better than other equally expressive ones.We develop a framework to characterize which reasoning tasks a network can learn well, by studying how well its structure aligns with the algorithmic structure of the relevant reasoning procedure.We formally define algorithmic alignment and derive a sample complexity bound that decreases with better alignment.This framework explains the empirical success of popular reasoning models and suggests their limitations.We unify seemingly different reasoning tasks, such as intuitive physics, visual question answering, and shortest paths, via the lens of a powerful algorithmic paradigm, dynamic programming (DP).We show that GNNs can learn DP and thus solve these tasks.On several reasoning tasks, our theory aligns with empirical results.", "target": ["Wir entwickeln einen theoretischen Rahmen, um zu charakterisieren, welche Denkaufgaben ein neuronales Netz gut lernen kann.", "In dem Beitrag wird ein Maß für die Klassen der algorithmischen Ausrichtung vorgeschlagen, mit dem gemessen wird, wie \"nah\" neuronale Netze an bekannten Algorithmen sind, und das die Verbindung zwischen mehreren Klassen bekannter Algorithmen und Architekturen neuronaler Netze nachweist."]} {"source": "Cell-cell interactions have an integral role in tumorigenesis as they are critical in governing immune responses.As such, investigating specific cell-cell interactions has the potential to not only expand upon the understanding of tumorigenesis, but also guide clinical management of patient responses to cancer immunotherapies.A recent imaging technique for exploring cell-cell interactions, multiplexed ion beam imaging by time-of-flight (MIBI-TOF), allows for cells to be quantified in 36 different protein markers at sub-cellular resolutions in situ as high resolution multiplexed images.To explore the MIBI images, we propose a GAN for multiplexed data with protein specific attention.By conditioning image generation on cell types, sizes, and neighborhoods through semantic segmentation maps, we are able to observe how these factors affect cell-cell interactions simultaneously in different protein channels.Furthermore, we design a set of metrics and offer the first insights towards cell spatial orientations, cell protein expressions, and cell neighborhoods.Our model, cell-cell interaction GAN (CCIGAN), outperforms or matches existing image synthesis methods on all conventional measures and significantly outperforms on biologically motivated metrics.To our knowledge, we are the first to systematically model multiple cellular protein behaviors and interactions under simulated conditions through image synthesis.", "target": ["Wir erforschen Zell-Zell Interaktionen in verschiedenen Kontexten der Tumorumgebung, die in hochgradig multiplexen Bildern beobachtet werden, durch Bildsynthese unter Verwendung einer neuartigen Aufmerksamkeits 
GAN Architektur.", "Eine neue Methode zur Modellierung von Daten, die durch Multiplex-Ionenstrahl Imaging per Time of Flight (MIBI-TOF) generiert werden, durch Lernen der Many-to-Many Zuordnung zwischen Zelltypen und Expressionsniveaus von Proteinmarkern."]} {"source": "Machine learning models for question-answering (QA), where given a question and a passage, the learner must select some span in the passage as an answer, are known to be brittle.By inserting a single nuisance sentence into the passage, an adversary can fool the model into selecting the wrong span.A promising new approach for QA decomposes the task into two stages:(i) select relevant sentences from the passage; and(ii) select a span among those sentences.Intuitively, if the sentence selector excludes the offending sentence, then the downstream span selector will be robust.While recent work has hinted at the potential robustness of two-stage QA, these methods have never, to our knowledge, been explicitly combined with adversarial training.This paper offers a thorough empirical investigation of adversarial robustness, demonstrating that although the two-stage approach lags behind single-stage span selection, adversarial training improves its performance significantly, leading to an improvement of over 22 points in F1 score over the adversarially-trained single-stage model.", "target": ["Ein zweistufiger Ansatz, der aus einer Satzauswahl und einer anschließenden Abstandauswahl besteht, kann im Vergleich zu einem einstufigen Modell, das auf der Grundlage des gesamten Kontexts trainiert wird, robuster gegenüber adversarischen Angriffen gemacht werden.", "In diesem Beitrag wird ein bestehendes Modell untersucht und es wird festgestellt, dass eine zweistufige trainierte QS-Methode im Vergleich zu anderen Methoden nicht widerstandsfähiger gegen Angriffe von außen ist."]} {"source": "The aim of this study is to introduce a formal framework for analysis and synthesis of driver assistance systems.It applies formal methods to the verification of a stochastic human driver model built using the cognitive architecture ACT-R, and then bootstraps safety in semi-autonomous vehicles through the design of provably correct Advanced Driver Assistance Systems.The main contributions include the integration of probabilistic ACT-R models in the formal analysis of semi-autonomous systems and an abstraction technique that enables a finite representation of a large dimensional, continuous system in the form of a Markov model.The effectiveness of the method is illustrated in several case studies under various conditions.", "target": ["Verifizierung eines menschlichen Fahrermodells auf der Grundlage einer kognitiven Architektur und Synthese eines korrekt konstruierten ADAS auf dieser Grundlage."]} {"source": "In contrast to the older writing system of the 19th century, modern Hawaiian orthography employs characters for long vowels and glottal stops.These extra characters account for about one-third of the phonemes in Hawaiian, so including them makes a big difference to reading comprehension and pronunciation.However, transliterating between older and newer texts is a laborious task when performed manually.We introduce two related methods to help solve this transliteration problem automatically, given that there were not enough data to train an end-to-end deep learning model.One approach is implemented, end-to-end, using finite state transducers (FSTs).The other is a hybrid deep learning approach which approximately composes an FST with a recurrent neural 
network (RNN).We find that the hybrid approach outperforms the end-to-end FST by partitioning the original problem into one part that can be modelled by hand, using an FST, and into another part, which is easily solved by an RNN trained on the available data.", "target": ["Ein neuartiger, hybrider Deep-Learning-Ansatz bietet die beste Lösung für ein Problem mit begrenzten Daten (das für den Erhalt der hawaiischen Sprache wichtig ist)."]} {"source": "In many real-world settings, a learning model must perform few-shot classification: learn to classify examples from unseen classes using only a few labeled examples per class.Additionally, to be safely deployed, it should have the ability to detect out-of-distribution inputs: examples that do not belong to any of the classes.While both few-shot classification and out-of-distribution detection are popular topics,their combination has not been studied.In this work, we propose tasks for out-of-distribution detection in the few-shot setting and establish benchmark datasets, based on four popular few-shot classification datasets. Then, we propose two new methods for this task and investigate their performance.In sum, we establish baseline out-of-distribution detection results using standard metrics on new benchmark datasets and show improved results with our proposed methods.", "target": ["Wir untersuchen quantitativ die Out-of-Distribution Erkennung in einer Situation mit wenigen Aufnahmen, ermitteln die grundlegenden Ergebnisse mit ProtoNet, MAML und ABML und verbessern sie.", "In dem Artikel werden zwei neue Konfidenzwerte vorgeschlagen, die für die Erkennung von Out-of-Distribution bei der Few-Shot Klassifizierung besser geeignet sind, und es wird gezeigt, dass ein auf Distanzmetriken basierender Ansatz die Leistung verbessert."]} {"source": "While modern generative models are able to synthesize high-fidelity, visually appealing images, successfully generating examples that are useful for recognition tasks remains an elusive goal.To this end, our key insight is that the examples should be synthesized to recover classifier decision boundaries that would be learned from a large amount of real examples.More concretely, we treat a classifier trained on synthetic examples as ''student'' and a classifier trained on real examples as ''teacher''.By introducing knowledge distillation into a meta-learning framework, we encourage the generative model to produce examples in a way that enables the student classifier to mimic the behavior of the teacher.To mitigate the potential gap between student and teacher classifiers, we further propose to distill the knowledge in a progressive manner, either by gradually strengthening the teacher or weakening the student.We demonstrate the use of our model-agnostic distillation approach to deal with data scarcity, significantly improving few-shot learning performance on miniImageNet and ImageNet1K benchmarks.", "target": ["In diesem Beitrag wird eine progressive Wissensdestillation für das Lernen generativer Modelle vorgestellt, die auf Erkennungsaufgaben ausgerichtet sind.", "In diesem Beitrag wird gezeigt, wie man durch einfaches bis schweres Curriculum-Lernen ein generatives Modell trainieren kann, um die Few-Shot Klassifizierung zu verbessern."]} {"source": "Deep neural networks provide state-of-the-art performance for many applications of interest.Unfortunately they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs.Moreover, the 
perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models.Consequently the adversary can leverage it to attack the deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: a model-specific and a data-dependent one, and it is the latter that mainly contributes to the transferability.Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG) which approximates the data-dependent component.Experiments on various classification models trained on ImageNet demonstrate that the new approach enhances the transferability dramatically.We also find that low-capacity models have more powerful attack capability than high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled manner to construct adversarial examples with high success rates and could potentially provide us guidance for designing effective defense approaches against black-box attacks.", "target": ["Wir schlagen eine neue Methode zur Verbesserung der Übertragbarkeit von Negativbeispielen vor, indem wir den Störungsreduzierten Gradienten verwenden.", "In dieser Arbeit wird postuliert, dass eine Störung aus einer modellspezifischen und einer datenspezifischen Komponente besteht und dass die Verstärkung der letzteren am besten für adversarial Angriffe geeignet ist.", "In dieser Arbeit geht es darum, die Übertragbarkeit von Beispielen aus der Praxis von einem Modell auf ein anderes Modell zu verbessern."]} {"source": "We present the iterative two-pass decomposition flow to accelerate existing convolutional neural networks (CNNs). The proposed rank selection algorithm can effectively determine the proper ranks of the target convolutional layers for the low rank approximation.Our two-pass CP-decomposition helps prevent the instability problem.The iterative flow makes the decomposition of the deeper networks systematic.The experiment results show that VGG16 can be accelerated with a 6.2x measured speedup while the accuracy drop remains only 1.2%.", "target": ["Wir stellen den iterativen Zwei-Pass-CP Zerlegungsfluss vor, um bestehende Convolutional Neural Networks (CNNs) effektiv zu beschleunigen.", "In der Arbeit wird ein neuartiger Arbeitsablauf für die Beschleunigung und Komprimierung von CNNs vorgeschlagen, und es wird eine Methode zur Bestimmung des Zielrangs der einzelnen Schichten angesichts der angestrebten Gesamtbeschleunigung vorgeschlagen. ", "Diese Arbeit befasst sich mit dem Problem des Lernens einer Tensor-Filter-Operation niedrigen Ranges für Filterschichten in tiefen neuronalen Netzen (DNNs). 
"]} {"source": "We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bound on the Lipschitz constant of neural networks.The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming.We show how to use the sparse connectivity of a network, to significantly reduce the complexity of computation.This is specially useful for convolutional as well as pruned neural networks.We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the $\\ell_\\infty$-Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.", "target": ["LP-basierte obere Schranken für die Lipschitz-Konstante von neuronalen Netzen.", "Die Autoren untersuchen das Problem der Schätzung der Lipschitz-Konstante eines tiefen neuronalen Netzes mit ELO Aktivierungsfunktion und formulieren es als polynomielles Optimierungsproblem."]} {"source": "Although few-shot learning research has advanced rapidly with the help of meta-learning, its practical usefulness is still limited because most of the researches assumed that all meta-training and meta-testing examples came from a single domain.We propose a simple but effective way for few-shot classification in which a task distribution spans multiple domains including previously unseen ones during meta-training.The key idea is to build a pool of embedding models which have their own metric spaces and to learn to select the best one for a particular task through multi-domain meta-learning.This simplifies task-specific adaptation over a complex task distribution as a simple selection problem rather than modifying the model with a number of parameters at meta-testing time.Inspired by common multi-task learning techniques, we let all models in the pool share a base network and add a separate modulator to each model to refine the base network in its own way.This architecture allows the pool to maintain representational diversity and each model to have domain-invariant representation as well. 
Experiments show that our selection scheme outperforms other few-shot classification algorithms when target tasks could come from many different domains.They also reveal that aggregating outputs from all constituent models is effective for tasks from unseen domains, showing the effectiveness of our framework.", "target": ["Wir befassen uns mit der Few-Shot Klassifizierung in mehreren Bereichen, indem wir mehrere Modelle erstellen, um diese komplexe Aufgabenverteilung auf kollektive Weise darzustellen, und die aufgabenspezifische Anpassung als Auswahlproblem aus diesen vortrainierten Modellen vereinfachen.", "Diese Arbeit befasst sich mit der Few-shot Klassifikation mit vielen verschiedenen Domänen, indem ein Pool von Einbettungsmodellen aufgebaut wird, um domäneninvariante und domänenspezifische Merkmale zu erfassen, ohne die Anzahl der Parameter signifikant zu erhöhen."]} {"source": "Still in 2019, many scanned documents come into businesses in non-digital format.Text to be extracted from real world documents is often nestled inside rich formatting, such as tabular structures or forms with fill-in-the-blank boxes or underlines whose ink often touches or even strikes through the ink of the text itself.Such ink artifacts can severely interfere with the performance of recognition algorithms or other downstream processing tasks.In this work, we propose DeepErase, a neural preprocessor to erase ink artifacts from text images.We devise a method to programmatically augment text images with real artifacts, and use them to train a segmentation network in a weakly supervised manner.In addition to high segmentation accuracy, we show that our cleansed images achieve a significant boost in downstream recognition accuracy by popular OCR software such as Tesseract 4.0.We test DeepErase on out-of-distribution datasets (NIST SDB) of scanned IRS tax return forms and achieve double-digit improvements in recognition accuracy over baseline for both printed and handwritten text.", "target": ["Neural-basierte Entfernung von Tintenartefakten in Dokumenten (Unterstreichungen, Flecken usw.) ohne manuell beschriftete Trainingsdaten."]} {"source": "Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input.Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries.Therefore, they are not suitable for real-world systems where the maximum query number is limited due to cost.We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction.We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.", "target": ["Wir schlagen einen abfrageeffizienten Black-Box-Angriff vor, der Bayes'sche Optimierung in Kombination mit Bayes'scher Modellauswahl verwendet, um die Störung des Gegners und den optimalen Grad der Dimensionsreduktion des Suchraums zu optimieren. 
", "Die Autoren schlagen vor, die Bayes'sche Optimierung mit einem GP Surrogate für die Erzeugung von adversarial Bildern zu verwenden, indem sie die additive Struktur ausnutzen und die Bayes'sche Modellauswahl verwenden, um eine optimale Dimensionalitätsreduktion zu bestimmen."]} {"source": "Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information.Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data:1) models must learn the complex intra-modal and cross-modal interactions for prediction and2) models must be robust to unexpected missing or noisy modalities during testing.In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels.We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors.Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction.Modality-specific generative factors are unique for each modality and contain the information required for generating data.Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets.Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance.Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning.", "target": ["Wir schlagen ein Modell zum Erlernen faktorisierter multimodaler Repräsentationen vor, die diskriminativ, generativ und interpretierbar sind.", "Diese Arbeit stellt ein \"Multimodales Faktorisierungsmodell\" vor, das Repräsentationen in gemeinsame multimodale diskriminierende Faktoren und modalitätsspezifische generative Faktoren aufteilt. 
"]} {"source": "The successful application of flexible, general learning algorithms to real-world robotics applications is often limited by their poor data-efficiency.To address the challenge, domains with more than one dominant task of interest encourage the sharing of information across tasks to limit required experiment time.To this end, we investigate compositional inductive biases in the form of hierarchical policies as a mechanism for knowledge transfer across tasks in reinforcement learning (RL).We demonstrate that this type of hierarchy enables positive transfer while mitigating negative interference.Furthermore, we demonstrate the benefits of additional incentives to efficiently decompose task solutions.Our experiments show that these incentives are naturally given in multitask learning and can be easily introduced for single objectives.We design an RL algorithm that enables stable and fast learning of structured policies and the effective reuse of both behavior components and transition data across tasks in an off-policy setting.Finally, we evaluate our algorithm in simulated environments as well as physical robot experiments and demonstrate substantial improvements in data data-efficiency over competitive baselines.", "target": ["Wir entwickeln einen hierarchischen, akteurskritischen Algorithmus für den kompositorischen Transfer durch die gemeinsame Nutzung von Richtlinienkomponenten und demonstrieren die Komponentenspezialisierung und die damit verbundenen direkten Vorteile in Multitasking-Domänen sowie seine Anpassung für Einzelaufgaben.", "Eine Kombination verschiedener Lerntechniken für den Erwerb von Strukturen und das Lernen mit asymmetrischen Daten, die zum Trainieren einer HRL-Politik verwendet werden.", "Die Autoren stellen eine hierarchische Policy-Struktur vor, die sowohl für Single-Task- als auch für Multitask Reinforcement Learning verwendet werden kann, und bewerten die Nützlichkeit der Struktur bei komplexen Roboteraufgaben."]} {"source": "In this paper, we study the representational power of deep neural networks (DNN) that belong to the family of piecewise-linear (PWL) functions, based on PWL activation units such as rectifier or maxout.We investigate the complexity of such networks by studying the number of linear regions of the PWL function.Typically, a PWL function from a DNN can be seen as a large family of linear functions acting on millions of such regions.We directly build upon the work of Mont´ufar et al. (2014), Mont´ufar (2017), and Raghu et al. 
(2017) by refining the upper and lower bounds on the number of linear regions for rectified and maxout networks.In addition to achieving tighter bounds, we also develop a novel method to perform exact enumeration or counting of the number of linear regions with a mixed-integer linear formulation that maps the input space to output.We use this new capability to visualize how the number of linear regions changes while training DNNs.", "target": ["Wir zählen empirisch die Anzahl der linearen Bereiche von Gleichrichternetzen und verfeinern die oberen und unteren Grenzen.", "In diesem Beitrag werden verbesserte Grenzwerte für die Zählung der Anzahl linearer Regionen in ReLU-Netzen vorgestellt."]} {"source": "Convolutional neural networks memorize part of their training data, which is why strategies such as data augmentation and drop-out are employed to mitigate overfitting.This paper considers the related question of “membership inference”, where the goal is to determine if an image was used during training.We consider membership tests over either ensembles of samples or individual samples.First, we show how to detect if a dataset was used to train a model, and in particular whether some validation images were used at train time.Then, we introduce a new approach to infer membership when a few of the top layers are not available or have been fine-tuned, and show that lower layers still carry information about the training samples.To support our findings, we conduct large-scale experiments on Imagenet and subsets of YFCC-100M with modern architectures such as VGG and Resnet.", "target": ["Wir analysieren die Erinnerungseigenschaften durch ein Convnet der Trainingsmenge und schlagen verschiedene Anwendungsfälle vor, in denen wir einige Informationen über die Trainingsmenge extrahieren können. ", "Beleuchtet die Verallgemeinerungs-/Erinnerungseigenschaften von großen und tiefen ConvNets und versucht, Verfahren zu entwickeln, mit denen festgestellt werden kann, ob eine Eingabe für ein trainiertes ConvNet tatsächlich zum Trainieren des Netzes verwendet wurde."]} {"source": "While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent works have shown that they suffer from lack of diversity or mode collapse.The theoretical work of Arora et al. 
(2017a) suggests a dilemma about GANs’ statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse.By contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL-divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators).For various generator classes such as mixture of Gaussians, exponential families, and invertible and injective neural networks generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators can provably approximate the Wasserstein distance and/or KL-divergence.This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes.Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence or the Wasserstein distance, indicating that the lack of diversity in GANs may be caused by the sub-optimality in optimization instead of statistical inefficiency.", "target": ["GANs können Verteilungen im Prinzip stichprobeneffizient lernen, wenn die Diskriminatorklasse kompakt ist und eine starke Unterscheidungskraft gegenüber der jeweiligen Generatorklasse hat.", "Vorschlagen des Begriffs der eingeschränkten Approximierbarkeit und liefern einer Komplexitätsgrenze für Stichproben, die polynomiell in der Dimension ist und bei der Untersuchung mangelnder Vielfalt in GANs nützlich ist.", "Analysiert, dass die Integrale Wahrscheinlichkeitsmetrik unter einigen milden Annahmen eine gute Annäherung an den Wassersteinabstand sein kann."]} {"source": "Understanding the optimization trajectory is critical to understand training of deep neural networks.We show how the hyperparameters of stochastic gradient descent influence the covariance of the gradients (K) and the Hessian of the training loss (H) along this trajectory.Based on a theoretical model, we predict that using a high learning rate or a small batch size in the early phase of training leads SGD to regions of the parameter space with (1) reduced spectral norm of K, and (2) improved conditioning of K and H. 
We show that the point on the trajectory after which these effects hold, which we refer to as the break-even point, is reached early during training.We demonstrate these effects empirically for a range of deep neural networks applied to multiple different tasks.Finally, we apply our analysis to networks with batch normalization (BN) layers and find that it is necessary to use a high learning rate to achieve loss smoothing effects attributed previously to BN alone.", "target": ["In der frühen Phase des Trainings von tiefen neuronalen Netzen gibt es einen \"Break-even-Point\", der die Eigenschaften der gesamten Optimierungskurve bestimmt.", "In dieser Arbeit wird die Optimierung von tiefen neuronalen Netzen analysiert, indem untersucht wird, wie die Hyperparameter Batch-Größe und Schrittgröße die Lerntrajektorien verändern."]} {"source": "Graph Convolution Network (GCN) has been recognized as one of the most effective graph models for semi-supervised learning, but it extracts merely the first-order or few-order neighborhood information through information propagation, which suffers performance drop-off for deeper structure.Existing approaches that deal with the higher-order neighbors tend to take advantage of adjacency matrix power.In this paper, we assume a seemingly trivial condition that the higher-order neighborhood information may be similar to that of the first-order neighbors.Accordingly, we present an unsupervised approach to describe such similarities and learn the weight matrices of higher-order neighbors automatically through Lasso that minimizes the feature loss between the first-order and higher-order neighbors, based on which we formulate the new convolutional filter for GCN to learn the better node representations.Our model, called higher-order weighted GCN (HWGCN), has achieved the state-of-the-art results on a number of node classification tasks over Cora, Citeseer and Pubmed datasets.", "target": ["Wir schlagen HWGCN vor, um die relevanten Nachbarschaftsinformationen auf verschiedenen Ebenen zu mischen, um die Knotenrepräsentationen besser zu lernen.", "Die Autoren schlagen eine Variante von GCN, HWGCN, vor, die eine Convolution jenseits von 1-Schritt-Nachbarn berücksichtigt und mit modernen Methoden vergleichbar ist."]} {"source": "The performance of deep neural networks is often attributed to their automated, task-related feature construction.It remains an open question, though, why this leads to solutions with good generalization, even in cases where the number of parameters is larger than the number of samples.Back in the 90s, Hochreiter and Schmidhuber observed that flatness of the loss surface around a local minimum correlates with low generalization error.For several flatness measures, this correlation has been empirically validated.However, it has recently been shown that existing measures of flatness cannot theoretically be related to generalization: if a network uses ReLU activations, the network function can be reparameterized without changing its output in such a way that flatness is changed almost arbitrarily.This paper proposes a natural modification of existing flatness measures that results in invariance to reparameterization.The proposed measures imply a robustness of the network to changes in the input and the hidden layers.Connecting this feature robustness to generalization leads to a generalized definition of the representativeness of data.With this, the generalization error of a model trained on representative data can be bounded by its feature robustness 
which depends on our novel flatness measure.", "target": ["Wir führen ein neuartiges Maß für die Flachheit bei lokalen Minima der Verlustfläche tiefer neuronaler Netze ein, das gegenüber schichtweisen Neuparametrisierungen invariant ist, und stellen eine Verbindung zwischen Flachheit und Merkmalsrobustheit und Generalisierung her.", "Die Autoren schlagen einen Begriff der Merkmalsrobustheit vor, der gegenüber einer Neuskalierung der Gewichtung invariant ist, und erörtern die Beziehung des Begriffs zur Generalisierung.", "In diesem Beitrag wird ein Begriff der Merkmalsrobustheit definiert und mit der Epsilon-Repräsentativität einer Funktion kombiniert, um einen Zusammenhang zwischen der Flachheit von Minima und der Generalisierung in tiefen neuronalen Netzen zu beschreiben."]} {"source": "Bayesian methods have been successfully applied to sparsify weights of neural networks and to remove structure units from the networks, e.g.neurons.We apply and further develop this approach for gated recurrent architectures.Specifically, in addition to sparsification of individual weights and neurons, we propose to sparsify preactivations of gates and information flow in LSTM.It makes some gates and information flow components constant, speeds up forward pass and improves compression.Moreover, the resulting structure of gate sparsity is interpretable and depends on the task.", "target": ["Wir schlagen vor, die Voraktivierungen von Gattern und den Informationsfluss im LSTM zu sparsam zu gestalten, um sie konstant zu machen und das Spärlichkeitsniveau der Neuronen zu erhöhen.", "In diesem Papier wird eine Sparsification Methode für rekurrente neuronale Netze vorgeschlagen, bei der Neuronen mit Null-Präaktivierungen eliminiert werden, um kompakte Netze zu erhalten."]} {"source": "Improving the accuracy of numerical methods remains a central challenge in many disciplines and is especially important for nonlinear simulation problems.A representative example of such problems is fluid flow, which has been thoroughly studied to arrive at efficient simulations of complex flow phenomena.This paper presents a data-driven approach that learns to improve the accuracy of numerical solvers.The proposed method utilizes an advanced numerical scheme with a fine simulation resolution to acquire reference data.We, then, employ a neural network that infers a correction to move a coarse thus quickly obtainable result closer to the reference data.We provide insights into the targeted learning problem with different learning approaches: fully supervised learning methods with a naive and an optimized data acquisition as well as an unsupervised learning method with a differentiable Navier-Stokes solver.While our approach is very general and applicable to arbitrary partial differential equation models, we specifically highlight gains in accuracy for fluid flow simulations.", "target": ["Wir stellen einen Ansatz für neuronale Netze zur Unterstützung von Lösern partieller Differentialgleichungen vor.", "Die Autoren zielen darauf ab, die Genauigkeit numerischer Solver zu verbessern, indem sie ein neuronales Netz auf simulierten Referenzdaten trainieren, das den numerischen Solver korrigiert."]} {"source": "A patient’s health information is generally fragmented across silos.Though it is technically feasible to unite data for analysis in a manner that underpins a rapid learning healthcare system, privacy concerns and regulatory barriers limit data centralization.Machine learning can be conducted in a federated manner on patient datasets 
with the same set of variables, but separated across sites of care.But federated learning cannot handle the situation where different data types for a given patient are separated vertically across different organizations.We call methods that enable machine learning model training on data separated by two or more degrees “confederated machine learning.”We built and evaluated a confederated machine learning model to stratify the risk of accidental falls among the elderly.", "target": ["Eine konföderierte Lernmethode, die Modelle aus horizontal und vertikal getrennten medizinischen Daten trainiert.", "Eine \"verbündete\" maschinelle Lernmethode, die über die Grenzen medizinischer Daten hinweg lernt, die sowohl horizontal als auch vertikal getrennt sind."]} {"source": "Existing neural networks are vulnerable to \"adversarial examples\"---created by adding maliciously designed small perturbations in inputs to induce a misclassification by the networks.The most investigated defense strategy is adversarial training which augments training data with adversarial examples.However, applying single-step adversaries in adversarial training does not support the robustness of the networks, instead, they will even make the networks overfit.In contrast to the single-step, multi-step training results in the state-of-the-art performance on MNIST and CIFAR10, yet it needs a massive amount of time.Therefore, we propose a method, Stochastic Quantized Activation (SQA) that solves overfitting problems in single-step adversarial training and quickly achieves robustness comparable to the multi-step approach.SQA attenuates the adversarial effects by providing random selectivity to activation functions and allows the network to learn robustness with only single-step training.Throughout the experiment, our method demonstrates the state-of-the-art robustness against one of the strongest white-box attacks such as PGD training, but with much less computational cost.Finally, we visualize the learning process of the network with SQA to handle strong adversaries, which is different from existing methods.", "target": ["In dieser Arbeit wird eine stochastische quantisierte Aktivierung vorgeschlagen, die das Problem der Überanpassung beim FGSM-Training löst und schnell eine Robustheit erreicht, die mit dem mehrstufigen Training vergleichbar ist.", "In dem Papier wird ein Modell zur Verbesserung des gegnerischen Trainings vorgeschlagen, indem zufällige Störungen in die Aktivierungen einer der verborgenen Schichten eingeführt werden"]} {"source": "Neural activity is highly variable in response to repeated stimuli.We used an open dataset, the Allen Brain Observatory, to quantify the distribution of responses to repeated natural movie presentations.A large fraction of responses are best fit by log-normal distributions or Gaussian mixtures with two components.These distributions are similar to those from units in deep neural networks with dropout.Using a separate set of electrophysiological recordings, we constructed a population coupling model as a control for state-dependent activity fluctuations and found that the model residuals also show non-Gaussian distributions.We then analyzed responses across trials from multiple sections of different movie clips and observed that the noise in cortex aligns better with in-clip versus out-of-clip stimulus variations.We argue that noise is useful for generalization when it moves along representations of different exemplars in-class, similar to the structure of cortical noise.", "target": ["Wir 
untersuchen die Struktur von Störfaktoren im Gehirn und stellen fest, dass es zur Generalisierung beitragen kann, indem es die Repräsentationen entlang der Reizvariationen innerhalb einer Klasse verschiebt."]} {"source": "Unsupervised domain adaptation has received significant attention in recent years.Most existing works tackle the closed-set scenario, assuming that the source and target domains share exactly the same categories.In practice, nevertheless, a target domain often contains samples of classes unseen in source domain (i.e., unknown class).The extension of domain adaptation from closed-set to such open-set situation is not trivial since the target samples in unknown class are not expected to align with the source.In this paper, we address this problem by augmenting the state-of-the-art domain adaptation technique, Self-Ensembling, with category-agnostic clusters in target domain.Specifically, we present Self-Ensembling with Category-agnostic Clusters (SE-CC) --- a novel architecture that steers domain adaptation with the additional guidance of category-agnostic clusters that are specific to target domain.This clustering information provides domain-specific visual cues, facilitating the generalization of Self-Ensembling for both closed-set and open-set scenarios.Technically, clustering is firstly performed over all the unlabeled target samples to obtain the category-agnostic clusters, which reveal the underlying data space structure peculiar to target domain.A clustering branch is capitalized on to ensure that the learnt representation preserves such underlying structure by matching the estimated assignment distribution over clusters to the inherent cluster distribution for each target sample.Furthermore, SE-CC enhances the learnt representation with mutual information maximization.Extensive experiments are conducted on Office and VisDA datasets for both open-set and closed-set domain adaptation, and superior results are reported when comparing to the state-of-the-art approaches.", "target": ["Wir stellen ein neues Design vor, d.h. 
Self-Ensembling mit kategorie-agnostischen Clustern, sowohl für Closed-Set- als auch für Open-Set-Domain-Adaption.", "Ein neuer Ansatz zur Anpassung von offenen Domänen, bei dem die Kategorien der Quelldomäne in den Kategorien der Zieldomäne enthalten sind, um Ausreißerkategorien herauszufiltern und eine Anpassung innerhalb der gemeinsamen Klassen zu ermöglichen."]} {"source": "We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization.Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics.As such, they can be a powerful tool for unsupervised representation learning from video or graph-structured data.We cast training Spectral Inference Networks as a bilevel optimization problem, which allows for online learning of multiple eigenfunctions.We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets.Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators and can discover interpretable representations from video in a fully unsupervised manner.", "target": ["Wir zeigen, wie man spektrale Zerlegungen von linearen Operatoren mit Deep Learning erlernen kann, und nutzen dies für unüberwachtes Lernen ohne generatives Modell.", "Die Autoren schlagen vor, ein Deep Learning Framework zu verwenden, um die Berechnung der größten Eigenvektoren zu lösen.", "In dieser Arbeit wird ein Rahmen zum Erlernen von Eigenfunktionen über einen stochastischen Prozess vorgestellt und vorgeschlagen, die Herausforderung der Berechnung von Eigenfunktionen in einem groß angelegten Kontext durch Annäherung mit Hilfe eines zweiphasigen stochastischen Optimierungsprozesses zu bewältigen."]} {"source": "The Tensor-Train factorization (TTF) is an efficient way to compress large weight matrices of fully-connected layers and recurrent layers in recurrent neural networks (RNNs).However, high Tensor-Train ranks for all the core tensors of parameters need to be element-wise fixed, which results in an unnecessary redundancy of model parameters.This work applies Riemannian stochastic gradient descent (RSGD) to train core tensors of parameters in the Riemannian Manifold before finding vectors of lower Tensor-Train ranks for parameters.The paper first presents the RSGD algorithm with a convergence analysis and then tests it on more advanced Tensor-Train RNNs such as bi-directional GRU/LSTM and Encoder-Decoder RNNs with a Tensor-Train attention model.The experiments on digit recognition and machine translation tasks suggest the effectiveness of the RSGD algorithm for Tensor-Train RNNs.", "target": ["Anwendung des Riemannschen SGD (RSGD) Algorithmus für das Training von Tensor-Train RNNs zur weiteren Reduzierung der Modellparameter.", "Die Arbeit schlägt vor, den Riemannschen stochastischen Gradientenalgorithmus für das Lernen von Tensoren mit niedrigem Rang in tiefen Netzwerken zu verwenden.", "Vorschlagen eines Algorithmus zur Optimierung von neuronalen Netzen, der durch Tensor-Train-Zerlegung parametrisiert ist, basierend auf der Riemannschen Optimierung und Ranganpassung, und entwerfen einer bidirektionale TT-LSTM-Architektur."]} {"source": "In this paper, we consider the problem of learning control policies that optimize areward function while satisfying constraints due to considerations of safety, fairness, 
or other costs.We propose a new algorithm - Projection Based Constrained Policy Optimization (PCPO), an iterative method for optimizing policies in a two-step process - the first step performs an unconstrained update while the second step reconciles the constraint violation by projecting the policy back onto the constraint set.We theoretically analyze PCPO and provide a lower bound on reward improvement, as well as an upper bound on constraint violation for each policy update.We further characterize the convergence of PCPO with projection based on two different metrics - L2 norm and Kullback-Leibler divergence.Our empirical results over several control tasks demonstrate that our algorithm achieves superior performance, averaging more than 3.5 times less constraint violation and around 15% higher reward compared to state-of-the-art methods.", "target": ["Wir schlagen einen neuen Algorithmus vor, der einschränkungsbefriedigende Strategien lernt, und bieten eine theoretische Analyse und empirische Demonstration im Kontext des Reinforcement Learning mit Einschränkungen.", "In diesem Beitrag wird ein Algorithmus zur Optimierung von Richtlinien mit Beschränkungen vorgestellt, der einen zweistufigen Optimierungsprozess verwendet, bei dem Richtlinien, die die Beschränkung nicht erfüllen, in die Beschränkungsmenge zurückprojiziert werden können."]} {"source": "Deep networks face challenges of ensuring their robustness against inputs that cannot be effectively represented by information learned from training data.We attribute this vulnerability to the limitations inherent to activation-based representation.To complement the learned information from activation-based representation, we propose utilizing a gradient-based representation that explicitly focuses on missing information.In addition, we propose a directional constraint on the gradients as an objective during training to improve the characterization of missing information.To validate the effectiveness of the proposed approach, we compare the anomaly detection performance of gradient-based and activation-based representations.We show that the gradient-based representation outperforms the activation-based representation by 0.093 in CIFAR-10 and 0.361 in CURE-TSR datasets in terms of AUROC averaged over all classes.Also, we propose an anomaly detection algorithm that uses the gradient-based representation, denoted as GradCon, and validate its performance on three benchmarking datasets.The proposed method outperforms the majority of the state-of-the-art algorithms in CIFAR-10, MNIST, and fMNIST datasets with an average AUROC of 0.664, 0.973, and 0.934, respectively.", "target": ["Wir schlagen eine gradientenbasierte Darstellung zur Charakterisierung von Informationen vor, die tiefe Netzwerke nicht gelernt haben.", "Die Autoren stellen Darstellungen vor, die auf Gradienten in Bezug auf die Gewichte basieren, um Informationen zu ergänzen, die im Trainingsdatensatz für tiefe Netzwerke fehlen."]} {"source": "Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan setting, machine condition, patients’ characteristics, surrounding environment, etc.However, existing deep learning based artifact reduction methods are restricted by their training set with specific predetermined artifact type and pattern.As such, they have limited clinical adoption.In this paper, we introduce a “Zero-Shot” medical image Artifact Reduction (ZSAR) framework, which leverages the power of deep learning but 
without using general pre-trained networks or any clean image reference.Specifically, we utilize the low internal visual entropy of an image and train a light-weight image-specific artifact reduction network to reduce artifacts in an image at test-time.We use Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) as vehicles to show that ZSAR can reduce artifacts better than state-of-the-art both qualitatively and quantitatively, while using shorter execution time.To the best of our knowledge, this is the first deep learning framework that reduces artifacts in medical images without using a priori training set.", "target": ["Wir führen ein Zero-Shot Framework zur Reduzierung von Artefakten in medizinischen Bildern ein, das die Leistungsfähigkeit von Deep Learning nutzt, ohne jedoch allgemeine vortrainierte Netzwerke oder eine saubere Bildreferenz zu verwenden. "]} {"source": "Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks.For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image.In this work we adapt the information bottleneck concept for attribution.By adding noise to intermediate feature maps we restrict the flow of information and can quantify (in bits) how much information image regions provide.We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings.The method’s information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision.", "target": ["Wir wenden das Konzept des Informationsengpasses auf die Zuordnung an.", "In diesem Beitrag wird eine neuartige, auf Störungen basierende Methode zur Berechnung von Attributions- bzw. 
Ähnlichkeitskarten für Bildklassifizierer auf der Grundlage tiefer neuronaler Netze vorgeschlagen, bei der künstliches Störungen in eine frühe Schicht des Netzes injiziert wird."]} {"source": "Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling.Sparsity is a technique to reduce compute and memory requirements of deep learning models.Sparse RNNs are easier to deploy on devices and high-end server processors.Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms.In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization with pruning to create blocks of weights with zeros.Using these techniques, we can create block-sparse RNNs with sparsity ranging from 80% to 90% with a small loss in accuracy.This technique allows us to reduce the model size by roughly 10x.Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count.Our technique works with a variety of block sizes up to 32x32.Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.", "target": ["Wir zeigen, dass die RNNs gepruned werden können, um eine Blocksparsität zu erzeugen, die eine Beschleunigung für spärliche Operationen auf vorhandener Hardware ermöglicht.", "Die Autoren schlagen einen Block Sparsity Pruning-Ansatz zur Komprimierung von RNNs vor, bei dem Gruppen-LASSO zur Förderung der Sparsity und zum Pruning verwendet wird, allerdings mit einem sehr speziellen Zeitplan für das Pruning und das Pruning-Gewicht."]} {"source": "Value iteration networks are an approximation of the value iteration (VI) algorithm implemented with convolutional neural networks to make VI fully differentiable.In this work, we study these networks in the context of robot motion planning, with a focus on applications to planetary rovers.The key challenging task in learning-based motion planning is to learn a transformation from terrain observations to a suitable navigation reward function.In order to deal with complex terrain observations and policy learning, we propose a value iteration recurrence, referred to as the soft value iteration network (SVIN).SVIN is designed to produce more effective training gradients through the value iteration network.It relies on a soft policy model, where the policy is represented with a probability distribution over all possible actions, rather than a deterministic policy that returns only the best action.We demonstrate the effectiveness of the proposed method in robot motion planning scenarios.In particular, we study the application of SVIN to very challenging problems in planetary rover navigation and present early training results on data gathered by the Curiosity rover that is currently operating on Mars.", "target": ["Wir schlagen eine Verbesserung der Wert-Iterationsnetzwerke vor, mit Anwendungen für die Pfadplanung von Planeten-Rovern.", "In dieser Arbeit wird eine Belohnungsfunktion auf der Grundlage von Expertentrajektorien mit Hilfe eines Value Iteration Module erlernt, um den Planungsschritt differenzierbar zu machen."]} {"source": "Transformer 
networks have led to important progress in language modeling and machine translation.These models include two consecutive modules, a feed-forward layer and a self-attention layer.The latter allows the network to capture long-term dependencies and is often regarded as the key ingredient in the success of Transformers.Building upon this intuition, we propose a new model that solely consists of attention layers.More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role as the feed-forward layer.Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer.Our evaluation shows the benefits brought by our model on standard character and word level language modeling benchmarks.", "target": ["Ein neuartiger Attention Layer, der die Selbstaufmerksamkeit und Feed-Forward-Teilschichten von Transformer-Netzen kombiniert.", "In diesem Beitrag wird eine Modifikation des Transformer-Modells vorgeschlagen, bei der die Aufmerksamkeit über \"persistente\" Speichervektoren in die Selbstaufmerksamkeitsschicht integriert wird, was zu einer Leistung führt, die mit bestehenden Modellen vergleichbar ist, während weniger Parameter benötigt werden."]} {"source": "This work views neural networks as data generating systems and applies anomalous pattern detection techniques on that data in order to detect when a network is processing a group of anomalous inputs. Detecting anomalies is a critical component for multiple machine learning problems including detecting the presence of adversarial noise added to inputs.More broadly, this work is a step towards giving neural networks the ability to detect groups of out-of-distribution samples. This work introduces \"Subset Scanning\" methods from the anomalous pattern detection domain to the task of detecting anomalous inputs to neural networks. Subset Scanning allows us to answer the question: \"Which subset of inputs have larger-than-expected activations at which subset of nodes?\" Framing the adversarial detection problem this way allows us to identify systematic patterns in the activation space that span multiple adversarially noised images. Such images are \"weird together\". Leveraging this common anomalous pattern, we show increased detection power as the proportion of noised images increases in a test set. Detection power and accuracy results are provided for targeted adversarial noise added to CIFAR-10 images on a 20-layer ResNet using the Basic Iterative Method attack.", "target": ["Wir finden effizient eine Untergruppe von Bildern, die höhere Aktivierungen als erwartet für eine Untergruppe von Knoten aufweisen. Diese Bilder erscheinen anomaler und sind leichter zu erkennen, wenn sie als Gruppe betrachtet werden. 
", "In dem Beitrag wird ein Verfahren zur Erkennung anomaler Eingaben vorgeschlagen, das auf einem \"Subset-Scanning\"-Ansatz zur Erkennung anomaler Aktivierungen im Deep Learning Netzwerk basiert."]} {"source": "Stability is a fundamental property of dynamical systems, yet to this date it has had little bearing on the practice of recurrent neural networks.In this work, we conduct a thorough investigation of stable recurrent models.Theoretically, we prove stable recurrent neural networks are well approximated by feed-forward networks for the purpose of both inference and training by gradient descent.Empirically, we demonstrate stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks.Taken together, these findings shed light on the effective power of recurrent networks and suggest much of sequence learning happens, or can be made to happen, in the stable regime.Moreover, our results help to explain why in many cases practitioners succeed in replacing recurrent models by feed-forward models.", "target": ["Stabile rekurrente Modelle können durch Feed Forward Netzwerke approximiert werden und schneiden empirisch genauso gut ab wie instabile Modelle bei Benchmark-Aufgaben.", "Studien zur Stabilität von RNNs und Untersuchung der spektralen Normalisierung für sequenzielle Vorhersagen."]} {"source": "Weight-sharing plays a significant role in the success of many deep neural networks, by increasing memory efficiency and incorporating useful inductive priors about the problem into the network.But understanding how weight-sharing can be used effectively in general is a topic that has not been studied extensively.Chen et al. (2015) proposed HashedNets, which augments a multi-layer perceptron with a hash table, as a method for neural network compression.We generalize this method into a framework (ArbNets) that allows for efficient arbitrary weight-sharing, and use it to study the role of weight-sharing in neural networks.We show that common neural networks can be expressed as ArbNets with different hash functions.We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network.", "target": ["Untersuchung der Rolle der Gewichtsteilung in neuronalen Netzen unter Verwendung von Hash-Funktionen und feststellen, dass eine ausgewogene und deterministische Hash-Funktion die Netzleistung verbessert.", "Vorschlag für ArbNets zur systematischeren Untersuchung der Gewichtsteilung durch Definition der Gewichtsteilungsfunktion als Hash-Funktion."]} {"source": "We introduce Neural Markov Logic Networks (NMLNs), a statistical relational learning system that borrows ideas from Markov logic.Like Markov Logic Networks (MLNs), NMLNs are an exponential-family model for modelling distributions over possible worlds, but unlike MLNs, they do not rely on explicitly specified first-order logic rules.Instead, NMLNs learn an implicit representation of such rules as a neural network that acts as a potential function on fragments of the relational structure.Interestingly, any MLN can be represented as an NMLN.Similarly to recently proposed Neural theorem provers (NTPs) (Rocktaschel at al. 
2017), NMLNs can exploit embeddings of constants but, unlike NTPs, NMLNs work well also in their absence.This is extremely important for predicting in settings other than the transductive one.We showcase the potential of NMLNs on knowledge-base completion tasks and on generation of molecular (graph) data.", "target": ["Wir stellen ein statistisches relationales Lernsystem vor, das Ideen aus der Markov-Logik entlehnt, aber eine implizite Repräsentation von Regeln als neuronales Netz erlernt.", "Das Papier bietet eine Erweiterung der Markov-Logik-Netzwerke, indem es ihre Abhängigkeit von vordefinierten Logikregeln erster Ordnung aufhebt, um mehr Domänen in Wissensdatenbank-Vervollständigungsaufgaben zu modellieren."]} {"source": "Using variational Bayes neural networks, we develop an algorithm capable of accumulating knowledge into a prior from multiple different tasks.This results in a rich prior capable of few-shot learning on new tasks.The posterior can go beyond the mean field approximation and yields good uncertainty on the performed experiments.Analysis on toy tasks show that it can learn from significantly different tasks while finding similarities among them.Experiments on Mini-Imagenet reach state of the art with 74.5% accuracy on 5 shot learning.Finally, we provide two new benchmarks, each showing a failure mode of existing meta learning algorithms such as MAML and prototypical Networks.", "target": ["Eine skalierbare Methode zum Erlernen eines aussagekräftigen Priors für neuronale Netze über mehrere Aufgaben hinweg.", "Die Arbeit stellt eine Methode für die Ausbildung eines probabilistischen Modells für Multitasks Transfer Learning durch die Einführung einer latenten Variable pro Aufgabe, um die Gemeinsamkeit in den Aufgaben Instanzen zu erfassen.", "In der Arbeit wird ein Variationsansatz für das Meta-Lernen vorgeschlagen, der latente Variablen verwendet, die aufgabenspezifischen Datensätzen entsprechen.", "Ziel ist es, einen Prior über neuronale Netze für mehrere Aufgaben zu lernen. 
"]} {"source": "Sequential data often originates from diverse environments.Across them exist both shared regularities and environment specifics.To learn robust cross-environment descriptions of sequences we introduce disentangled state space models (DSSM).In the latent space of DSSM environment-invariant state dynamics is explicitly disentangled from environment-specific information governing that dynamics.We empirically show that such separation enables robust prediction, sequence manipulation and environment characterization.We also propose an unsupervised VAE-based training procedure to learn DSSM as Bayesian filters.In our experiments, we demonstrate state-of-the-art performance in controlled generation and prediction of bouncing ball video sequences across varying gravitational influences.", "target": ["Entkoppelte Zustandsraummodelle.", "Die Arbeit stellt ein generatives Zustandsraummodell vor, das eine globale latente Variable E verwendet, um umweltspezifische Informationen zu erfassen."]} {"source": "In this work, we approach one-shot and few-shot learning problems as methods for finding good prototypes for each class, where these prototypes are generalizable to new data samples and classes.We propose a metric learner that learns a Bregman divergence by learning its underlying convex function.Bregman divergences are a good candidate for this framework given they are the only class of divergences with the property that the best representative of a set of points is given by its mean.We propose a flexible extension to prototypical networks to enable joint learning of the embedding and the divergence, while preserving computational efficiency.Our preliminary results are comparable with the prior work on the Omniglot and Mini-ImageNet datasets, two standard benchmarks for one-shot and few-shot learning.We argue that our model can be used for other tasks that involve metric learning or tasks that require approximate convexity such as structured prediction and data completion.", "target": ["Bregman Divergenz Lernen für Few-Shot Learning. "]} {"source": "Motivated by the flexibility of biological neural networks whose connectivity structure changes significantly during their lifetime,we introduce the Unrestricted Recursive Network (URN) and demonstrate that it can exhibit similar flexibility during training via gradient descent.We show empirically that many of the different neural network structures commonly used in practice today (including fully connected, locally connected and residual networks of differ-ent depths and widths) can emerge dynamically from the same URN.These different structures can be derived using gradient descent on a single general loss function where the structure of the data and the relative strengths of various regulator terms determine the structure of the emergent network.We show that this loss function and the regulators arise naturally when considering the symmetries of the network as well as the geometric properties of the input data.", "target": ["Wir stellen einen Netzwerkrahmen vor, der seine Struktur während des Trainings verändern kann, und zeigen, dass er zu verschiedenen ML-Netzwerk-Archetypen wie MLPs und LCNs konvergieren kann. 
"]} {"source": "We present CROSSGRAD , a method to use multi-domain training data to learn a classifier that generalizes to new domains.CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain.Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training.In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains.We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances.CROSSGRAD jointly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives.This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions.Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and (2) data augmentation is a more stable and accurate method than domain adversarial training.", "target": ["Die bereichsbezogene Datenerweiterung bietet eine robuste und stabile Methode der Bereichsgeneralisierung", "In diesem Artikel wird ein Ansatz zur Generalisierung von Domänen durch domänenabhängige Datenerweiterung vorgeschlagen", "Die Autoren stellen die CrossGrad-Methode vor, die sowohl eine Label-Klassifizierungsaufgabe als auch eine Domänen-Klassifizierungsaufgabe trainiert."]} {"source": "We present sketch-rnn, a recurrent neural network able to construct stroke-based drawings of common objects.The model is trained on a dataset of human-drawn images representing many different classes.We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format.", "target": ["Wir untersuchen Alternativen zu traditionellen Pixelbildmodellierungsansätzen und schlagen ein generatives Modell für Vektorbilder vor.", "In diesem Beitrag wird eine neuronale Netzwerkarchitektur zur Erzeugung von Skizzenzeichnungen vorgestellt, die sich an den variationalen Autoencoder anlehnt."]} {"source": "Wilson et al. (2017) showed that, when the stepsize schedule is properly designed, stochastic gradient generalizes better than ADAM (Kingma & Ba, 2014).In light of recent work on hypergradient methods (Baydin et al., 2018), we revisit these claims to see if such methods close the gap between the most popular optimizers.As a byproduct, we analyze the true benefit of these hypergradient methods compared to more classical schedules, such as the fixed decay of Wilson et al. (2017).In particular, we observe they are of marginal help since their performance varies significantly when tuning their hyperparameters.Finally, as robustness is a critical quality of an optimizer, we provide a sensitivity analysis of these gradient based optimizers to assess how challenging their tuning is.", "target": ["Wir stellen eine Studie vor, die versucht zu sehen, wie die jüngste Online-Lernratenanpassung die Schlussfolgerung von Wilson et al. 
2018 über adaptive Gradientenmethoden erweitert, zusammen mit einem Vergleich und einer Sensitivitätsanalyse.", "Berichtet über die Ergebnisse von Tests verschiedener Methoden zur Anpassung der Schrittweite, einschließlich einfachem SGD, SGD mit Nesterov-Momentum und ADAM, und vergleicht diese Methoden mit und ohne Hypergradient. "]} {"source": "Despite an ever growing literature on reinforcement learning algorithms and applications, much less is known about their statistical inference.In this paper, we investigate the large-sample behaviors of the Q-value estimates with closed-form characterizations of the asymptotic variances.This allows us to efficiently construct confidence regions for Q-value and optimal value functions, and to develop policies to minimize their estimation errors.This also leads to a policy exploration strategy that relies on estimating the relative discrepancies among the Q estimates.Numerical experiments show superior performance of our exploration strategy over other benchmark approaches.", "target": ["Wir untersuchen das Verhalten der Q-Wert-Schätzungen bei großen Stichproben und schlagen eine effiziente Erkundungsstrategie vor, die sich auf die Schätzung der relativen Diskrepanzen zwischen den Q-Schätzungen stützt. ", "In diesem Beitrag wird ein reiner Explorationsalgorithmus für das Reinforcement Learning vorgestellt, der auf einer asymptotischen Analyse der Q-Werte und ihrer Konvergenz zur zentralen Grenzverteilung basiert und damit bessere Ergebnisse als die Benchmark-Explorationsalgorithmen erzielt."]} {"source": "We perform completely unsupervised one-sided image to image translation between a source domain $X$ and a target domain $Y$ such that we preserve relevant underlying shared semantics (e.g., class, size, shape, etc.). In particular, we are interested in a more difficult case than those typically addressed in the literature, where the source and target are \"far\" enough that reconstruction-style or pixel-wise approaches fail.We argue that transferring (i.e., \\emph{translating}) said relevant information should involve both discarding source domain-specific information and incorporating target domain-specific information, the latter of which we model with a noisy prior distribution. In order to avoid the degenerate case where the generated samples are only explained by the prior distribution, we propose to minimize an estimate of the mutual information between the generated sample and the sample from the prior distribution.We discover that the architectural choices are an important factor to consider in order to preserve the shared semantic between $X$ and $Y$. We show state of the art results on the MNIST to SVHN task for unsupervised image to image translation.", "target": ["Wir trainieren ein Bild-zu-Bild-Übersetzungsnetzwerk, das als Eingabe das Quellbild und eine Probe aus einer vorherigen Verteilung nimmt, um eine Probe aus der Zielverteilung zu erzeugen", "Diese Arbeit formalisiert das Problem der unbeaufsichtigten Übersetzung und schlägt ein erweitertes GAN Framework vor, das die gegenseitige Information nutzt, um den degenerierten Fall zu vermeiden.", "In dieser Arbeit wird das Problem der unbeaufsichtigten Eins-zu-Viele-Bild-Übersetzung formuliert und das Problem durch Minimierung der gegenseitigen Information gelöst. 
"]} {"source": "Identifying salient points in images is a crucial component for visual odometry, Structure-from-Motion or SLAM algorithms.Recently, several learned keypoint methods have demonstrated compelling performance on challenging benchmarks. However, generating consistent and accurate training data for interest-point detection in natural images still remains challenging, especially for human annotators.We introduce IO-Net (i.e. InlierOutlierNet), a novel proxy task for the self-supervision of keypoint detection, description and matching.By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that are able to simultaneously self-supervise keypoint description and improve keypoint matching.Second, we introduce KeyPointNet, a keypoint-network architecture that is especially amenable to robust keypoint detection and description.We design the network to allow local keypoint aggregation to avoid artifacts due to spatial discretizations commonly used for this task, and we improve fine-grained keypoint descriptor performance by taking advantage of efficient sub-pixel convolutions to upsample the descriptor feature-maps to a higher operating resolution.Through extensive experiments and ablative analysis, we show that the proposed self-supervised keypoint learning method greatly improves the quality of feature matching and homography estimation on challenging benchmarks over the state-of-the-art.", "target": ["Lernen, um unterscheidbare Schlüsselpunkte aus einer Proxy-Aufgabe zu extrahieren, Zurückweisung von Ausreißern.", "Dieser Beitrag widmet sich dem selbstüberwachten Lernen von lokalen Merkmalen unter Verwendung von Neural Guided RANSAC als zusätzlicher Hilfsverlustlieferant zur Verbesserung der Deskriptorinterpolation."]} {"source": "We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks, which are tasks where multiple agents must work together to achieve a goal they could not individually.Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.Thus, we propose to incentivize agents to take (joint) actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent.We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy.While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken.We validate our approach in robotic bimanual manipulation tasks with sparse rewards; we find that our approach yields more efficient learning than both1) training with only the sparse reward and2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior.Videos are available on the project webpage: https://sites.google.com/view/iclr2020-synergistic.", "target": ["Wir schlagen eine Formulierung der intrinsischen Motivation vor, die sich als Explorationsverzerrung in synergetischen Multi-Agenten-Aufgaben mit geringer Belohnung eignet, indem sie die Agenten dazu ermutigt, die Welt auf eine Weise zu beeinflussen, die nicht möglich wäre, wenn sie einzeln handeln würden.", "Der Beitrag konzentriert sich auf die Nutzung intrinsischer Motivation 
zur Verbesserung des Explorationsprozesses von Agenten mit Reinforcement Learning bei Aufgaben, die von mehreren Agenten zu bewältigen sind."]} {"source": "A general graph-structured neural network architecture operates on graphs through two core components: (1) complex enough message functions; (2) a fixed information aggregation process.In this paper, we present the Policy Message Passing algorithm, which takes a probabilistic perspective and reformulates the whole information aggregation as stochastic sequential processes.The algorithm works on a much larger search space, utilizes reasoning history to perform inference, and is robust to noisy edges.We apply our algorithm to multiple complex graph reasoning and prediction tasks and show that our algorithm consistently outperforms state-of-the-art graph-structured models by a significant margin.", "target": ["Ein probabilistischer Inferenzalgorithmus auf der Grundlage eines neuronalen Netzes für graphisch-strukturierte Modelle.", "Diese Arbeit stellt Policy Message Passing vor, ein neuronales Graphen-Netzwerk mit einem Inferenz-Mechanismus, der den Kanten auf rekurrente Weise Nachrichten zuweist, was auf eine konkurrenzfähige Leistung bei visuellen Argumentationsaufgaben hindeutet."]} {"source": "Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts.Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without finding the right balance between tasks.We present a novel gradient normalization (GradNorm) technique which automatically balances the multitask loss function by directly tuning the gradients to equalize task training rates.We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting over single networks, static baselines, and other adaptive multitask loss balancing techniques.GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter $\\alpha$.Thus, what was once a tedious search process which incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks.Ultimately, we hope to demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.", "target": ["Wir zeigen, wie man die Leistung eines Multitasking-Netzes steigern kann, indem man eine adaptive Multitasking Verlustfunktion einstellt, die durch direkten Ausgleich der Netzgradienten gelernt wird.", "In dieser Arbeit wird ein dynamisches Aktualisierungsschema für die Gewichte vorgeschlagen, das die Gewichte für verschiedene Aufgabenverluste während der Trainingszeit aktualisiert, indem es die Verlustquoten der verschiedenen Aufgaben nutzt."]} {"source": "Image segmentation aims at grouping pixels that belong to the same object or region.At the heart of image segmentation lies the problem of determining whether a pixel is inside or outside a region, which we denote as the \"insideness\" problem.Many Deep Neural Networks (DNNs) variants excel in segmentation benchmarks, but regarding insideness, they have not been well visualized or understood: What representations do DNNs use to 
address the long-range relationships of insideness?How do architectural choices affect the learning of these representations?In this paper, we take the reductionist approach by analyzing DNNs solving the insideness problem in isolation, i.e. determining the inside of closed (Jordan) curves.We demonstrate analytically that state-of-the-art feed-forward and recurrent architectures can implement solutions of the insideness problem for any given curve.Yet, only recurrent networks could learn these general solutions when the training enforced a specific \"routine\" capable of breaking down the long-range relationships.Our results highlight the need for new training strategies that decompose the learning into appropriate stages, and that lead to the general class of solutions necessary for DNNs to understand insideness.", "target": ["DNNs für die Bildsegmentierung können Lösungen für das Insideness-Problem implementieren, aber nur einige rekurrente Netze können sie mit einer bestimmten Art von Überwachung lernen.", "In diesem Beitrag wird die Innensicht zur Untersuchung der semantischen Segmentierung in der Ära des Deep Learning eingeführt, und die Ergebnisse können dazu beitragen, dass Modelle besser verallgemeinert werden."]} {"source": "We address the challenging problem of deep representation learning--the efficient adaptation of a pre-trained deep network to different tasks.Specifically, we propose to explore gradient-based features.These features are gradients of the model parameters with respect to a task-specific loss given an input sample.Our key innovation is the design of a linear model that incorporates both gradient features and the activation of the network.We show that our model provides a local linear approximation to an underlying deep model, and discuss important theoretical insights.Moreover, we present an efficient algorithm for the training and inference of our model without computing the actual gradients.Our method is evaluated across a number of representation learning tasks on several datasets and using different network architectures.We demonstrate strong results in all settings.And our results are well-aligned with our theoretical insights.", "target": ["Ausgehend von einem trainierten Modell untersuchten wir die Gradienten der Modellparameter pro Stichprobe im Verhältnis zu einem aufgabenspezifischen Verlust und konstruierten ein lineares Modell, das die Gradienten der Modellparameter und die Aktivierung des Modells kombiniert.", "In diesem Beitrag wird vorgeschlagen, die Gradienten bestimmter Schichten von Convolutional Networks als Merkmale in einem linearisierten Modell für Transferlernen und schnelle Anpassung zu verwenden."]} {"source": "Recovering 3D geometry shape, albedo and lighting from a single image has wide applications in many areas, which is also a typical ill-posed problem.In order to eliminate the ambiguity, face prior knowledge like linear 3D morphable models (3DMM) learned from limited scan data is often adopted in the reconstruction process.However, methods based on linear parametric models cannot generalize well for facial images in the wild with various ages, ethnicity, expressions, poses, and lightings.Recent methods aim to learn a nonlinear parametric model using convolutional neural networks (CNN) to regress the face shape and texture directly.However, the models were only trained on a dataset that is generated from a linear 3DMM.Moreover, the identity and expression representations are entangled in these models, which hinders many facial editing 
applications.In this paper, we train our model with adversarial loss in a semi-supervised manner on hybrid batches of unlabeled and labeled face images to exploit the value of large amounts of unlabeled face images from unconstrained photo collections.A novel center loss is introduced to make sure that different facial images from the same person have the same identity shape and albedo.Besides, our proposed model disentangles identity, expression, pose, and lighting representations, which improves the overall reconstruction performance and facilitates facial editing applications, e.g., expression transfer.Comprehensive experiments demonstrate that our model produces high-quality reconstruction compared to state-of-the-art methods and is robust to various expression, pose, and lighting conditions.", "target": ["Wir trainieren unser Gesichtsrekonstruktionsmodell mit adversarialem Verlust in halbüberwachter Weise auf hybriden Batches von unbeschrifteten und beschrifteten Gesichtsbildern, um den Wert großer Mengen unbeschrifteter Gesichtsbilder aus unbeschränkten Fotosammlungen zu nutzen.", "In dieser Arbeit wird ein halbüberwachtes und kontradiktorisches Trainingsverfahren vorgeschlagen, um nichtlineare entwirrte Repräsentationen von Gesichtsbildern mit Verlustfunktionen genau zu bestimmen und so die beste Leistung bei der Gesichtsrekonstruktion zu erzielen."]} {"source": "Human conversations naturally evolve around related entities and connected concepts, while may also shift from topic to topic.This paper presents ConceptFlow, which leverages commonsense knowledge graphs to explicitly model such conversation flows for better conversation response generation.ConceptFlow grounds the conversation inputs to the latent concept space and represents the potential conversation flow as a concept flow along the commonsense relations.The concept is guided by a graph attention mechanism that models the possibility of the conversation evolving towards different concepts.The conversation response is then decoded using the encodings of both utterance texts and concept flows, integrating the learned conversation structure in the concept space.Our experiments on Reddit conversations demonstrate the advantage of ConceptFlow over previous commonsense aware dialog models and fine-tuned GPT-2 models, while using much fewer parameters but with explicit modeling of conversation structures.", "target": ["In diesem Beitrag wird ConceptFlow vorgestellt, das den Gesprächsfluss explizit in einem Commonsense-Wissensgraphen modelliert, um eine bessere Gesprächsgenerierung zu ermöglichen.", "Die Arbeit schlägt ein System für die Generierung einer Single-Turn-Antwort auf eine gepostete Äußerung in einer offenen Domäne Dialog Einstellung mit der Diffiusion in die Nachbarn der geerdeten Konzepte."]} {"source": "Biological neural networks face homeostatic and resource constraints that restrict the allowed configurations of connection weights.If a constraint is tight it defines a very small solution space, and the size of these constraint spaces determines their potential overlap with the solutions for computational tasks.We study the geometry of the solution spaces for constraints on neurons' total synaptic weight and on individual synaptic weights, characterizing the connection degrees (numbers of partners) that maximize the size of these solution spaces.We then hypothesize that the size of constraints' solution spaces could serve as a cost function governing neural circuit development.We develop analytical approximations 
and bounds for the model evidence of the maximum entropy degree distributions under these cost functions.We test these on a published electron microscopic connectome of an associative learning center in the fly brain, finding evidence for a developmental progression in circuit structure.", "target": ["Wir untersuchen die Hypothese, dass die Entropie von Lösungsräumen für Beschränkungen der synaptischen Gewichte (die \"Flexibilität\" der Beschränkung) als Kostenfunktion für die Entwicklung neuronaler Schaltkreise dienen könnte."]} {"source": "In this preliminary work, we study the generalization properties of infinite ensembles of infinitely-wide neural networks. Amazingly, this model family admits tractable calculations for many information-theoretic quantities. We report analytical and empirical investigations in the search for signals that correlate with generalization.", "target": ["Unendliche Ensembles von unendlich großen neuronalen Netzen sind aus informationstheoretischer Sicht eine interessante Modellfamilie."]} {"source": "Learning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks.There are two main paradigms for learning such representations: (1) alignment, which maps different independently trained monolingual representations into a shared space, and (2) joint training, which directly learns unified multilingual representations using monolingual and cross-lingual objectives jointly.In this paper, we first conduct direct comparisons of representations learned using both of these methods across diverse cross-lingual tasks.Our empirical results reveal a set of pros and cons for both methods, and show that the relative performance of alignment versus joint training is task-dependent.Stemming from this analysis, we propose a simple and novel framework that combines these two previously mutually-exclusive approaches.Extensive experiments on various tasks demonstrate that our proposed framework alleviates limitations of both approaches, and outperforms existing methods on the MUSE bilingual lexicon induction (BLI) benchmark.We further show that our proposed framework can generalize to contextualized representations and achieves state-of-the-art results on the CoNLL cross-lingual NER benchmark.", "target": ["Wir führen eine vergleichende Studie über sprachübergreifendes Alignment und gemeinsame Trainingsmethoden durch und vereinen diese beiden bisher exklusiven Paradigmen in einem neuen Rahmen. ", "Dieser Beitrag vergleicht Ansätze zur zweisprachigen Lexikoninduktion und zeigt, welche Methode bei Lexikon, Induktion, NER- und MT-Aufgaben besser abschneidet."]} {"source": "Large number of weights in deep neural networks make the models difficult to be deployed in low memory environments such as, mobile phones, IOT edge devices as well as \"inferencing as a service\" environments on the cloud. Prior work has considered reduction in the size of the models, through compression techniques like weight pruning, filter pruning, etc. 
or through low-rank decomposition of the convolution layers.In this paper, we demonstrate the use of multiple techniques to achieve not only higher model compression but also reduce the compute resources required during inferencing.We do filter pruning followed by low-rank decomposition using Tucker decomposition for model compression.We show that our approach achieves upto 57\\% higher model compression when compared to either Tucker Decomposition or Filter pruning alone at similar accuracy for GoogleNet.Also, it reduces the Flops by upto 48\\% thereby making the inferencing faster.", "target": ["Kombination orthogonaler Modellkomprimierungstechniken, um die Modellgröße und die Anzahl der bei der Inferenzierung erforderlichen Flops erheblich zu verringern.", "In dieser Arbeit wird eine Kombination aus Tucker Zerlegung und Filter Pruning vorgeschlagen."]} {"source": "We review the limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries, and introduce JAUNE: a set of criteria for what a good metric should behave like and propose concrete ways to use recent Transformers-based Language Models to assess reference summaries against hypothesis summaries.", "target": ["Einführen von JAUNE, einer Methode die BLEU- und ROUGE-Punkte durch multidimensionale, modellbasierte Evaluatoren zur Bewertung von Zusammenfassungen ersetzt.", "In diesem Beitrag wird eine neue JAUNE-Metrik für die Bewertung von maschinellen Übersetzungs- und Textzusammenfassungssystemen vorgeschlagen, die zeigt, dass ihr Modell besser mit den tatsächlichen Ähnlichkeitsbezeichnungen übereinstimmt als BLEU."]} {"source": "This paper presents a new Graph Neural Network (GNN) type using feature-wise linear modulation (FiLM).Many standard GNN variants propagate information along the edges of a graph by computing ``messages'' based only on the representation of the source of each edge.In GNN-FiLM, the representation of the target node of an edge is additionally used to compute a transformation that can be applied to all incoming messages, allowing feature-wise modulation of the passed information.Results of experiments comparing different GNN architectures on three tasks from the literature are presented, based on re-implementations of baseline methods.Hyperparameters for all methods were found using extensive search, yielding somewhat surprising results: differences between baseline models are smaller than reported in the literature.Nonetheless, GNN-FiLM outperforms baseline methods on a regression task on molecular graphs and performs competitively on other tasks.", "target": ["Neuer GNN-Formalismus mit umfangreichen Experimenten; die Unterschiede zwischen GGNN/GCN/GAT sind geringer als angenommen.", "Die Arbeit schlägt eine neue Graph Neural Network Architektur vor, die eine merkmalsweise lineare Modulation verwendet, um die Weiterleitung von Nachrichten von der Quelle zum Zielknoten auf der Grundlage der Darstellung des Zielknotens zu konditionieren."]} {"source": "To deal simultaneously with both, the attributed network embedding and clustering, we propose a new model.It exploits both content and structure information, capitalising on their simultaneous use.The proposed model relies on the approximation of the relaxed continuous embedding solution by the true discrete clustering one.Thereby, we show that incorporating an embedding representation provides simpler and more interpretable solutions.Experiment results demonstrate that the proposed algorithm performs 
better, in terms of clustering and embedding, than the state-of-the-art algorithms, including deep learning methods devoted to similar tasks for attributed network datasets with different properties.", "target": ["In diesem Beitrag wird ein neuartiger Matrixzerlegungsrahmen für die gleichzeitige Einbettung und Clustering von Netzwerkdaten vorgeschlagen.", "In dieser Arbeit wird ein Algorithmus vorgeschlagen, der die Einbettung von Attributen in ein Netzwerk und das Clustering gemeinsam durchführt."]} {"source": "We propose a learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis.The goal of our method is to generate photo-realistic re-renderings of reconstructed objects for virtual and augmented reality applications (e.g., virtual showrooms, virtual tours and sightseeing, the digital inspection of historical artifacts).A core component of our work is the handling of view-dependent effects.Specifically, we directly train an object-specific deep neural network to synthesize the view-dependent appearance of an object.As input data we are using an RGB video of the object.This video is used to reconstruct a proxy geometry of the object via multi-view stereo.Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.This warping assumes diffuse surfaces; in the case of view-dependent effects, such as specular highlights, it leads to artifacts.To this end, we propose EffectsNet, a deep neural network that predicts view-dependent effects.Based on these estimations, we are able to convert observed images to diffuse images.These diffuse images can be projected into other views.In the target view, our pipeline reinserts the new view-dependent effects.To composite multiple reprojected images to a final output, we learn a composition network that outputs photo-realistic results.Using this image-guided approach, the network does not have to allocate capacity on \"remembering\" object appearance; instead, it learns how to combine the appearance of captured images.We demonstrate the effectiveness of our approach both qualitatively and quantitatively on synthetic as well as on real data.", "target": ["Wir schlagen eine erlernte bildgesteuerte Rendering-Technik vor, die die Vorteile von bildbasiertem Rendering und GAN-basierter Bildsynthese kombiniert und gleichzeitig sichtabhängige Effekte berücksichtigt.", "In diesem Beitrag wird eine Methode zur Behandlung von sichtabhängigen Effekten beim neuronalen Rendering vorgeschlagen, die die Robustheit bestehender neuronaler Renderingmethoden verbessert."]} {"source": "We evaluate the distribution learning capabilities of generative adversarial networks by testing them on synthetic datasets.The datasets include common distributions of points in $R^n$ space and images containing polygons of various shapes and sizes.We find that by and large GANs fail to faithfully recreate point datasets which contain discontinuous support or sharp bends with noise.Additionally, on image datasets, we find that GANs do not seem to learn to count the number of objects of the same kind in an image.We also highlight the apparent tension between generalization and learning in GANs.", "target": ["GANs werden anhand synthetischer Datensätze bewertet."]} {"source": "This paper proposes a new approach for step size adaptation in gradient methods.The proposed method, called step size optimization (SSO), formulates the step size adaptation as an optimization 
problem which minimizes the loss function with respect to the step size for the given model parameters and gradients.Then, the step size is optimized based on alternating direction method of multipliers (ADMM).SSO does not require the second-order information or any probabilistic models for adapting the step size, so it is efficient and easy to implement.Furthermore, we also introduce stochastic SSO for stochastic learning environments.In the experiments, we integrated SSO to vanilla SGD and Adam, and they outperformed state-of-the-art adaptive gradient methods including RMSProp, Adam, L4-Adam, and AdaBound on extensive benchmark datasets.", "target": ["Wir schlagen ein effizientes und effektives Verfahren zur Anpassung der Schrittweite für die Gradientenmethoden vor.", "Eine neue Schrittgrößenanpassung in Gradientenmethoden erster Ordnung, die ein neues Optimierungsproblem mit der Erweiterung der Verlustfunktion erster Ordnung und Regularisierung aufstellt, wobei die Schrittgröße als Variable behandelt wird."]} {"source": "Despite the fact that generative models are extremely successful in practice, the theory underlying this phenomenon is only starting to catch up with practice.In this work we address the question of the universality of generative models: is it true that neural networks can approximate any data manifold arbitrarily well?We provide a positive answer to this question and show that under mild assumptions on the activation function one can always find a feedforward neural network that maps the latent space onto a set located within the specified Hausdorff distance from the desired data manifold.We also prove similar theorems for the case of multiclass generative models and cycle generative models, trained to map samples from one manifold to another and vice versa.", "target": ["Wir haben festgestellt, dass eine große Klasse von Vielfältigkeiten durch ReLU- und Sigmoid-Netzwerke mit beliebiger Genauigkeit erzeugt werden kann.", "Diese Arbeit bietet bestimmte grundlegende Garantien dafür, wann Vielfältigkeiten als das Bild einer Karte geschrieben werden können, die durch ein neuronales Netz approximiert wird, und verbindet Theoreme aus der Mannigfaltigkeitsgeometrie und universelle Standard-Approximationsergebnisse.", "In diesem Beitrag wird theoretisch gezeigt, dass generative Modelle, die auf neuronalen Netzen basieren, Datenmannigfaltigkeiten annähern können, und es wird nachgewiesen, dass neuronale Netze unter milden Annahmen einen latenten Raum auf eine Menge abbilden können, die der gegebenen Datenmannigfaltigkeit innerhalb einer kleinen Hausdorff-Distanz nahe kommt."]} {"source": "Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL.However, the theoretical understanding of such methods has been rather limited.This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees.We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward.The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model.The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification.Instantiating our framework with simplification gives a 
variant of model-based RL algorithms, Stochastic Lower Bounds Optimization (SLBO).Experiments demonstrate that SLBO achieves the state-of-the-art performance when only 1M or fewer samples are permitted on a range of continuous control benchmark tasks.", "target": ["Wir entwerfen modellbasierte Reinforcement Learning Algorithmen mit theoretischen Garantien und erzielen die besten Ergebnisse bei MuJoCo-Benchmark-Aufgaben, wenn eine Million oder weniger Beispiele zulässig sind.", "In der Arbeit wird ein Rahmen für die Entwicklung modellbasierter RL-Algorithmen auf der Grundlage von OFU vorgeschlagen, die eine SOTA-Leistung bei MuJoCo-Aufgaben erreichen."]} {"source": "We study the use of knowledge distillation to compress the U-net architecture.We show that, while standard distillation is not sufficient to reliably train a compressed U-net, introducing other regularization methods, such as batch normalization and class re-weighting, in knowledge distillation significantly improves the training process.This allows us to compress a U-net by over 1000x, i.e., to 0.1% of its original number of parameters, at a negligible decrease in performance.", "target": ["Wir stellen zusätzliche Techniken zur Wissensdestillation vor, um das U-Netz um mehr als das 1000-fache zu komprimieren.", "Die Autoren führten eine modifizierte Destillationsstrategie ein, um eine U-Netz-Architektur um mehr als das 1000-fache zu komprimieren und dabei eine Genauigkeit beizubehalten, die der des ursprünglichen U-Netzes nahe kommt."]} {"source": "Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks.This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks, which both limit the scalability of the task sequence, orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM), changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network.This requires calculating the Hessian around a mode, which makes learning tractable.In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task.Our experiments show that on a variety of well-known task sequences we either significantly outperform or are on par with previous work.", "target": ["Diese Arbeit bietet einen Ansatz zur Bekämpfung des katastrophalen Vergessens durch hessianfreie Krümmungsschätzungen.", "Die Arbeit schlägt eine ungefähre Laplace-Methode für das Training neuronaler Netze im Rahmen des kontinuierlichen Lernens mit einer geringen Raumkomplexität vor."]} {"source": "There has been recent interest in improving performance of simple models for multiple reasons such as interpretability, robust learning from small data, deployment in memory constrained settings as well as environmental considerations.In this paper, we propose a novel method SRatio that can utilize information from high performing complex models (viz.
deep neural networks, boosted trees, random forests) to reweight a training dataset for a potentially low performing simple model such as a decision tree or a shallow network, enhancing its performance.Our method also leverages the per-sample hardness estimate of the simple model, which is not the case with prior works that primarily consider the complex model's confidences/predictions, and is thus conceptually novel.Moreover, we generalize and formalize the concept of attaching probes to intermediate layers of a neural network, which was one of the main ideas in previous work \\citep{profweight}, to other commonly used classifiers and incorporate this into our method.The benefit of these contributions is witnessed in the experiments where on 6 UCI datasets and CIFAR-10 we outperform competitors in a majority (16 out of 27) of the cases and tie for best performance in the remaining cases.In fact, in a couple of cases, we even approach the complex model's performance.We also conduct further experiments to validate assertions and intuitively understand why our method works.Theoretically, we motivate our approach by showing that the weighted loss minimized by simple models using our weighting upper bounds the loss of the complex model.", "target": ["Methode zur Verbesserung der Leistung einfacher Modelle anhand eines (genauen) komplexen Modells.", "In der Arbeit wird eine Methode zur Verbesserung der Vorhersagen eines Modells mit geringer Kapazität vorgeschlagen, die Vorteile gegenüber bestehenden Ansätzen aufweist."]} {"source": "We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels.Our method produces a sequence of feature maps, iteratively refining the SVM margin.We provide rigorous guarantees for optimality and generalization, interpreting our algorithm as online equilibrium-finding dynamics in a certain two-player min-max game.Evaluations on synthetic and real-world datasets demonstrate scalability and consistent improvements over related random features-based methods.", "target": ["Ein einfacher und praktischer Algorithmus zum Erlernen eines margenmaximierenden translationsinvarianten oder sphärisch symmetrischen Kernels aus Trainingsdaten, unter Verwendung von Werkzeugen der Fourier-Analyse und der Regret-Minimierung.", "Die Arbeit schlägt vor, einen benutzerdefinierten übersetzungs- oder drehungsinvarianten Kern in der Fourier-Darstellung zu lernen, um die Marge der SVM zu maximieren.", "Die Autoren schlagen einen interessanten Algorithmus für das gemeinsame Lernen des l1-SVM und des Fourier-Kernels vor.", "Die Autoren betrachten das Erlernen direkter Fourier-Darstellungen von verschiebungs- und translationsinvarianten Kerneln für Anwendungen des maschinellen Lernens, wobei die Ausrichtung des Kernels auf die Daten die zu optimierende Zielfunktion darstellt."]} {"source": "We elaborate on using importance sampling for causal reasoning, in particular for counterfactual inference.We show how this can be implemented natively in probabilistic programming.By considering the structure of the counterfactual query, one can significantly optimise the inference process.We also consider design choices to enable further optimisations.We introduce MultiVerse, a probabilistic programming prototype engine for approximate causal reasoning.We provide experimental results and compare with Pyro, an existing probabilistic programming framework with some causal reasoning tools.", "target":
["Probabilistische Programmierung, die kausale, kontrafaktische Inferenz unterstützt."]} {"source": "We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space.A discrete time mean field game (MFG) is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions.We achieve a synthesis of MFG and Markov decision processes (MDP) by showing that a special MFG is reducible to an MDP.This enables us to broaden the scope of mean field game theory and infer MFG models of large real-world systems via deep inverse reinforcement learning.Our method learns both the reward function and forward dynamics of an MFG from real data, and we report the first empirical test of a mean field game model of a real-world social media population.", "target": ["Inferenz eines Mean Field Game (MFG)-Modells für das Verhalten großer Populationen durch eine Synthese von MFG und Markov-Entscheidungsprozessen.", "Die Autoren befassen sich mit der Inferenz in Modellen für kollektives Verhalten, indem sie inverses Reinforcement Learning verwenden, um die Belohnungsfunktionen der Agenten im Modell zu lernen."]} {"source": "We study the problem of training sequential generative models for capturing coordinated multi-agent trajectory behavior, such as offensive basketball gameplay. When modeling such settings, it is often beneficial to design hierarchical models that can capture long-term coordination using intermediate variables. Furthermore, these intermediate variables should capture interesting high-level behavioral semantics in an interpretable and manipulable way.We present a hierarchical framework that can effectively learn such sequential generative models. 
Our approach is inspired by recent work on leveraging programmatically produced weak labels, which we extend to the spatiotemporal regime.In addition to synthetic settings, we show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods.We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts.", "target": ["Wir kombinieren tiefe generative Modelle mit programmgesteuerter schwacher Überwachung, um koordinierte Multi-Agenten-Trajektorien von deutlich höherer Qualität zu erzeugen als frühere Basislösungen.", "Schlägt sequentielle generative Multi-Agenten-Modelle vor.", "Die Arbeit schlägt vor, generative Modelle zu trainieren, die Multi-Agenten-Trajektorien unter Verwendung heuristischer Funktionen erzeugen, die Variablen kennzeichnen, die ansonsten in den Trainingsdaten verborgen wären."]} {"source": "Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations.In this work, we present a new method that saves computational budget by terminating poor configurations early on in the training.In contrast to existing methods, we consider this task as a ranking and transfer learning problem.We qualitatively show that by optimizing a pairwise ranking loss and leveraging learning curves from other data sets, our model is able to effectively rank learning curves without having to observe many or very long learning curves.We further demonstrate that our method can be used to accelerate a neural architecture search by a factor of up to 100 without a significant performance degradation of the discovered architecture.In further experiments we analyze the quality of ranking, the influence of different model components as well as the predictive behavior of the model.", "target": ["Lernen, Lernkurven zu ordnen, um wenig vielversprechende Trainingsaufgaben frühzeitig zu beenden.
Neuheit: Verwendung von paarweisen Rangfolgeverlusten zur direkten Modellierung der Wahrscheinlichkeit von Verbesserungs- und Transferlernen über Datensätze hinweg, um die erforderlichen Trainingsdaten zu reduzieren.", "In der Arbeit wird eine Methode zur Einstufung von Lernkurven neuronaler Netze vorgeschlagen, die Lernkurven über verschiedene Datensätze hinweg modellieren kann, um höhere Geschwindigkeitssteigerungen bei Bildklassifizierungsaufgaben zu erzielen."]} {"source": "Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster.Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions.This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills.We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence.Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience.While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution - that of using experience replay buffers for all past events - with a mixture of on- and off-policy learning, leveraging behavioral cloning.We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities.When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one.", "target": ["Wir zeigen, dass in kontinuierlichen Lernumgebungen katastrophales Vergessen vermieden werden kann, indem man RL mit einer Mischung aus neuer und wiederholter Erfahrung anwendet, mit einem Verhaltensklonverlust.", "Er schlägt eine besondere Variante der Erfahrungswiederholung mit Verhaltensklonen als Methode für kontinuierliches Lernen vor."]} {"source": "We present a method which learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents.Our method is based on a graph-structured variational recurrent neural network, which is trained end-to-end to infer the current state of the (partially observed) world, as well as to forecast future states.We show that our method outperforms various baselines on two sports datasets, one based on real basketball trajectories, and one generated by a soccer game engine.", "target": ["Wir stellen eine Methode vor, die lernt, zeitliche Informationen und mehrdeutige visuelle Informationen im Kontext von interagierenden Agenten zu integrieren.", "Die Autoren schlagen Graph VRNN vor, das die Interaktion mehrerer Agenten modelliert, indem für jeden Agenten ein VRNN eingesetzt wird.", "In diesem Beitrag wird eine auf einem neuronalen Graphennetz basierende Architektur vorgestellt, die trainiert wird, um die Interaktionen von Agenten in einer Umgebung direkt aus Pixeln zu lokalisieren und zu modellieren, und die Vorteile des Modells 
für Tracking-Aufgaben und die Vorhersage von Agentenstandorten aufzeigt."]} {"source": "In this paper we study the problem of learning the weights of a deep convolutional neural network.We consider a network where convolutions are carried out over non-overlapping patches with a single kernel in each layer.We develop an algorithm for simultaneously learning all the kernels from the training data.Our approach dubbed Deep Tensor Decomposition (DeepTD) is based on a rank-1 tensor decomposition.We theoretically investigate DeepTD under a realizable model for the training data where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted convolutional kernels.We show that DeepTD is data-efficient and provably works as soon as the sample size exceeds the total number of convolutional weights in the network.Our numerical experiments demonstrate the effectiveness of DeepTD and verify our theoretical findings.", "target": ["Wir betrachten ein vereinfachtes Modell eines tiefen Convolutional Neural Networks. Wir zeigen, dass alle Schichten dieses Netzes mit einer geeigneten Anwendung der Tensorzerlegung annähernd gelernt werden können.", "Bietet theoretische Garantien für das Lernen von tiefen Convolutional Neural Networks unter Verwendung der Rang-Eins-Tensorzerlegung.", "In dieser Arbeit wird eine Lernmethode für einen eingeschränkten Fall von tiefen Convolutional Networks vorgeschlagen, bei dem die Schichten auf den nicht überlappenden Fall beschränkt sind und nur einen Ausgangskanal pro Schicht haben.", "Analysiert das Problem des Lernens einer sehr speziellen Klasse von CNNs: Jede Schicht besteht aus einem einzigen Filter, der auf nicht überlappende Bereiche der Eingabe angewendet wird."]} {"source": "Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy.However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively.Based on these results, we articulate the \"lottery ticket hypothesis:\" dense, randomly-initialized, feed-forward networks contain subnetworks (\"winning tickets\") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations.The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations.We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10.Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.", "target": ["Neuronale Netze mit Vorwärtskopplung, bei denen die Gewichte nach dem Training pruned werden können, könnten die gleichen Gewichte vor dem Training gepruned worden sein.", "Zeigt, dass es spärliche Subnetze gibt, die von Grund auf mit guter Generalisierungsleistung trainiert werden können, und schlägt in unpruned, zufällig
initialisierten NNs enthaltene Subnetze vor, die von Grund auf mit ähnlicher Generalisierungsgenauigkeit trainiert werden können.", "In der Arbeit wird die Hypothese untersucht, dass zufällig initialisierte neuronale Netze Teilnetze enthalten, die gleich schnell oder schneller konvergieren und die gleiche oder eine bessere Klassifizierungsgenauigkeit erreichen können."]} {"source": "We investigate the difficulties of training sparse neural networks and make new observations about optimization dynamics and the energy landscape within the sparse regime.Recent work of \\citep{Gale2019, Liu2018} has shown that sparse ResNet-50 architectures trained on ImageNet-2012 dataset converge to solutions that are significantly worse than those found by pruning.We show that, despite the failure of optimizers, there is a linear path with a monotonically decreasing objective from the initialization to the ``good'' solution.Additionally, our attempts to find a decreasing objective path from ``bad'' solutions to the ``good'' ones in the sparse subspace fail.However, if we allow the path to traverse the dense subspace, then we consistently find a path between two solutions.These findings suggest traversing extra dimensions may be needed to escape stationary points found in the sparse subspace.", "target": ["In diesem Beitrag wird die Schwierigkeit des Trainings spärlicher neuronaler Netze durch Interpolationsexperimente in der Energielandschaft aufgezeigt."]} {"source": "Neural network training depends on the structure of the underlying loss landscape, i.e. local minima, saddle points, flat plateaus, and loss barriers.In relation to the structure of the landscape, we study the permutation symmetry of neurons in each layer of a deep neural network, which gives rise not only to multiple equivalent global minima of the loss function but also to critical points in between partner minima.In a network of $d-1$ hidden layers with $n_k$ neurons in layers $k = 1, \\ldots, d$, we construct continuous paths between equivalent global minima that lead through a `permutation point' where the input and output weight vectors of two neurons in the same hidden layer $k$ collide and interchange.We show that such permutation points are critical points which lie inside high-dimensional subspaces of equal loss, contributing to the global flatness of the landscape.We also find that a permutation point for the exchange of neurons $i$ and $j$ transits into a flat high-dimensional plateau that enables all $n_k!$ permutations of neurons in a given layer $k$ at the same loss value. Moreover, we introduce higher-order permutation points by exploiting the hierarchical structure in the loss landscapes of neural networks, and find that the number of $K$-th order permutation points is much larger than the (already huge) number of equivalent global minima -- at least by a polynomial factor of order $K$. In two tasks, we demonstrate numerically with our path finding method that continuous paths between partner minima exist: first, in a toy network with a single hidden layer on a function approximation task and, second, in a multilayer network on the MNIST task.
Our geometric approach yields a lower bound on the number of critical points generated by weight-space symmetries and provides a simple intuitive link between previous theoretical results and numerical observations.", "target": ["Die Gewichtsraum-Symmetrie in den Landschaften neuronaler Netze führt zu einer Vielzahl von Sätteln und flachen hochdimensionalen Unterräumen.", "In der Arbeit wird eine verlustarme Methode zur Untersuchung der Verlustfunktion in Bezug auf die Parameter eines neuronalen Netzes unter dem Gesichtspunkt der Gewichtsraumsymmetrie vorgestellt."]} {"source": "The training of stochastic neural network models with binary ($\\pm1$) weights and activations via continuous surrogate networks is investigated.We derive, using mean field theory, a set of scalar equations describing how input signals propagate through surrogate networks.The equations reveal that depending on the choice of surrogate model, the networks may or may not exhibit an order to chaos transition, and the presence of depth scales that limit the maximum trainable depth.Specifically, in solving the equations for edge of chaos conditions, we show that surrogates derived using the Gaussian local reparameterisation trick have no critical initialisation, whereas deterministic surrogates based on analytic Gaussian integration do.The theory is applied to a range of binary neuron and weight design choices, such as different neuron noise models, allowing the categorisation of algorithms in terms of their behaviour at initialisation.Moreover, we predict theoretically and confirm numerically that common weight initialization schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance.This study shows that, contrary to common intuition, the means of the stochastic binary weights should be initialised close to $\\pm 1$ for deeper networks to be trainable.", "target": ["Signalausbreitungstheorie angewandt auf kontinuierliche Surrogates binärer Netze; kontraintuitive Initialisierung; Reparameterisierungstrick nicht hilfreich.", "Die Autoren untersuchen die Trainingsdynamik binärer neuronaler Netze bei Verwendung kontinuierlicher Surrogates, untersuchen, welche Eigenschaften Netze bei der Initialisierung haben sollten, um optimal zu trainieren, und geben konkrete Ratschläge zu stochastischen Gewichten bei der Initialisierung.", "Eine eingehende Untersuchung von stochastischen binären Netzen, kontinuierlichen Surrogaten und ihrer Trainingsdynamik, mit Einblicken in die Initialisierung von Gewichten für beste Leistung."]} {"source": "Semantic dependency parsing, which aims to find rich bi-lexical relationships, allows words to have multiple dependency heads, resulting in graph-structured representations.We propose an approach to semi-supervised learning of semantic dependency parsers based on the CRF autoencoder framework.Our encoder is a discriminative neural semantic dependency parser that predicts the latent parse graph of the input sentence.Our decoder is a generative neural model that reconstructs the input sentence conditioned on the latent parse graph.Our model is arc-factored and therefore parsing and learning are both tractable.Experiments show our model achieves significant and consistent improvement over the supervised baseline.", "target": ["Wir schlagen einen Ansatz zum halbüberwachten Lernen von semantischen Dependenzparsern vor, der auf dem CRF-Autoencoder-Rahmen basiert.", "Diese Arbeit konzentriert sich auf
halbüberwachtes semantisches Dependenzparsing mit dem CRF-Autoencoder, um das Modell halbüberwacht zu trainieren, und zeigt die Wirksamkeit bei Aufgaben mit wenigen gelabelten Daten."]} {"source": "For sequence models with large word-level vocabularies, a majority of network parameters lie in the input and output layers.In this work, we describe a new method, DeFINE, for learning deep word-level representations efficiently.Our architecture uses a hierarchical structure with novel skip-connections which allows for the use of low dimensional input and output layers, reducing total parameters and training time while delivering similar or better performance versus existing methods.DeFINE can be incorporated easily in new or existing sequence models.Compared to state-of-the-art methods including adaptive input representations, this technique results in a 6% to 20% drop in perplexity.On WikiText-103, DeFINE reduces total parameters of Transformer-XL by half with minimal impact on performance.On the Penn Treebank, DeFINE improves AWD-LSTM by 4 points with a 17% reduction in parameters, achieving comparable performance to state-of-the-art methods with fewer parameters.For machine translation, DeFINE improves a Transformer model by 2% while simultaneously reducing total parameters by 26%.", "target": ["DeFINE verwendet ein tiefes, hierarchisches, spärliches Netzwerk mit neuen Sprungverbindungen, um effizient bessere Worteinbettungen zu lernen.", "Dieses Papier beschreibt eine neue Methode zum effizienten Erlernen tiefer Wortrepräsentationen durch die Verwendung einer hierarchischen Struktur mit Skip-Verbindungen für die Verwendung von niedrigdimensionalen Eingabe- und Ausgabeschichten."]} {"source": "In this paper, we present a reproduction of the paper of Bertinetto et al. [2019] \"Meta-learning with differentiable closed-form solvers\" as part of the ICLR 2019 Reproducibility Challenge.In successfully reproducing the most crucial part of the paper, we reach a performance that is comparable with or superior to the original paper on two benchmarks for several settings.We evaluate new baseline results, using a new dataset presented in the paper.Yet, we also provide multiple remarks and recommendations about reproducibility and comparability.
After we brought our reproducibility work to the authors’ attention, they have updated the original paper on which this work is based and released code as well.Our contributions mainly consist in reproducing the most important results of their original paper, in giving insight into the reproducibility and in providing a first open-source implementation.", "target": ["Wir reproduzieren erfolgreich einen Meta-Learning-Ansatz für die Klassifizierung von wenigen Aufnahmen, der durch Backpropagieren der Lösung einer geschlossenen Form funktioniert, und geben Anmerkungen zum Vergleich mit Baselines."]} {"source": "Network pruning has emerged as a powerful technique for reducing the size of deep neural networks.Pruning uncovers high-performance subnetworks by taking a trained dense network and gradually removing unimportant connections.Recently, alternative techniques have emerged for training sparse networks directly without having to train a large dense model beforehand, thereby achieving small memory footprints during both training and inference.These techniques are based on dynamic reallocation of non-zero parameters during training.Thus, they are in effect executing a training-time search for the optimal subnetwork.We investigate one of the most recent of these techniques and conduct additional experiments to elucidate its behavior in training sparse deep convolutional networks.Dynamic parameter reallocation converges early during training to a highly trainable subnetwork.We show that neither the structure, nor the initialization of the discovered high-performance subnetwork is sufficient to explain its good performance.Rather, it is the dynamics of parameter reallocation that are responsible for successful learning.Dynamic parameter reallocation thus improves the trainability of deep convolutional networks, playing a similar role as overparameterization, without incurring the memory and computational cost of the latter.", "target": ["Die dynamische Neuzuweisung von Parametern ermöglicht ein erfolgreiches direktes Training kompakter spärlicher Netze und spielt selbst dann eine unverzichtbare Rolle, wenn wir das optimale spärliche Netz von vornherein kennen."]} {"source": "In this paper we present a thrust in three directions of visual development using supervised and semi-supervised techniques.The first is an implementation of semi-supervised object detection and recognition using the principles of Soft Attention and Generative Adversarial Networks (GANs).The second and the third are supervised networks that learn basic concepts of spatial locality and quantity respectively using Convolutional Neural Networks (CNNs).The three thrusts together are based on the approach of Experiential Robot Learning, introduced in a previous publication.While the results are unripe for implementation, we believe they constitute a stepping stone towards autonomous development of robotic visual modules.", "target": ["3 Schwerpunkte, die als Sprungbretter für das Roboter-Erfahrungslernen des Bildverarbeitungsmoduls dienen.", "Untersucht die Leistung bestehender Bildklassifizierer und Objektdetektoren."]} {"source": "Characterization of the representations learned in intermediate layers of deep networks can provide valuable insight into the nature of a task and can guide the development of well-tailored learning strategies.Here we study convolutional neural network-based acoustic models in the context of automatic speech recognition.Adapting a method proposed by Yosinski et al.
[2014], we measure the transferability of each layer between German and English to assess their language-specificity.We observe three distinct regions of transferability: (1) the first two layers are entirely transferable between languages, (2) layers 2–8 are also highly transferable but we find evidence of some language specificity, (3) the subsequent fully connected layers are more language specific but can be successfully finetuned to the target language.To further probe the effect of weight freezing, we performed follow-up experiments using freeze-training [Raghu et al., 2017].Our results are consistent with the observation that CNNs converge 'bottom up' during training and demonstrate the benefit of freeze training, especially for transfer learning.", "target": ["Mit Ausnahme der ersten beiden Schichten unserer CNN-basierten akustischen Modelle wiesen alle ein gewisses Maß an Sprachspezifität auf, aber das Einfriertraining ermöglichte einen erfolgreichen Transfer zwischen den Sprachen.", "Die Arbeit misst die Übertragbarkeit von Merkmalen für jede Schicht in CNN-basierten akustischen Modellen über verschiedene Sprachen hinweg und kommt zu dem Schluss, dass AMs, die mit der Technik des \"Freeze-Trainings\" trainiert wurden, andere übertragene Modelle übertreffen."]} {"source": "Policy gradient methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence.We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region).This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation.We show that in the small steps limit with respect to the Wasserstein distance $W_2$, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result.This means that policies undergo diffusion and advection, concentrating near actions with high reward.This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.", "target": ["Verknüpfung von Wasserstein-Vertrauensgebieten, entropischen Policy-Gradienten und der Wärmegleichung.", "Die Arbeit untersucht die Verbindungen zwischen Reinforcement Learning und der Theorie des quadratischen optimalen Transports.", "Die Autoren untersuchten den Policy-Gradienten mit einem Wechsel der Policies, der durch eine Vertrauensregion mit Wasserstein-Distanz im Multi-Armed-Bandit-Setting begrenzt ist, und zeigten, dass die Policy-Dynamik im Grenzbereich kleiner Schritte durch die Wärmegleichung (Fokker-Planck-Gleichung) bestimmt wird."]} {"source": "The softmax function is widely used to train deep neural networks for multi-class classification.Despite its outstanding performance in classification tasks, the features derived from the supervision of softmax are usually sub-optimal in some scenarios where Euclidean distances apply in feature spaces.To address this issue, we propose a new loss, dubbed the isotropic loss, in the sense that the overall distribution of data points is regularized to approach the isotropic normal one.Combined with the vanilla softmax, we formalize a novel criterion called the isotropic softmax, or isomax for short, for supervised learning of deep neural networks.By virtue of the isomax, the intra-class features are penalized by the isotropic loss while inter-class distances are well kept by the original softmax loss.Moreover, the isomax loss
does not require any additional modifications to the network, mini-batches or the training process.Extensive experiments on classification and clustering are performed to demonstrate the superiority and robustness of the isomax loss.", "target": ["Die Unterscheidungsfähigkeit von Softmax beim Lernen von Merkmalsvektoren von Objekten wird durch die isotrope Normalisierung der globalen Verteilung der Datenpunkte effektiv verbessert."]} {"source": "A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient.Recently, Jin et al. (2018) proposed a Q-learning algorithm with UCB exploration policy, and proved it has a nearly optimal regret bound for finite-horizon episodic MDP.In this paper, we adapt Q-learning with UCB-exploration bonus to infinite-horizon MDP with discounted rewards \\emph{without} accessing a generative model.We show that the \\textit{sample complexity of exploration} of our algorithm is bounded by $\\tilde{O}({\\frac{SA}{\\epsilon^2(1-\\gamma)^7}})$.This improves the previously best known result of $\\tilde{O}({\\frac{SA}{\\epsilon^4(1-\\gamma)^8}})$ in this setting achieved by delayed Q-learning (Strehl et al., 2006), and matches the lower bound in terms of $\\epsilon$ as well as $S$ and $A$ up to logarithmic factors.", "target": ["Wir passen das Q-Lernen mit UCB-Explorationsbonus an ein MDP mit unendlichem Zeithorizont und diskontierten Belohnungen an, ohne auf ein generatives Modell zuzugreifen, und verbessern das bisher beste bekannte Ergebnis.", "In dieser Arbeit wurde ein Q-Learning-Algorithmus mit UCB-Explorationspolitik für MDP mit unendlichem Horizont betrachtet."]} {"source": "Backpropagation is driving today's artificial neural networks (ANNs).However, despite extensive research, it remains unclear if the brain implements this algorithm.Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative: neurons can randomly introduce change, and use unspecific feedback signals to observe their effect on the cost and thus approximate their gradient.However, the convergence rate of such learning scales poorly with the number of involved neurons.Here we propose a hybrid learning approach.Each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide.We provide proof that our approach converges to the true gradient for certain classes of networks.In both feedforward and convolutional networks, we empirically show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning.Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules.", "target": ["Störungen können verwendet werden, um Rückkopplungsgewichte zu trainieren, die in Fully Connected und Convolutional Neural Networks lernen.", "In dieser Arbeit wird eine Methode vorgeschlagen, die das Problem des \"Gewichtstransports\" löst, indem die Gewichte für den Rückwärtsdurchlauf mit Hilfe eines rauschbasierten Schätzers geschätzt werden."]} {"source": "This paper proposes and demonstrates a surprising pattern in the training of neural networks: there is a one-to-one relation between the values of any pair of losses (such as cross entropy, mean squared error, 0/1 error etc.)
evaluated for a model arising at (any point of) a training run.This pattern is universal in the sense that this one-to-one relationship is identical across architectures (such as VGG, Resnet, Densenet etc.), algorithms (SGD and SGD with momentum) and training loss functions (cross entropy and mean squared error).", "target": ["Wir identifizieren einige universelle Muster (d.h. über alle Architekturen hinweg) im Verhalten verschiedener Ersatzverluste (CE, MSE, 0-1 Verlust) beim Training neuronaler Netze und präsentieren unterstützende empirische Beweise."]}