Link/DOI (string) | Publication Date (timestamp[ns]) | Title (string) | Authors (string) | Abstract (string) | Categories (string) | label (int64) | source (string) | Classification_embedding (list) | Proximity_embedding (list) | top_10_similar (string) | max_similarity (float64) | avg_similarity (float64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
http://arxiv.org/abs/1902.05605v4 | 2019-02-14T00:00:00 | CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity | Aditya Bhatt; Daniel Palenicek; Boris Belousov; Max Argus; Artemij Amiranashvili; Thomas Brox; Jan Peters | Sample efficiency is a crucial problem in deep reinforcement learning. Recent algorithms, such as REDQ and DroQ, found a way to improve the sample efficiency by increasing the update-to-data (UTD) ratio to 20 gradient update steps on the critic per environment sample. However, this comes at the expense of a greatly increased computational cost. To reduce this computational burden, we introduce CrossQ: A lightweight algorithm for continuous control tasks that makes careful use of Batch Normalization and removes target networks to surpass the current state-of-the-art in sample efficiency while maintaining a low UTD ratio of 1. Notably, CrossQ does not rely on advanced bias-reduction schemes used in current methods. CrossQ's contributions are threefold: (1) it matches or surpasses current state-of-the-art methods in terms of sample efficiency, (2) it substantially reduces the computational cost compared to REDQ and DroQ, (3) it is easy to implement, requiring just a few lines of code on top of SAC. | cs.LG; stat.ML | 1 | ICLR-2024 | [-0.66291344165802, -0.9310970306396484, 0.3393835425376892, -0.39459407329559326, -0.15434421598911285, -0.09034492075443268, 0.3100298047065735, 0.26732906699180603, -0.5123820304870605, -0.21383555233478546, -0.4221090078353882, 1.0684516429901123, 0.8614672422409058, 0.4628746509552002, ...] | [-0.11044733226299286, 0.369069904088974, -0.395124226808548, 0.10284718871116638, -0.1350134015083313, 0.06156173720955849, 0.7323307394981384, -0.13476814329624176, -0.8261107802391052, 0.44935739040374756, -0.009715866297483444, 0.0547449067234993, 0.44324997067451477, -0.143585309386253, ...] | {"http://arxiv.org/abs/2304.10466v1": 0.954596757888794, "http://arxiv.org/abs/2205.15043v2": 0.9472078084945679, "http://arxiv.org/abs/2205.11027v3": 0.9450389742851257, "http://arxiv.org/abs/2302.10145v1": 0.9449824094772339, "http://arxiv.org/abs/2210.12566v2": 0.944953203201294, "http://arxiv.org/abs/2301.11490v3": 0.9447583556175232, "http://arxiv.org/abs/2210.01542v1": 0.942772626876831, "http://arxiv.org/abs/2302.11312v1": 0.9404850602149963, "http://arxiv.org/abs/2208.06193v3": 0.9399589896202087, "http://arxiv.org/abs/2306.13085v1": 0.9398437142372131} | 0.954597 | 0.94446 |
http://arxiv.org/abs/2003.13898v3 | 2020-03-31T00:00:00 | Edge Guided GANs with Contrastive Learning for Semantic Image Synthesis | Hao Tang; Xiaojuan Qi; Guolei Sun; Dan Xu; Nicu Sebe; Radu Timofte; Luc Van Gool | We propose a novel ECGAN for the challenging semantic image synthesis task. Although considerable improvement has been achieved, the quality of synthesized images is far from satisfactory due to three largely unresolved challenges. 1) The semantic labels do not provide detailed structural information, making it difficult to synthesize local details and structures. 2) The widely adopted CNN operations such as convolution, down-sampling, and normalization usually cause spatial resolution loss and thus cannot fully preserve the original semantic information, leading to semantically inconsistent results. 3) Existing semantic image synthesis methods focus on modeling local semantic information from a single input semantic layout. However, they ignore global semantic information of multiple input semantic layouts, i.e., semantic cross-relations between pixels across different input layouts. To tackle 1), we propose to use edge as an intermediate representation which is further adopted to guide image generation via a proposed attention guided edge transfer module. Edge information is produced by a convolutional generator and introduces detailed structure information. To tackle 2), we design an effective module to selectively highlight class-dependent feature maps according to the original semantic layout to preserve the semantic information. To tackle 3), inspired by current methods in contrastive learning, we propose a novel contrastive learning method, which aims to enforce pixel embeddings belonging to the same semantic class to generate more similar image content than those from different classes. Doing so can capture more semantic relations by explicitly exploring the structures of labeled pixels from multiple input semantic layouts. Experiments on three challenging datasets show that our ECGAN achieves significantly better results than state-of-the-art methods. | cs.CV; cs.LG; eess.IV | 1 | ICLR-2023 | [-0.49626874923706055, -0.46649807691574097, -0.3833135962486267, -0.343882292509079, -0.4688773453235626, 0.43584397435188293, 0.252405047416687, -0.16754350066184998, 0.09305083006620407, -1.051220417022705, -0.3884459435939789, 1.9267717599868774, 0.7328929901123047, -0.15373994410037994, ...] | [0.3297836184501648, 0.399527907371521, -0.05010775476694107, 0.10227156430482864, -0.22314831614494324, 0.057503435760736465, 0.2564406991004944, -0.32262739539146423, 0.10919182002544403, -0.4649718999862671, 0.6851951479911804, 0.9265543222427368, 0.35406437516212463, -0.6678822040557861, ...] | {} | null | null |
http://arxiv.org/abs/2006.07796v4 | 2020-06-14T00:00:00 | Structure by Architecture: Structured Representations without Regularization | Felix Leeb; Guilia Lanzillotta; Yashas Annadani; Michel Besserve; Stefan Bauer; Bernhard Schölkopf | We study the problem of self-supervised structured representation learning using autoencoders for downstream tasks such as generative modeling. Unlike most methods which rely on matching an arbitrary, relatively unstructured, prior distribution for sampling, we propose a sampling technique that relies solely on the independence of latent variables, thereby avoiding the trade-off between reconstruction quality and generative performance typically observed in VAEs. We design a novel autoencoder architecture capable of learning a structured representation without the need for aggressive regularization. Our structural decoders learn a hierarchy of latent variables, thereby ordering the information without any additional regularization or supervision. We demonstrate how these models learn a representation that improves results in a variety of downstream tasks including generation, disentanglement, and extrapolation using several challenging and natural image datasets. | cs.LG; cs.CV; stat.ML | 1 | ICLR-2023 | [-0.617473840713501, -0.3876475393772125, -0.34355098009109497, -0.1945188045501709, -0.5904003381729126, -0.10441194474697113, 0.49440860748291016, 0.12180665135383606, -0.16045475006103516, -0.9508771896362305, -0.14502418041229248, 1.3771533966064453, 0.7589877843856812, 0.40906929969787, ...] | [0.2846464514732361, 0.788372814655304, -0.23505939543247223, -0.06445222347974777, -0.30919349193573, -0.2559911608695984, 0.7198917269706726, -0.7416119575500488, 0.25550389289855957, -0.49763917922973633, 0.7615611553192139, 0.4657799005508423, 0.31415700912475586, -0.2725054621696472, ...] | {} | null | null |
http://arxiv.org/abs/2007.09890v3 | 2020-07-20T00:00:00 | Learning the Positions in CountSketch | Simin Liu; Tianrui Liu; Ali Vakilian; Yulin Wan; David P. Woodruff | We consider sketching algorithms which first quickly compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem, e.g., low rank approximation. In the learning-based sketching paradigm proposed by Indyk et al. [2019], the sketch matrix is found by choosing a random sparse matrix, e.g., the CountSketch, and then updating the values of the non-zero entries by running gradient descent on a training data set. Despite the growing body of work on this paradigm, a noticeable omission is that the locations of the non-zero entries of previous algorithms were fixed, and only their values were learned. In this work we propose the first learning algorithm that also optimizes the locations of the non-zero entries. We show this algorithm gives better accuracy for low rank approximation than previous work, and apply it to other problems such as $k$-means clustering for the first time. We show that our algorithm is provably better in the spiked covariance model and for Zipfian matrices. We also show the importance of the sketch monotonicity property for combining learned sketches. Our empirical results show the importance of optimizing not only the values of the non-zero entries but also their positions. | cs.LG; cs.DS; cs.NA; math.NA; stat.ML | 1 | ICLR-2023 | [-1.6176917552947998, -0.7511022090911865, -0.17473077774047852, -0.6125714778900146, -1.694733738899231, -0.21196356415748596, 0.30018311738967896, 0.5417760610580444, -0.02536710351705551, -0.4176151752471924, 0.1254631131887436, 1.4425902366638184, -0.06326741725206375, 0.760006129741668, ...] | [0.12348458170890808, 0.5360383987426758, -0.9392450451850891, -0.02199743688106537, -0.4796910881996155, -0.037633974105119705, 0.8480938076972961, -0.261338472366333, -0.2165193110704422, -0.5568140745162964, 0.7258276343345642, 0.22235091030597687, -0.08329570293426514, -0.04748295247554, ...] | {} | null | null |
http://arxiv.org/abs/2008.03738v2 | 2020-08-09T00:00:00 | Treatment Effects Estimation by Uniform Transformer | Ruoqi Yu; Shulei Wang | In observational studies, balancing covariates in different treatment groups is essential to estimate treatment effects. One of the most commonly used methods for such purposes is weighting. The performance of this class of methods usually depends on strong regularity conditions for the underlying model, which might not hold in practice. In this paper, we investigate weighting methods from a functional estimation perspective and argue that the weights needed for covariate balancing could differ from those needed for treatment effects estimation under low regularity conditions. Motivated by this observation, we introduce a new framework of weighting that directly targets the treatment effects estimation. Unlike existing methods, the resulting estimator for a treatment effect under this new framework is a simple kernel-based $U$-statistic after applying a data-driven transformation to the observed covariates. We characterize the theoretical properties of the new estimators of treatment effects under a nonparametric setting and show that they are able to work robustly under low regularity conditions. The new framework is also applied to several numerical examples to demonstrate its practical merits. | stat.ME; math.ST; stat.TH | 1 | ICLR-2024 | [-1.1140481233596802, -0.3708285391330719, -1.4242852926254272, -0.6285358667373657, -0.6922053098678589, 1.2139124870300293, 0.6684509515762329, 0.241927832365036, 0.16394862532615662, 1.6242477893829346, 0.02076822519302368, 1.1422077417373657, 0.16787250339984894, -0.45601192116737366, ...] | [0.87340247631073, 0.6607159376144409, -1.2003120183944702, -0.016772247850894928, -0.7433194518089294, 0.6987985968589783, -0.13270306587219238, -0.30221444368362427, -0.13744932413101196, 0.2862550616264343, 0.8816241025924683, 0.4016844630241394, -0.2706587016582489, -0.257060706615448, ...] | {"http://arxiv.org/abs/2306.06263v1": 0.922035813331604, "http://arxiv.org/abs/2208.08544v3": 0.921151340007782, "http://arxiv.org/abs/2210.00079v3": 0.9103657603263855, "http://arxiv.org/abs/2307.11503v1": 0.8976566791534424, "http://arxiv.org/abs/2206.02792v1": 0.896964430809021, "http://arxiv.org/abs/2201.12293v4": 0.8897626399993896, "http://arxiv.org/abs/2302.11337v3": 0.8893073797225952, "http://arxiv.org/abs/2307.08609v1": 0.8885442614555359, "http://arxiv.org/abs/2205.14977v3": 0.8884826898574829, "http://arxiv.org/abs/2309.14581v1": 0.8831429481506348} | 0.922036 | 0.898741 |
http://arxiv.org/abs/2102.09407v5 | 2021-02-18T00:00:00 | Adaptive Rational Activations to Boost Deep Reinforcement Learning | Quentin Delfosse; Patrick Schramowski; Martin Mundt; Alejandro Molina; Kristian Kersting | Latest insights from biology show that intelligence not only emerges from the connections between neurons but that individual neurons shoulder more computational responsibility than previously anticipated. This perspective should be critical in the context of constantly changing distinct reinforcement learning environments, yet current approaches still primarily employ static activation functions. In this work, we motivate why rationals are suitable for adaptable activation functions and why their inclusion into neural networks is crucial. Inspired by recurrence in residual networks, we derive a condition under which rational units are closed under residual connections and formulate a naturally regularised version: the recurrent-rational. We demonstrate that equipping popular algorithms with (recurrent-)rational activations leads to consistent improvements on Atari games, especially turning simple DQN into a solid approach, competitive to DDQN and Rainbow. | cs.LG | 1 | ICLR-2024 | [-0.820600688457489, -0.66475909948349, -0.06603653728961945, -0.0813857689499855, -0.22895292937755585, 0.037353962659835815, -0.26541951298713684, 0.6578511595726013, -0.22478929162025452, 0.6004663705825806, -0.7870171070098877, 1.0944037437438965, 0.3034815788269043, 0.308514803647995, ...] | [-0.23682548105716705, 0.9733121395111084, -0.37704765796661377, 0.420199453830719, 0.35452955961227417, 0.01584509387612343, 0.3571116030216217, -0.3924747407436371, -0.7421926856040955, 0.15809044241905212, 0.035127826035022736, 0.07422138750553131, 0.26842379570007324, -0.062350831925868, ...] | {"http://arxiv.org/abs/2210.01542v1": 0.9413725137710571, "http://arxiv.org/abs/2210.13435v1": 0.9281185269355774, "http://arxiv.org/abs/2303.11934v1": 0.9249167442321777, "http://arxiv.org/abs/2205.15043v2": 0.9246573448181152, "http://arxiv.org/abs/2210.02157v2": 0.9245254397392273, "http://arxiv.org/abs/2209.10634v2": 0.9243888258934021, "http://arxiv.org/abs/2303.00599v1": 0.9241913557052612, "http://arxiv.org/abs/2304.10466v1": 0.9224520921707153, "http://arxiv.org/abs/2301.11490v3": 0.9222976565361023, "http://arxiv.org/abs/2201.08115v2": 0.9220247864723206} | 0.941373 | 0.925895 |
http://arxiv.org/abs/2102.10882v3 | 2021-02-22T00:00:00 | Conditional Positional Encodings for Vision Transformers | Xiangxiang Chu; Zhi Tian; Bo Zhang; Xinlong Wang; Chunhua Shen | We propose a conditional positional encoding (CPE) scheme for vision Transformers. Unlike previous fixed or learnable positional encodings, which are pre-defined and independent of input tokens, CPE is dynamically generated and conditioned on the local neighborhood of the input tokens. As a result, CPE can easily generalize to the input sequences that are longer than what the model has ever seen during training. Besides, CPE can keep the desired translation-invariance in the image classification task, resulting in improved performance. We implement CPE with a simple Position Encoding Generator (PEG) to get seamlessly incorporated into the current Transformer framework. Built on PEG, we present Conditional Position encoding Vision Transformer (CPVT). We demonstrate that CPVT has visually similar attention maps compared to those with learned positional encodings and delivers outperforming results. Our code is available at https://github.com/Meituan-AutoML/CPVT . | cs.CV; cs.AI; cs.LG | 1 | ICLR-2023 | [-0.0836898684501648, -0.6185870170593262, 0.19620980322360992, -0.5301298499107361, -0.04490542411804199, -0.008586380630731583, 0.6108390688896179, 0.36469733715057373, -0.6776522397994995, -1.0573068857192993, -0.37013331055641174, 1.1633328199386597, 0.7199555039405823, 0.18908077478408, ...] | [0.6201993227005005, 0.22138410806655884, -0.05666289106011391, 0.013244099915027618, -0.19509850442409515, -0.0066283708438277245, 0.8183027505874634, -0.15678533911705017, -0.6908849477767944, -0.6637924909591675, 0.4480058252811432, 0.19327974319458008, 0.26356256008148193, -0.0861389413, ...] | {} | null | null |
http://arxiv.org/abs/2103.01403v3 | 2021-03-02T00:00:00 | A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics | Qing Li; Siyuan Huang; Yining Hong; Yixin Zhu; Ying Nian Wu; Song-Chun Zhu | Inspired by humans' exceptional ability to master arithmetic and generalize to new problems, we present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts at three levels: perception, syntax, and semantics. In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form a valid expression (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts w.r.t. the three levels. Further, we design a few-shot learning split to determine whether or not models can rapidly learn new concepts and generalize them to more complex scenarios. To comprehend existing models' limitations, we undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3 (with the chain of thought prompting). The results indicate that current models struggle to extrapolate to long-range syntactic dependency and semantics. Models exhibit a considerable gap toward human-level generalization when evaluated with new concepts in a few-shot setting. Moreover, we discover that it is infeasible to solve HINT by merely scaling up the dataset and the model size; this strategy contributes little to the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 experiments, the chain of thought prompting exhibits impressive results and significantly boosts the test accuracy. We believe the HINT dataset and the experimental findings are of great interest to the learning community on systematic generalization. | cs.LG; cs.AI; cs.CV | 1 | ICLR-2023 | [-0.4402918815612793, -0.46931028366088867, 0.8327910900115967, -0.6764956712722778, 0.33663028478622437, -0.21095795929431915, -0.14375193417072296, 0.5198475122451782, -0.46770697832107544, -1.368025302886963, -0.8177934288978577, 1.02467942237854, 0.925028920173645, 0.39266303181648254, ...] | [0.1770109236240387, 0.49474531412124634, -0.012065744958817959, -0.6595368981361389, 0.2432122379541397, -0.041184600442647934, 0.3271341621875763, 0.27895477414131165, -0.36322125792503357, -0.3992106318473816, 0.2622905969619751, 0.00456913560628891, 0.19794493913650513, -0.0174632780253, ...] | {} | null | null |
http://arxiv.org/abs/2105.03692v4 | 2021-05-08T00:00:00 | Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks | Charles Jin; Melinda Sun; Martin Rinard | We propose a novel clustering mechanism based on an incompatibility property between subsets of data that emerges during model training. This mechanism partitions the dataset into subsets that generalize only to themselves, i.e., training on one subset does not improve performance on the other subsets. Leveraging the interaction between the dataset and the training process, our clustering mechanism partitions datasets into clusters that are defined by--and therefore meaningful to--the objective of the training process. We apply our clustering mechanism to defend against data poisoning attacks, in which the attacker injects malicious poisoned data into the training dataset to affect the trained model's output. Our evaluation focuses on backdoor attacks against deep neural networks trained to perform image classification using the GTSRB and CIFAR-10 datasets. Our results show that (1) these attacks produce poisoned datasets in which the poisoned and clean data are incompatible and (2) our technique successfully identifies (and removes) the poisoned data. In an end-to-end evaluation, our defense reduces the attack success rate to below 1% on 134 out of 165 scenarios, with only a 2% drop in clean accuracy on CIFAR-10 and a negligible drop in clean accuracy on GTSRB. | cs.LG; cs.CR; stat.ML | 1 | ICLR-2023 | [-0.37018436193466187, -1.4452083110809326, -0.31904152035713196, -0.45182371139526367, -0.12147645652294159, -0.061908505856990814, 1.05083167552948, 0.1093696728348732, -0.9198442101478577, -0.8656286001205444, -0.90220707654953, 0.798670768737793, 0.255199670791626, 0.3586270213127136, ...] | [-0.516154944896698, -0.31915998458862305, -0.5338889956474304, 0.39147329330444336, -0.26011013984680176, -0.648097574710846, 0.559228241443634, -0.30570095777511597, -0.33112457394599915, -0.5737159848213196, 0.05314182862639427, -0.14378681778907776, 0.32628944516181946, 0.33085709810256, ...] | {} | null | null |
http://arxiv.org/abs/2105.14559v3 | 2021-05-30T00:00:00 | Active Learning in Bayesian Neural Networks with Balanced Entropy Learning Principle | Jae Oh Woo | Acquiring labeled data is challenging in many machine learning applications with limited budgets. Active learning gives a procedure to select the most informative data points and improve data efficiency by reducing the cost of labeling. The info-max learning principle maximizing mutual information such as BALD has been successful and widely adapted in various active learning applications. However, this pool-based specific objective inherently introduces a redundant selection and further requires a high computational cost for batch selection. In this paper, we design and propose a new uncertainty measure, Balanced Entropy Acquisition (BalEntAcq), which captures the information balance between the uncertainty of underlying softmax probability and the label variable. To do this, we approximate each marginal distribution by Beta distribution. Beta approximation enables us to formulate BalEntAcq as a ratio between an augmented entropy and the marginalized joint entropy. The closed-form expression of BalEntAcq facilitates parallelization by estimating two parameters in each marginal Beta distribution. BalEntAcq is a purely standalone measure without requiring any relational computations with other data points. Nevertheless, BalEntAcq captures a well-diversified selection near the decision boundary with a margin, unlike other existing uncertainty measures such as BALD, Entropy, or Mean Standard Deviation (MeanSD). Finally, we demonstrate that our balanced entropy learning principle with BalEntAcq consistently outperforms well-known linearly scalable active learning methods, including a recently proposed PowerBALD, a simple but diversified version of BALD, by showing experimental results obtained from MNIST, CIFAR-100, SVHN, and TinyImageNet datasets. | cs.LG; stat.ML | 1 | ICLR-2023 | [-0.34924548864364624, -0.8688428997993469, 0.09013675153255463, -0.8125857710838318, -0.20999838411808014, 0.14888039231300354, 0.40915441513061523, 0.42102277278900146, -0.9711588621139526, -0.025488846004009247, -0.5186123847961426, 1.2371513843536377, 0.8325861692428589, 0.8093088269233, ...] | [0.19341912865638733, 1.2231199741363525, -1.0490398406982422, -0.12455292791128159, -0.42525553703308105, 0.23279394209384918, 0.5866612792015076, -0.2136472463607788, -0.4815607964992523, 0.21379615366458893, 0.8596210479736328, 0.7320253849029541, 0.455506831407547, 0.11812826991081238, ...] | {} | null | null |
http://arxiv.org/abs/2106.09256v4 | 2021-06-17T00:00:00 | Seeing Differently, Acting Similarly: Heterogeneously Observable Imitation Learning | Xin-Qiang Cai; Yao-Xiang Ding; Zi-Xuan Chen; Yuan Jiang; Masashi Sugiyama; Zhi-Hua Zhou | In many real-world imitation learning tasks, the demonstrator and the learner have to act under different observation spaces. This situation brings significant obstacles to existing imitation learning approaches, since most of them learn policies under homogeneous observation spaces. On the other hand, previous studies under different observation spaces have strong assumptions that these two observation spaces coexist during the entire learning process. However, in reality, the observation coexistence will be limited due to the high cost of acquiring expert observations. In this work, we study this challenging problem with limited observation coexistence under heterogeneous observations: Heterogeneously Observable Imitation Learning (HOIL). We identify two underlying issues in HOIL: the dynamics mismatch and the support mismatch, and further propose the Importance Weighting with REjection (IWRE) algorithm based on importance weighting and learning with rejection to solve HOIL problems. Experimental results show that IWRE can solve various HOIL tasks, including the challenging tasks of transforming the vision-based demonstrations to random access memory (RAM)-based policies in the Atari domain, even with limited visual observations. | cs.LG; cs.AI | 1 | ICLR-2023 | [-1.3380966186523438, -0.9419226050376892, -0.19381506741046906, -0.5760135054588318, -0.6192589998245239, 0.5576387643814087, -0.10213492810726166, 0.8026064038276672, -0.47118857502937317, -0.09810575842857361, -1.1096491813659668, 0.6672664284706116, 0.3943876624107361, 0.351197957992553, ...] | [0.016970589756965637, 0.5249782204627991, -0.7417017817497253, 0.5488038063049316, -0.37736108899116516, 0.2222440540790558, 0.3721325695514679, -0.13776016235351562, -0.8824924230575562, 0.09408016502857208, 0.11544178426265717, -0.20038168132305145, -0.04509687051177025, 0.24152711033821, ...] | {} | null | null |
http://arxiv.org/abs/2106.09779v10 | 2021-06-17T00:00:00 | Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses | Andrew Lowy; Meisam Razaviyayn | This paper studies federated learning (FL)--especially cross-silo FL--with data from people who do not trust the server or other silos. In this setting, each silo (e.g. hospital) has data from different people (e.g. patients) and must maintain the privacy of each person's data (e.g. medical record), even if the server or other silos act as adversarial eavesdroppers. This requirement motivates the study of Inter-Silo Record-Level Differential Privacy (ISRL-DP), which requires silos' communications to satisfy record/item-level differential privacy (DP). ISRL-DP ensures that the data of each person (e.g. patient) in silo i (e.g. hospital i) cannot be leaked. ISRL-DP is different from well-studied privacy notions. Central and user-level DP assume that people trust the server/other silos. On the other end of the spectrum, local DP assumes that people do not trust anyone at all (even their own silo). Sitting between central and local DP, ISRL-DP makes the realistic assumption (in cross-silo FL) that people trust their own silo, but not the server or other silos. In this work, we provide tight (up to logarithms) upper and lower bounds for ISRL-DP FL with convex/strongly convex loss functions and homogeneous (i.i.d.) silo data. Remarkably, we show that similar bounds are attainable for smooth losses with arbitrary heterogeneous silo data distributions, via an accelerated ISRL-DP algorithm. We also provide tight upper and lower bounds for ISRL-DP federated empirical risk minimization, and use acceleration to attain the optimal bounds in fewer rounds of communication than the state-of-the-art. Finally, with a secure "shuffler" to anonymize silo messages (but without a trusted server), our algorithm attains the optimal central DP rates under more practical trust assumptions. Numerical experiments show favorable privacy-accuracy tradeoffs for our algorithm in classification and regression tasks. | cs.LG; cs.CR; math.OC; stat.ML | 1 | ICLR-2023 | [-1.2900537252426147, 0.27248454093933105, -0.5491219162940979, -0.7924072742462158, -1.0884712934494019, 0.6130950450897217, 0.1256091147661209, 1.2123488187789917, -0.2354287952184677, 0.22863036394119263, -0.6383213996887207, 1.2778881788253784, -0.31300443410873413, 0.032071031630039215, ...] | [0.4836312532424927, 0.4506132900714874, -0.9173232316970825, 0.08411850035190582, -0.7405535578727722, 0.3953872621059418, 0.14718055725097656, 0.3281877338886261, -0.18046459555625916, 0.049115851521492004, 0.0876193717122078, 0.19473160803318024, -0.0836520865559578, -0.06489405781030655, ...] | {} | null | null |
http://arxiv.org/abs/2107.14317v2 | 2021-07-29T00:00:00 | Temporal Dependencies in Feature Importance for Time Series Predictions | Kin Kwan Leung; Clayton Rooke; Jonathan Smith; Saba Zuberi; Maksims Volkovs | Time series data introduces two key challenges for explainability methods: firstly, observations of the same feature over subsequent time steps are not independent, and secondly, the same feature can have varying importance to model predictions over time. In this paper, we propose Windowed Feature Importance in Time (WinIT), a feature removal based explainability approach to address these issues. Unlike existing feature removal explanation methods, WinIT explicitly accounts for the temporal dependence between different observations of the same feature in the construction of its importance score. Furthermore, WinIT captures the varying importance of a feature over time, by summarizing its importance over a window of past time steps. We conduct an extensive empirical study on synthetic and real-world data, compare against a wide range of leading explainability methods, and explore the impact of various evaluation strategies. Our results show that WinIT achieves significant gains over existing methods, with more consistent performance across different evaluation metrics. The code for our work is publicly available at \url{https://github.com/layer6ai-labs/WinIT}. | cs.LG | 1 | ICLR-2023 | [-1.9814486503601074, -1.3910081386566162, -0.4110342562198639, -0.7246707081794739, -0.5926828980445862, -0.32587122917175293, 0.7588282227516174, -0.18983328342437744, -0.13029196858406067, -0.3828207850456238, -0.056385476142168045, 1.9053164720535278, 0.6107302308082581, 0.0361184626817, ...] | [0.0048394352197647095, 0.4149002432823181, -0.4514267146587372, -0.3286390006542206, -0.07649215310811996, 0.14372208714485168, 0.7504097819328308, -0.27369484305381775, 0.005543246399611235, -0.09689952433109283, 0.5678091049194336, 0.47309350967407227, -0.2982041835784912, 0.061680372804, ...] | {} | null | null |
http://arxiv.org/abs/2108.11371v1 | 2021-08-25T00:00:00 | Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization | Difan Zou; Yuan Cao; Yuanzhi Li; Quanquan Gu | Adaptive gradient methods such as Adam have gained increasing popularity in deep learning optimization. However, it has been observed that compared with (stochastic) gradient descent, Adam can converge to a different solution with a significantly worse test error in many deep learning applications such as image classification, even with a fine-tuned regularization. In this paper, we provide a theoretical explanation for this phenomenon: we show that in the nonconvex setting of learning over-parameterized two-layer convolutional neural networks starting from the same random initialization, for a class of data distributions (inspired from image data), Adam and gradient descent (GD) can converge to different global solutions of the training objective with provably different generalization errors, even with weight decay regularization. In contrast, we show that if the training objective is convex, and the weight decay regularization is employed, any optimization algorithms including Adam and GD will converge to the same solution if the training is successful. This suggests that the inferior generalization performance of Adam is fundamentally tied to the nonconvex landscape of deep learning optimization. | cs.LG; math.OC; stat.ML | 1 | ICLR-2023 | [-0.9644656181335449, -0.6835591793060303, 0.2905294895172119, -0.005392841994762421, -0.7079866528511047, 0.20387521386146545, 0.16938376426696777, -0.00418319646269083, -0.4316701889038086, -0.16891422867774963, -0.8092949986457825, 1.1330541372299194, 0.6742006540298462, 0.58436763286590, ...] | [-0.10097421705722809, 1.3478302955627441, -0.2815351188182831, -0.09282144159078598, -0.3221208453178406, 0.6977221369743347, 0.15782928466796875, -0.4677950143814087, -0.9332365393638611, 0.10309773683547974, 0.07543672621250153, 0.05283692479133606, 0.5553334355354309, -0.153076872229576, ...] | {} | null | null |
http://arxiv.org/abs/2109.07867v1 | 2021-09-16T00:00:00 | Humanly Certifying Superhuman Classifiers | Qiongkai Xu; Christian Walder; Chenchen Xu | Estimating the performance of a machine learning system is a longstanding challenge in artificial intelligence research. Today, this challenge is especially relevant given the emergence of systems which appear to increasingly outperform human beings. In some cases, this "superhuman" performance is readily demonstrated; for example by defeating legendary human players in traditional two player games. On the other hand, it can be challenging to evaluate classification models that potentially surpass human performance. Indeed, human annotations are often treated as a ground truth, which implicitly assumes the superiority of the human over any models trained on human annotations. In reality, human annotators can make mistakes and be subjective. Evaluating the performance with respect to a genuine oracle may be more objective and reliable, even when querying the oracle is expensive or impossible. In this paper, we first raise the challenge of evaluating the performance of both humans and models with respect to an oracle which is unobserved. We develop a theory for estimating the accuracy compared to the oracle, using only imperfect human annotations for reference. Our analysis provides a simple recipe for detecting and certifying superhuman performance in this setting, which we believe will assist in understanding the stage of current research on classification. We validate the convergence of the bounds and the assumptions of our theory on carefully designed toy experiments with known oracles. Moreover, we demonstrate the utility of our theory by meta-analyzing large-scale natural language processing tasks, for which an oracle does not exist, and show that under our assumptions a number of models from recent years are with high probability superhuman. | cs.LG; cs.AI; cs.CL; cs.CV | 1 | ICLR-2023 | [-0.627796471118927, -0.19342216849327087, 0.1165633350610733, -0.9952383041381836, -0.0989808589220047, -0.4980052709579468, 1.0079305171966553, 0.7985635995864868, -1.2529585361480713, -0.6461376547813416, -0.6865808367729187, 0.9912590980529785, 1.0012619495391846, 0.9286454916000366, ...] | [0.38746213912963867, 0.7983626127243042, -0.465483695268631, -0.2771667540073395, -0.7336669564247131, 0.20463275909423828, 0.43938058614730835, -0.3774973154067993, -0.7192583084106445, 0.012030825018882751, -0.44885656237602234, 0.24673251807689667, 0.23798461258411407, 0.378736078739166, ...] | {} | null | null |
http://arxiv.org/abs/2109.08927v3 | 2021-09-18T00:00:00 | Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic | Zijun Wu; Zi Xuan Zhang; Atharva Naik; Zhijian Mei; Mauajama Firdaus; Lili Mou | Natural language inference (NLI) aims to determine the logical relationship between two sentences, such as Entailment, Contradiction, and Neutral. In recent years, deep learning models have become a prevailing approach to NLI, but they lack interpretability and explainability. In this work, we address the explainability of NLI by weakly supervised logical reasoning, and propose an Explainable Phrasal Reasoning (EPR) approach. Our model first detects phrases as the semantic unit and aligns corresponding phrases in the two sentences. Then, the model predicts the NLI label for the aligned phrases, and induces the sentence label by fuzzy logic formulas. Our EPR is almost everywhere differentiable and thus the system can be trained end to end. In this way, we are able to provide explicit explanations of phrasal logical relationships in a weakly supervised manner. We further show that such reasoning results help textual explanation generation. | cs.CL; cs.AI | 1 | ICLR-2023 | [-0.8300524353981018, 0.1335914134979248, -0.14983688294887543, -1.2059146165847778, 0.9257673621177673, -0.27508386969566345, 0.6222620606422424, 0.43002334237098694, -0.9093372821807861, -0.6404191851615906, -0.2351386994123459, 0.9622907638549805, 0.9073736071586609, 0.443074494600296, ...] | [-0.20205079019069672, 1.0816162824630737, -0.5535009503364563, -0.36582231521606445, 0.21182946860790253, -0.3293421268463135, 0.5623939037322998, -0.5598782300949097, -0.25929588079452515, 0.3051794767379761, 0.8873260617256165, -0.21310849487781525, -0.06748204678297043, -0.1853686273097, ...] | {} | null | null |
http://arxiv.org/abs/2110.03469v3 | 2021-10-07T00:00:00 | Federated Learning from Small Datasets | Michael Kamp; Jonas Fischer; Jilles Vreeken | Federated learning allows multiple parties to collaboratively train a joint model without sharing local data. This enables applications of machine learning in settings of inherently distributed, undisclosable data such as in the medical domain. In practice, joint training is usually achieved by aggregating local models, for which local training objectives have to be in expectation similar to the joint (global) objective. Often, however, local datasets are so small that local objectives differ greatly from the global objective, resulting in federated learning to fail. We propose a novel approach that intertwines model aggregations with permutations of local models. The permutations expose each local model to a daisy chain of local datasets resulting in more efficient training in data-sparse domains. This enables training on extremely small local datasets, such as patient data across hospitals, while retaining the training efficiency and privacy benefits of federated learning. | cs.LG; cs.AI; cs.DC | 1 | ICLR-2023 | [-1.5363843441009521, -0.08629953861236572, -0.1694578379392624, -0.9206295013427734, -1.454982042312622, 0.31158629059791565, 0.5065540671348572, 1.0529115200042725, -0.1575910598039627, 0.19283391535282135, -0.47505053877830505, 1.463444709777832, -0.09044815599918365, 0.6316537261009216, ...] | [0.2580144703388214, 0.5340262651443481, -0.9137656688690186, -0.04216305911540985, -0.8104877471923828, 0.26308900117874146, 0.5414389371871948, -0.2365342229604721, -0.12570001184940338, 0.3105313777923584, 0.20403927564620972, 0.03578343987464905, -0.3327956199645996, 0.00574435479938983, ...] | {} | null | null |
http://arxiv.org/abs/2110.06482v3 | 2021-10-13T00:00:00 | Parallel Deep Neural Networks Have Zero Duality Gap | Yifei Wang; Tolga Ergen; Mert Pilanci | Training deep neural networks is a challenging non-convex optimization problem. Recent work has proven that the strong duality holds (which means zero duality gap) for regularized finite-width two-layer ReLU networks and consequently provided an equivalent convex training problem. However, extending this result to deeper networks remains to be an open problem. In this paper, we prove that the duality gap for deeper linear networks with vector outputs is non-zero. In contrast, we show that the zero duality gap can be obtained by stacking standard deep networks in parallel, which we call a parallel architecture, and modifying the regularization. Therefore, we prove the strong duality and existence of equivalent convex problems that enable globally optimal training of deep networks. As a by-product of our analysis, we demonstrate that the weight decay regularization on the network parameters explicitly encourages low-rank solutions via closed-form expressions. In addition, we show that strong duality holds for three-layer standard ReLU networks given rank-1 data matrices. | cs.LG; math.OC | 1 | ICLR-2023 | [-1.058407187461853, -0.29233285784721375, 0.4192620813846588, -0.18232637643814087, -0.6868179440498352, 0.12445598840713501, -0.23904281854629517, -0.16116096079349518, -0.042825743556022644, 0.1785239577293396, -0.5131791234016418, 0.724135160446167, 0.2597047984600067, 0.739327788352966, ...] | [-0.20532585680484772, 1.0622442960739136, -0.30062419176101685, 0.4878557324409485, 0.010647688060998917, 0.3992810845375061, 0.39561599493026733, -0.5237253904342651, -0.8070563077926636, 0.2120181918144226, 0.39446601271629333, -0.3070833683013916, 0.35631972551345825, -0.075793527066707, ...] | {} | null | null |
http://arxiv.org/abs/2110.09022v3 | 2021-10-18T00:00:00 | Mitigating Memorization of Noisy Labels via Regularization between Representations | Hao Cheng; Zhaowei Zhu; Xing Sun; Yang Liu | Designing robust loss functions is popular in learning with noisy labels while existing designs did not explicitly consider the overfitting property of deep neural networks (DNNs). As a result, applying these losses may still suffer from overfitting/memorizing noisy labels as training proceeds. In this paper, we first theoretically analyze the memorization effect and show that a lower-capacity model may perform better on noisy datasets. However, it is non-trivial to design a neural network with the best capacity given an arbitrary task. To circumvent this dilemma, instead of changing the model architecture, we decouple DNNs into an encoder followed by a linear classifier and propose to restrict the function space of a DNN by a representation regularizer. Particularly, we require the distance between two self-supervised features to be positively related to the distance between the corresponding two supervised model outputs. Our proposed framework is easily extendable and can incorporate many other robust loss functions to further improve performance. Extensive experiments and theoretical analyses support our claims. Code is available at github.com/UCSC-REAL/SelfSup_NoisyLabel. | cs.LG; cs.CV | 1 | ICLR-2023 | [-1.0302304029464722, -0.9474754333496094, -0.3479882478713989, -0.8102068901062012, -0.43593496084213257, -0.0016027167439460754, 0.11463599652051926, 0.0781770646572113, -0.44465720653533936, -0.44583117961883545, -0.3435843884944916, 1.3039566278457642, 0.8592466115951538, 0.677990794181, ...] | [-0.031816497445106506, 0.9150803685188293, -0.5694997906684875, -0.3953262269496918, -0.6989397406578064, 0.06567439436912537, 0.09710860252380371, -0.20096762478351593, -0.5834112763404846, 0.009627312421798706, 0.32020103931427, 0.7483561635017395, 0.5472802519798279, 0.3362579643726349, ...] | {} | null | null |
http://arxiv.org/abs/2110.14053v7 | 2021-10-26T00:00:00 | NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks | Wenxi Wang; Yang Hu; Mohit Tiwari; Sarfraz Khurshid; Kenneth McMillan; Risto Miikkulainen | Propositional satisfiability (SAT) is an NP-complete problem that impacts many research fields, such as planning, verification, and security. Mainstream modern SAT solvers are based on the Conflict-Driven Clause Learning (CDCL) algorithm. Recent work aimed to enhance CDCL SAT solvers using Graph Neural Networks (GNNs). However, so far this approach either has not made solving more effective, or required substantial GPU resources for frequent online model inferences. Aiming to make GNN improvements practical, this paper proposes an approach called NeuroBack, which builds on two insights: (1) predicting phases (i.e., values) of variables appearing in the majority (or even all) of the satisfying assignments are essential for CDCL SAT solving, and (2) it is sufficient to query the neural model only once for the predictions before the SAT solving starts. Once trained, the offline model inference allows NeuroBack to execute exclusively on the CPU, removing its reliance on GPU resources. To train NeuroBack, a new dataset called DataBack containing 120,286 data samples is created. NeuroBack is implemented as an enhancement to a state-of-the-art SAT solver called Kissat. As a result, it allowed Kissat to solve up to 5.2% and 7.4% more problems on two recent SAT competition problem sets, SATCOMP-2022 and SATCOMP-2023, respectively. NeuroBack therefore shows how machine learning can be harnessed to improve SAT solving in an effective and practical manner. | cs.AI; cs.LG | 1 | ICLR-2024 | [-0.8903768062591553, -0.41487088799476624, -0.4256059229373932, -0.47814232110977173, 0.023142579942941666, -1.1934500932693481, -0.8287585377693176, 0.9886249899864197, -0.4325481355190277, 0.5225064158439636, -0.40574726462364197, 1.031727910041809, 0.7106152176856995, -0.430890172719955, ...] | [-0.17550450563430786, 0.9692082405090332, -0.593386173248291, 0.40233638882637024, -0.19327034056186676, -0.621910572052002, 0.4768993556499481, -0.2783958911895752, -0.10269017517566681, 0.3751489520072937, 0.011997204273939133, -0.5052996873855591, 0.1457352489233017, -0.3997919261455536, ...] | {"http://arxiv.org/abs/2307.04895v1": 0.9265872240066528, "http://arxiv.org/abs/2305.16373v1": 0.9063270688056946, "http://arxiv.org/abs/2302.04496v1": 0.9060431122779846, "http://arxiv.org/abs/2305.01623v1": 0.9046722650527954, "http://arxiv.org/abs/2307.11444v1": 0.900811493396759, "http://arxiv.org/abs/2309.08883v2": 0.8990209102630615, "http://arxiv.org/abs/2303.02588v1": 0.8969050049781799, "http://arxiv.org/abs/2308.11652v1": 0.8960661888122559, "http://arxiv.org/abs/2206.00702v10": 0.8957732319831848, "http://arxiv.org/abs/2303.01158v1": 0.8955315947532654} | 0.926587 | 0.902774 |
http://arxiv.org/abs/2110.15771v4 | 2021-10-29T00:00:00 | Collaborative Pure Exploration in Kernel Bandit | Yihan Du; Wei Chen; Yuko Kuroki; Longbo Huang | In this paper, we formulate a Collaborative Pure Exploration in Kernel Bandit problem (CoPE-KB), which provides a novel model for multi-agent multi-task decision making under limited communication and general reward functions, and is applicable to many online learning tasks, e.g., recommendation systems and network scheduling. We consider two settings of CoPE-KB, i.e., Fixed-Confidence (FC) and Fixed-Budget (FB), and design two optimal algorithms CoopKernelFC (for FC) and CoopKernelFB (for FB). Our algorithms are equipped with innovative and efficient kernelized estimators to simultaneously achieve computation and communication efficiency. Matching upper and lower bounds under both the statistical and communication metrics are established to demonstrate the optimality of our algorithms. The theoretical bounds successfully quantify the influences of task similarities on learning acceleration and only depend on the effective dimension of the kernelized feature space. Our analytical techniques, including data dimension decomposition, linear structured instance transformation and (communication) round-speedup induction, are novel and applicable to other bandit problems. Empirical evaluations are provided to validate our theoretical results and demonstrate the performance superiority of our algorithms. | cs.LG | 1 | ICLR-2023 | [-1.7217090129852295, -1.151424527168274, -0.15360094606876373, -0.5105953812599182, -0.9356390237808228, 0.5292217135429382, 0.1323830783367157, 0.8027742505073547, -0.6013379096984863, 0.46106892824172974, -0.9711248278617859, 1.2062393426895142, -0.476217120885849, 0.3435024321079254, ...] | [-0.17539052665233612, 0.5783328413963318, -1.042575716972351, 0.20101860165596008, -0.39957937598228455, 0.383195698261261, 0.366138219833374, -0.014661164954304695, -0.29801371693611145, 0.304705411195755, -0.3384164273738861, 0.2946053147315979, -0.48021388053894043, -0.03600706532597542, ...] | {} | null | null |
http://arxiv.org/abs/2111.00843v3 | 2021-11-01T00:00:00 | How I Learned to Stop Worrying and Love Retraining | Max Zimmer; Christoph Spiegel; Sebastian Pokutta | Many Neural Network Pruning approaches consist of several iterative training and pruning steps, seemingly losing a significant amount of their performance after pruning and then recovering it in the subsequent retraining phase. Recent works of Renda et al. (2020) and Le & Hua (2021) demonstrate the significance of the learning rate schedule during the retraining phase and propose specific heuristics for choosing such a schedule for IMP (Han et al., 2015). We place these findings in the context of the results of Li et al. (2020) regarding the training of models within a fixed training budget and demonstrate that, consequently, the retraining phase can be massively shortened using a simple linear learning rate schedule. Improving on existing retraining approaches, we additionally propose a method to adaptively select the initial value of the linear schedule. Going a step further, we propose similarly imposing a budget on the initial dense training phase and show that the resulting simple and efficient method is capable of outperforming significantly more complex or heavily parameterized state-of-the-art approaches that attempt to sparsify the network during training. These findings not only advance our understanding of the retraining phase, but more broadly question the belief that one should aim to avoid the need for retraining and reduce the negative effects of 'hard' pruning by incorporating the sparsification process into the standard training. | cs.LG | 1 | ICLR-2023 | [-0.33667945861816406, -0.6190099716186523, 0.13621409237384796, -0.40128442645072937, 0.10350176692008972, -0.24744431674480438, 0.42098045349121094, -0.08930817991495132, -0.2182232141494751, -0.38977426290512085, -0.7126115560531616, 0.47327083349227905, 0.5952352285385132, 0.67429584264, ...] | [-0.12531253695487976, 1.1151010990142822, -0.6867955923080444, 0.22527381777763367, 0.26998183131217957, 0.2755518853664398, 0.3986513316631317, -0.7222946286201477, -0.27234187722206116, 0.3048253655433655, 0.12872953712940216, -0.09312760084867477, -0.06691429018974304, 0.074278563261032, ...] | {} | null | null |
http://arxiv.org/abs/2111.13207v4 | 2021-11-25T00:00:00 | Characteristic Neural Ordinary Differential Equations | Xingzi Xu; Ali Hasan; Khalil Elkhalil; Jie Ding; Vahid Tarokh | We propose Characteristic-Neural Ordinary Differential Equations (C-NODEs), a framework for extending Neural Ordinary Differential Equations (NODEs) beyond ODEs. While NODEs model the evolution of a latent variable as the solution to an ODE, C-NODE models the evolution of the latent variables as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along curves on which the PDEs reduce to ODEs, referred to as characteristic curves. This in turn allows the application of the standard frameworks for solving ODEs, namely the adjoint method. Learning optimal characteristic curves for given tasks improves the performance and computational efficiency, compared to state of the art NODE models. We prove that the C-NODE framework extends the classical NODE on classification tasks by demonstrating explicit C-NODE representable functions not expressible by NODEs. Additionally, we present C-NODE-based continuous normalizing flows, which describe the density evolution of latent variables along multiple dimensions. Empirical results demonstrate the improvements provided by the proposed method for classification and density estimation on CIFAR-10, SVHN, and MNIST datasets under a similar computational budget as the existing NODE methods. The results also provide empirical evidence that the learned curves improve the efficiency of the system through a lower number of parameters and function evaluations compared with baselines. | cs.LG | 1 | ICLR-2023 | [-0.4861389994621277, -0.6137993931770325, 0.23991067707538605, -0.6049519777297974, -0.048575859516859055, 0.5901167392730713, 0.17924751341342926, 0.21116814017295837, -0.22397740185260773, 0.2770574390888214, -0.6892314553260803, 1.1790493726730347, 0.5316581130027771, 0.5112488865852356, ...] | [0.11811135709285736, 0.5943522453308105, 0.11706480383872986, 0.024384360760450363, 0.5181614756584167, 0.18885491788387299, 0.38857150077819824, -0.5525909066200256, -0.5331414341926575, -0.13619118928909302, 0.09800354391336441, 0.3910564184188843, 0.1615402102470398, 0.5007529258728027, ...] | {} | null | null |
http://arxiv.org/abs/2111.13802v4 | 2021-11-27T00:00:00 | Factorized Fourier Neural Operators | Alasdair Tran; Alexander Mathews; Lexing Xie; Cheng Soon Ong | We propose the Factorized Fourier Neural Operator (F-FNO), a learning-based approach for simulating partial differential equations (PDEs). Starting from a recently proposed Fourier representation of flow fields, the F-FNO bridges the performance gap between pure machine learning approaches to that of the best numerical or hybrid solvers. This is achieved with new representations - separable spectral layers and improved residual connections - and a combination of training strategies such as the Markov assumption, Gaussian noise, and cosine learning rate decay. On several challenging benchmark PDEs on regular grids, structured meshes, and point clouds, the F-FNO can scale to deeper networks and outperform both the FNO and the geo-FNO, reducing the error by 83% on the Navier-Stokes problem, 31% on the elasticity problem, 57% on the airfoil flow problem, and 60% on the plastic forging problem. Compared to the state-of-the-art pseudo-spectral method, the F-FNO can take a step size that is an order of magnitude larger in time and achieve an order of magnitude speedup to produce the same solution quality. | cs.LG; cs.CE | 1 | ICLR-2023 | [-1.19281804561615, -0.4808935523033142, 0.7445935010910034, 0.7146683931350708, -0.1326790153980255, -0.6604366898536682, 0.017765043303370476, 0.36254969239234924, -0.7570965886116028, 0.5700566172599792, -0.347946435213089, 1.0891600847244263, -0.5149797797203064, -0.04100674390792847, ...] | [0.5401683449745178, 0.5795854926109314, -0.24608975648880005, 0.5687857866287231, 0.2297108918428421, 0.336954802274704, 0.029500026255846024, -0.5479587316513062, -0.2559584975242615, -0.06887255609035492, 0.5221386551856995, 0.13013482093811035, -0.04413037374615669, -0.4221070408821106, ...] | {} | null | null |
http://arxiv.org/abs/2112.00725v4 | 2021-12-01T00:00:00 | The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image | Yuki M. Asano; Aaqib Saeed | What can neural networks learn about the visual world when provided with only a single image as input? While any image obviously cannot contain the multitudes of all existing objects, scenes and lighting conditions - within the space of all 256^(3x224x224) possible 224-sized square images, it might still provide a strong prior for natural images. To analyze this 'augmented image prior' hypothesis, we develop a simple framework for training neural networks from scratch using a single image and augmentations using knowledge distillation from a supervised pretrained teacher. With this, we find the answer to the above question to be: 'surprisingly, a lot'. In quantitative terms, we find accuracies of 94%/74% on CIFAR-10/100, 69% on ImageNet, and by extending this method to video and audio, 51% on Kinetics-400 and 84% on SpeechCommands. In extensive analyses spanning 13 datasets, we disentangle the effect of augmentations, choice of data and network architectures and also provide qualitative evaluations that include lucid 'panda neurons' in networks that have never even seen one. | cs.CV | 1 | ICLR-2023 | [0.15512216091156006, -0.5946708917617798, -0.03507710620760918, -0.6629677414894104, 0.1727249026298523, -0.25644510984420776, 0.5951715707778931, 0.44590744376182556, -0.6743816137313843, -0.6580297946929932, -0.2720840275287628, 1.1771727800369263, 1.0160194635391235, 0.3174975514411926, ...] | [0.591668963432312, 0.8206729292869568, -0.1564185470342636, -0.05629964917898178, 0.03847825154662132, 0.07676701247692108, 0.8748428225517273, -0.5511417388916016, -0.6156476140022278, -0.1839110106229782, 0.5340005159378052, 0.3299903869628906, 0.7063827514648438, -0.007221449166536331, ...] | {} | null | null |
http://arxiv.org/abs/2112.03860v5 | 2021-12-07T00:00:00 | Differentiable Gaussianization Layers for Inverse Problems Regularized by Deep Generative Models | Dongzhuo Li | Deep generative models such as GANs, normalizing flows, and diffusion models are powerful regularizers for inverse problems. They exhibit great potential for helping reduce ill-posedness and attain high-quality results. However, the latent tensors of such deep generative models can fall out of the desired high-dimensional standard Gaussian distribution during inversion, particularly in the presence of data noise and inaccurate forward models, leading to low-fidelity solutions. To address this issue, we propose to reparameterize and Gaussianize the latent tensors using novel differentiable data-dependent layers wherein custom operators are defined by solving optimization problems. These proposed layers constrain inverse problems to obtain high-fidelity in-distribution solutions. We validate our technique on three inversion tasks: compressive-sensing MRI, image deblurring, and eikonal tomography (a nonlinear PDE-constrained inverse problem) using two representative deep generative models: StyleGAN2 and Glow. Our approach achieves state-of-the-art performance in terms of accuracy and consistency. | cs.CV; cs.LG | 1 | ICLR-2023 | [-0.5564066171646118, -0.14359618723392487, 1.0876919031143188, -0.032149262726306915, -1.3736157417297363, 0.3127385973930359, 1.276926040649414, -0.8661171793937683, 0.5458321571350098, -0.30037808418273926, 0.1427505910396576, 0.778704047203064, 0.47106441855430603, 0.1850612461566925, ...] | [0.6837103366851807, 0.825980007648468, 0.248992457985878, 0.18415844440460205, -0.011829778552055359, 0.09525342285633087, 0.34993353486061096, -0.9325260519981384, -0.11027549207210541, -0.5739105939865112, 0.9140637516975403, -0.07754230499267578, -0.24065950512886047, -0.169603139162063, ...] | {} | null | null |
http://arxiv.org/abs/2112.03740v4 | 2021-12-07T00:00:00 | Dilated convolution with learnable spacings | Ismail Khalfaoui-Hassani; Thomas Pellegrini; Timothée Masquelier | Recent works indicate that convolutional neural networks (CNN) need large receptive fields (RF) to compete with visual transformers and their attention mechanism. In CNNs, RFs can simply be enlarged by increasing the convolution kernel sizes. Yet the number of trainable parameters, which scales quadratically with the kernel's size in the 2D case, rapidly becomes prohibitive, and the training is notoriously difficult. This paper presents a new method to increase the RF size without increasing the number of parameters. The dilated convolution (DC) has already been proposed for the same purpose. DC can be seen as a convolution with a kernel that contains only a few non-zero elements placed on a regular grid. Here we present a new version of the DC in which the spacings between the non-zero elements, or equivalently their positions, are no longer fixed but learnable via backpropagation thanks to an interpolation technique. We call this method "Dilated Convolution with Learnable Spacings" (DCLS) and generalize it to the n-dimensional convolution case. However, our main focus here will be on the 2D case. We first tried our approach on ResNet50: we drop-in replaced the standard convolutions with DCLS ones, which increased the accuracy of ImageNet1k classification at iso-parameters, but at the expense of the throughput. Next, we used the recent ConvNeXt state-of-the-art convolutional architecture and drop-in replaced the depthwise convolutions with DCLS ones. This not only increased the accuracy of ImageNet1k classification but also of typical downstream and robustness tasks, again at iso-parameters but this time with negligible cost on throughput, as ConvNeXt uses separable convolutions. Conversely, classic DC led to poor performance with both ResNet50 and ConvNeXt. The code of the method is available at: https://github.com/K-H-Ismail/Dilated-Convolution-with-Learnable-Spacings-PyTorch. | cs.CV; cs.AI; cs.NE | 1 | ICLR-2023 | [-0.2874498963356018, -0.7819046974182129, 0.4018572270870209, -0.5669637322425842, 0.06544767320156097, -0.2990022897720337, 0.4548012912273407, 0.06125593185424805, -0.7647283673286438, -0.5242921710014343, -0.026501160115003586, 1.5728662014007568, 1.268166422843933, 0.31375452876091003,... | [0.2087414562702179, 0.900306761264801, -0.25126662850379944, 0.04541910067200661, 0.03701401129364967, -0.09991489350795746, 0.5880476832389832, -0.44429436326026917, -0.8442961573600769, -0.2993469834327698, 0.5873778462409973, 0.14515292644500732, 0.1908106803894043, 0.03198343515396118,... | {} | null | null |
http://arxiv.org/abs/2112.10769v4 | 2021-12-19T00:00:00 | Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats | Brian Chmiel; Ron Banner; Elad Hoffer; Hilla Ben Yaacov; Daniel Soudry | Quantization of the weights and activations is one of the main methods to reduce the computational footprint of Deep Neural Networks (DNNs) training. Current methods enable 4-bit quantization of the forward phase. However, this constitutes only a third of the training process. Reducing the computational footprint of the entire training process requires the quantization of the neural gradients, i.e., the loss gradients with respect to the outputs of intermediate neural layers. Previous works separately showed that accurate 4-bit quantization of the neural gradients needs to (1) be unbiased and (2) have a log scale. However, no previous work aimed to combine both ideas, as we do in this work. Specifically, we examine the importance of having unbiased quantization in quantized neural network training, where to maintain it, and how to combine it with logarithmic quantization. Based on this, we suggest a $\textit{logarithmic unbiased quantization}$ (LUQ) method to quantize both the forward and backward phases to 4-bit, achieving state-of-the-art results in 4-bit training without the overhead. For example, in ResNet50 on ImageNet, we achieved a degradation of 1.1%. We further improve this to a degradation of only 0.32% after three epochs of high precision fine-tuning, combined with a variance reduction method -- where both these methods add overhead comparable to previously suggested methods. | cs.LG | 1 | ICLR-2023 | [0.19110608100891113, -1.0078840255737305, 0.17975498735904694, -0.6353636384010315, 0.1903609186410904, -0.13112609088420868, 0.20796610414981842, 0.03520878776907921, -0.48901310563087463, -0.5137426853179932, -0.30035072565078735, 1.1627472639083862, 1.4431135654449463, 0.364233613014221... | [0.19403500854969025, 0.6511404514312744, -0.21655653417110443, -0.2469656765460968, -0.23087729513645172, 0.49729984998703003, 0.05090794712305069, -0.30224424600601196, -0.8020504713058472, 0.024510163813829422, 0.3059993386268616, 0.1925351768732071, 0.7351198792457581, -0.25591668486595... | {} | null | null |
http://arxiv.org/abs/2201.02658v2 | 2022-01-07T00:00:00 | Fair and efficient contribution valuation for vertical federated learning | Zhenan Fan; Huang Fang; Xinglu Wang; Zirui Zhou; Jian Pei; Michael P. Friedlander; Yong Zhang | Federated learning is an emerging technology for training machine learning models across decentralized data sources without sharing data. Vertical federated learning, also known as feature-based federated learning, applies to scenarios where data sources have the same sample IDs but different feature sets. To ensure fairness among data owners, it is critical to objectively assess the contributions from different data sources and compensate the corresponding data owners accordingly. The Shapley value is a provably fair contribution valuation metric originating from cooperative game theory. However, its straightforward computation requires extensively retraining a model on each potential combination of data sources, leading to prohibitively high communication and computation overheads due to multiple rounds of federated learning. To tackle this challenge, we propose a contribution valuation metric called vertical federated Shapley value (VerFedSV) based on the classic Shapley value. We show that VerFedSV not only satisfies many desirable properties of fairness but is also efficient to compute. Moreover, VerFedSV can be adapted to both synchronous and asynchronous vertical federated learning algorithms. Both theoretical analysis and extensive experimental results demonstrate the fairness, efficiency, adaptability, and effectiveness of VerFedSV. | cs.LG | 1 | ICLR-2024 | [-1.5859179496765137, -1.038097858428955, -0.2643173635005951, -0.6301515698432922, -1.1555774211883545, 0.4734838008880615, 0.3108305335044861, 0.6467308402061462, -0.6556019186973572, 0.3516573905944824, -1.0687142610549927, 1.3581231832504272, -0.38086798787117004, 0.48080453276634216,... | [0.31971102952957153, 0.3439103662967682, -0.7165533304214478, 0.2317647933959961, -0.4927873909473419, 0.37872421741485596, 0.3620024621486664, 0.2801792621612549, -0.5009266138076782, 0.531306266784668, 0.18888579308986664, 0.26930201053619385, -0.2900039553642273, -0.16627566516399384,... | {"http://arxiv.org/abs/2308.11841v2": 0.9350095987319946, "http://arxiv.org/abs/2309.01098v4": 0.9320402145385742, "http://arxiv.org/abs/2309.00416v1": 0.9308669567108154, "http://arxiv.org/abs/2309.02160v1": 0.9295060038566589, "http://arxiv.org/abs/2309.10283v3": 0.9285225868225098, "http://arxiv.org/abs/2303.00250v1": 0.9275097250938416, "http://arxiv.org/abs/2110.03469v3": 0.9272910356521606, "http://arxiv.org/abs/2302.04228v1": 0.923570454120636, "http://arxiv.org/abs/2309.15348v1": 0.923420786857605, "http://arxiv.org/abs/2308.15709v2": 0.9214820861816406} | 0.93501 | 0.927922 |
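
The row above is one of the few in this section with a populated `top_10_similar` field. Its `max_similarity` (0.93501) and `avg_similarity` (0.927922) appear to be consistent with the maximum and mean of the ten listed scores; the following sketch checks that relationship, assuming only that the field is standard JSON mapping arXiv URLs to similarity scores.

```python
import json

# top_10_similar exactly as it appears in the row above (arXiv URL -> score).
top_10_similar = json.loads('''{
  "http://arxiv.org/abs/2308.11841v2": 0.9350095987319946,
  "http://arxiv.org/abs/2309.01098v4": 0.9320402145385742,
  "http://arxiv.org/abs/2309.00416v1": 0.9308669567108154,
  "http://arxiv.org/abs/2309.02160v1": 0.9295060038566589,
  "http://arxiv.org/abs/2309.10283v3": 0.9285225868225098,
  "http://arxiv.org/abs/2303.00250v1": 0.9275097250938416,
  "http://arxiv.org/abs/2110.03469v3": 0.9272910356521606,
  "http://arxiv.org/abs/2302.04228v1": 0.923570454120636,
  "http://arxiv.org/abs/2309.15348v1": 0.923420786857605,
  "http://arxiv.org/abs/2308.15709v2": 0.9214820861816406
}''')

scores = list(top_10_similar.values())
print(round(max(scores), 6))                # 0.93501  (row's max_similarity)
print(round(sum(scores) / len(scores), 6))  # 0.927922 (row's avg_similarity)
```
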
http://arxiv.org/abs/2201.08115v2 | 2022-01-20T00:00:00 | Priors, Hierarchy, and Information Asymmetry for Skill Transfer in Reinforcement Learning | Sasha Salter; Kristian Hartikainen; Walter Goodwin; Ingmar Posner | The ability to discover behaviours from past experience and transfer them to new tasks is a hallmark of intelligent agents acting sample-efficiently in the real world. Equipping embodied reinforcement learners with the same ability may be crucial for their successful deployment in robotics. While hierarchical and KL-regularized reinforcement learning individually hold promise here, arguably a hybrid approach could combine their respective benefits. Key to these fields is the use of information asymmetry across architectural modules to bias which skills are learnt. While asymmetry choice has a large influence on transferability, existing methods base their choice primarily on intuition in a domain-independent, potentially sub-optimal, manner. In this paper, we theoretically and empirically show the crucial expressivity-transferability trade-off of skills across sequential tasks, controlled by information asymmetry. Given this insight, we introduce Attentive Priors for Expressive and Transferable Skills (APES), a hierarchical KL-regularized method, heavily benefiting from both priors and hierarchy. Unlike existing approaches, APES automates the choice of asymmetry by learning it in a data-driven, domain-dependent, way based on our expressivity-transferability theorems. Experiments over complex transfer domains of varying levels of extrapolation and sparsity, such as robot block stacking, demonstrate the criticality of the correct asymmetric choice, with APES drastically outperforming previous methods. | cs.AI; cs.LG; cs.RO; stat.ML | 1 | ICLR-2023 | [-0.7081420421600342, -0.6275913119316101, 0.43127045035362244, -0.39825770258903503, -0.10652593523263931, 0.11577363312244415, 0.106725312769413, 0.72978276014328, -0.5602959394454956, -0.2815425395965576, -1.0246329307556152, 0.7191545367240906, 0.10311762988567352, -0.036933425813913345... | [0.4063950181007385, 0.7219462990760803, -0.5769445300102234, -0.008996643126010895, -0.19955483078956604, 0.379772424697876, 0.2268712818622589, -0.3926311731338501, -0.32220226526260376, 0.5056697130203247, 0.22245368361473083, 0.07516631484031677, -0.525570273399353, 0.29764413833618164,... | {} | null | null |
http://arxiv.org/abs/2201.12220v3 | 2022-01-28T00:00:00 | Neural Optimal Transport | Alexander Korotin; Daniil Selikhanovych; Evgeny Burnaev | We present a novel neural-networks-based algorithm to compute optimal transport maps and plans for strong and weak transport costs. To justify the usage of neural networks, we prove that they are universal approximators of transport plans between probability distributions. We evaluate the performance of our optimal transport algorithm on toy examples and on the unpaired image-to-image translation. | cs.LG | 1 | ICLR-2023 | [-0.6778312921524048, -0.34493786096572876, -0.6627939343452454, -0.39195069670677185, -1.3737679719924927, -0.19994667172431946, 0.04966575652360916, 0.4209274351596832, -0.3141928017139435, -0.29206740856170654, -0.5178566575050354, 0.9929836988449097, -0.109140545129776, 0.02945991791784... | [0.21483488380908966, 0.5898095369338989, -0.8982083797454834, 0.06082248315215111, -0.23129872977733612, 0.14827746152877808, 0.31051620841026306, -0.15044958889484406, -0.6839742064476013, -0.0001252293586730957, 0.28393393754959106, -0.18824252486228943, -0.303832471370697, -0.4430194199... | {} | null | null |
http://arxiv.org/abs/2201.11945v3 | 2022-01-28T00:00:00 | Learning Proximal Operators to Discover Multiple Optima | Lingxiao Li; Noam Aigerman; Vladimir G. Kim; Jiajin Li; Kristjan Greenewald; Mikhail Yurochkin; Justin Solomon | Finding multiple solutions of non-convex optimization problems is a ubiquitous yet challenging task. Most past algorithms either apply single-solution optimization methods from multiple random initial guesses or search in the vicinity of found solutions using ad hoc heuristics. We present an end-to-end method to learn the proximal operator of a family of training problems so that multiple local minima can be quickly obtained from initial guesses by iterating the learned operator, emulating the proximal-point algorithm that has fast convergence. The learned proximal operator can be further generalized to recover multiple optima for unseen problems at test time, enabling applications such as object detection. The key ingredient in our formulation is a proximal regularization term, which elevates the convexity of our training loss: by applying recent theoretical results, we show that for weakly-convex objectives with Lipschitz gradients, training of the proximal operator converges globally with a practical degree of over-parameterization. We further present an exhaustive benchmark for multi-solution optimization to demonstrate the effectiveness of our method. | cs.LG | 1 | ICLR-2023 | [-1.5523216724395752, -0.3061284124851227, 0.1436404436826706, 0.11069557815790176, -1.2499645948410034, 0.14942729473114014, -0.22136563062667847, 0.29444968700408936, -0.3916708528995514, -0.23278768360614777, -0.2927606403827667, 1.3847274780273438, 0.2898196578025818, 0.5587531328201294... | [-0.0034405291080474854, 0.6961774826049805, -0.5085241794586182, 0.06839021295309067, -0.5174151659011841, 0.41086187958717346, -0.26059433817863464, -0.1436133086681366, -0.4053150713443756, -0.42505621910095215, 0.20919035375118256, 0.28172850608825684, -0.17065076529979706, 0.2441045492... | {} | null | null |
http://arxiv.org/abs/2201.12293v4 | 2022-01-28T00:00:00 | Understanding Why Generalized Reweighting Does Not Improve Over ERM | Runtian Zhai; Chen Dan; Zico Kolter; Pradeep Ravikumar | Empirical risk minimization (ERM) is known in practice to be non-robust to distributional shift where the training and the test distributions are different. A suite of approaches, such as importance weighting, and variants of distributionally robust optimization (DRO), have been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to that obtained by ERM. We also show that adding small regularization which does not greatly affect the empirical training accuracy does not help. Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches. | cs.LG; stat.ML | 1 | ICLR-2023 | [-1.7533289194107056, -0.6596031188964844, -0.00027964264154434204, -0.5115602612495422, -1.322196364402771, -0.01785004884004593, 0.1720696985721588, -0.04222259670495987, -0.6694986820220947, 0.031389348208904266, -0.639186441898346, 1.087585210800171, 0.3526817858219147, 0.84618675708770... | [0.2679014801979065, 0.7880616188049316, -0.27073559165000916, -0.3122890591621399, -0.7548819184303284, 0.2964059114456177, 0.22570623457431793, -0.6844972372055054, -0.5950379967689514, -0.07299284636974335, 0.09047961235046387, 0.3526151776313782, -0.26469799876213074, 0.2511321008205414... | {} | null | null |
http://arxiv.org/abs/2201.12675v2 | 2022-01-29T00:00:00 | Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models | Liam Fowl; Jonas Geiping; Steven Reich; Yuxin Wen; Wojtek Czaja; Micah Goldblum; Tom Goldstein | A central tenet of Federated learning (FL), which trains models without centralizing user data, is privacy. However, previous work has shown that the gradient updates used in FL can leak user information. While most industrial uses of FL are for text applications (e.g. keystroke prediction), nearly all attacks on FL privacy have focused on simple image classifiers. We propose a novel attack that reveals private user text by deploying malicious parameter vectors, and which succeeds even with mini-batches, multiple users, and long sequences. Unlike previous attacks on FL, the attack exploits characteristics of both the Transformer architecture and the token embedding, separately extracting tokens and positional embeddings to retrieve high-fidelity text. This work suggests that FL on text, which has historically been resistant to privacy attacks, is far more vulnerable than previously thought. | cs.LG; cs.CL; cs.CR | 1 | ICLR-2023 | [-0.19340406358242035, -0.9732610583305359, -0.6098537445068359, -0.923269510269165, -1.304764986038208, -0.5726274251937866, 1.1780602931976318, 0.5850715041160583, -0.9237738251686096, -1.0172626972198486, -0.4483342468738556, 0.41797393560409546, 0.2573239505290985, 0.8561705946922302,... | [0.3554377555847168, 0.048950307071208954, -0.5062136650085449, 0.3713284730911255, -0.745995044708252, -0.5943827033042908, 0.9762506484985352, -0.11916951835155487, -0.5770623087882996, -0.4598919749259949, -0.04337545111775398, -0.38247907161712646, 0.368621826171875, 0.09288212656974792... | {} | null | null |
http://arxiv.org/abs/2201.12558v6 | 2022-01-29T00:00:00 | The KFIoU Loss for Rotated Object Detection | Xue Yang; Yue Zhou; Gefan Zhang; Jirui Yang; Wentao Wang; Junchi Yan; Xiaopeng Zhang; Qi Tian | Unlike the well-developed horizontal object detection area, where the computing-friendly IoU-based loss is readily adopted and fits well with the detection metrics, rotation detectors often involve a more complicated loss based on SkewIoU, which is unfriendly to gradient-based training. In this paper, we propose an effective approximate SkewIoU loss based on Gaussian modeling and Gaussian product, which mainly consists of two terms. The first term is a scale-insensitive center point loss, used to quickly narrow the distance between the center points of the two bounding boxes. In the distance-independent second term, the product of the Gaussian distributions is adopted to inherently mimic the mechanism of SkewIoU by its definition, and is shown to align with the SkewIoU loss at trend level within a certain distance (i.e. within 9 pixels). This is in contrast to recent Gaussian-modeling-based rotation detectors, e.g. GWD loss and KLD loss, which involve a human-specified distribution distance metric requiring additional hyperparameter tuning that varies across datasets and detectors. The resulting new loss, called KFIoU loss, is easier to implement and works better than the exact SkewIoU loss, thanks to its full differentiability and ability to handle non-overlapping cases. We further extend our technique to the 3-D case, which suffers from the same issues as 2-D. Extensive results on various public datasets (2-D/3-D, aerial/text/face images) with different base detectors show the effectiveness of our approach. | cs.CV; cs.AI; cs.LG | 1 | ICLR-2023 | [-1.4669133424758911, -0.49556392431259155, -0.5889966487884521, -1.084528923034668, -0.019773777574300766, 0.22593426704406738, 0.5086915493011475, -0.22116826474666595, -0.4478500187397003, -0.911320686340332, -0.3234057128429413, 2.181800365447998, 0.8961095213890076, 0.12375152856111526... | [-0.43261730670928955, 0.6541479229927063, -0.5825808048248291, -0.046884819865226746, 0.15144331753253937, 0.5529007911682129, 0.43949681520462036, -0.7470494508743286, -0.2863878309726715, -1.0519639253616333, 0.3408104479312897, 0.624732255935669, 0.5482081174850464, -0.2505560517311096,... | {} | null | null |
http://arxiv.org/abs/2202.00395v2 | 2022-02-01T00:00:00 | Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification | Takashi Ishida; Ikko Yamane; Nontawat Charoenphakdee; Gang Niu; Masashi Sugiyama | There is a fundamental limitation in the prediction performance that a machine learning model can achieve due to the inevitable uncertainty of the prediction target. In classification problems, this can be characterized by the Bayes error, which is the best achievable error with any classifier. The Bayes error can be used as a criterion to evaluate classifiers with state-of-the-art performance and can be used to detect test set overfitting. We propose a simple and direct Bayes error estimator, where we just take the mean of the labels that show \emph{uncertainty} of the class assignments. Our flexible approach enables us to perform Bayes error estimation even for weakly supervised data. In contrast to others, our method is model-free and even instance-free. Moreover, it has no hyperparameters and gives a more accurate estimate of the Bayes error than several baselines empirically. Experiments using our method suggest that recently proposed deep networks such as the Vision Transformer may have reached, or be about to reach, the Bayes error for benchmark datasets. Finally, we discuss how we can study the inherent difficulty of the acceptance/rejection decision for scientific articles, by estimating the Bayes error of the ICLR papers from 2017 to 2023. | cs.LG; stat.ML | 1 | ICLR-2023 | [-0.7139066457748413, -0.7032172083854675, -0.012453864328563213, -0.530734658241272, -0.49027031660079956, -0.38997915387153625, 0.7502502799034119, -0.2892058789730072, -0.9204798936843872, -0.6510615348815918, -0.3687654435634613, 1.555830478668213, 1.3456878662109375, 0.9254526495933533... | [0.2714177966117859, 1.1587104797363281, -0.3232252597808838, 0.28831934928894043, -0.10980039089918137, 0.14988623559474945, 0.9548841118812561, -0.8313148021697998, -0.26023685932159424, 0.05205937474966049, 0.11396659910678864, -0.09891268610954285, 0.6535571217536926, 0.0052095074206590... | {} | null | null |
http://arxiv.org/abs/2202.01344v1 | 2022-02-03T00:00:00 | Formal Mathematics Statement Curriculum Learning | Stanislas Polu; Jesse Michael Han; Kunhao Zheng; Mantas Baksys; Igor Babuschkin; Ilya Sutskever | We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that at the same compute budget, expert iteration, by which we mean proof search interleaved with learning, dramatically outperforms proof search only. We also observe that when applied to a collection of formal statements of sufficiently varied difficulty, expert iteration is capable of finding and solving a curriculum of increasingly difficult problems, without the need for associated ground-truth proofs. Finally, by applying this expert iteration to a manually curated set of problem statements, we achieve state-of-the-art on the miniF2F benchmark, automatically solving multiple challenging problems drawn from high school olympiads. | cs.LG; cs.AI | 1 | ICLR-2023 | [-0.5254300832748413, -0.9129440784454346, 0.9453426003456116, -0.5308396816253662, -0.790184497833252, -1.1267380714416504, 0.4144619405269623, 0.6733044981956482, -0.7510125637054443, -0.45028048753738403, -0.6326760649681091, -0.20147567987442017, 0.2707490921020508, -0.28004327416419983... | [0.173925518989563, 0.645684540271759, -0.36973562836647034, 0.07781651616096497, -0.6361886858940125, -0.5157780647277832, 0.46361690759658813, -0.27620434761047363, -0.0638299286365509, 0.16908128559589386, -0.097954660654068, -0.6542993783950806, -0.205595001578331, 0.13702568411827087,... | {} | null | null |
http://arxiv.org/abs/2202.04414v2 | 2022-02-09T00:00:00 | Agree to Disagree: Diversity through Disagreement for Better Transferability | Matteo Pagliardini; Martin Jaggi; François Fleuret; Sai Praneeth Karimireddy | Gradient-based learning algorithms have an implicit simplicity bias which in effect can limit the diversity of predictors being sampled by the learning procedure. This behavior can hinder the transferability of trained models by (i) favoring the learning of simpler but spurious features -- present in the training data but absent from the test data -- and (ii) by only leveraging a small subset of predictive features. Such an effect is especially magnified when the test distribution does not exactly match the train distribution -- referred to as the Out of Distribution (OOD) generalization problem. However, given only the training data, it is not always possible to assess a priori whether a given feature is spurious or transferable. Instead, we advocate for learning an ensemble of models which capture a diverse set of predictive features. Towards this, we propose a new algorithm D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data, but disagreement on the OOD data. We show how D-BAT naturally emerges from the notion of generalized discrepancy, as well as demonstrate in multiple experiments how the proposed method can mitigate shortcut-learning, enhance uncertainty and OOD detection, as well as improve transferability. | cs.LG | 1 | ICLR-2023 | [-1.590965747833252, -0.7901101112365723, -0.10641062259674072, -0.34412509202957153, -1.0083138942718506, -0.40787824988365173, 0.3261750042438507, 0.11195135116577148, -0.8706613183021545, -0.3687722682952881, -0.3588094413280487, 0.9406529068946838, 0.3918636739253998, 0.7425159215927124... | [-0.028447188436985016, 0.7315566539764404, -0.8150596618652344, -0.2939118444919586, -0.9601038098335266, -0.08301481604576111, 0.22233350574970245, -0.19146589934825897, 0.15274018049240112, 0.14195242524147034, -0.20232777297496796, -0.0891188308596611, -0.03695657476782799, 0.4999101758... | {} | null | null |
http://arxiv.org/abs/2202.07477v2 | 2022-02-14T00:00:00 | Understanding DDPM Latent Codes Through Optimal Transport | Valentin Khrulkov; Gleb Ryzhakov; Andrei Chertkov; Ivan Oseledets | Diffusion models have recently outperformed alternative approaches to model the distribution of natural images, such as GANs. Such diffusion models allow for deterministic sampling via the probability flow ODE, giving rise to a latent space and an encoder map. While having important practical applications, such as estimation of the likelihood, the theoretical properties of this map are not yet fully understood. In the present work, we partially address this question for the popular case of the VP SDE (DDPM) approach. We show that, perhaps surprisingly, the DDPM encoder map coincides with the optimal transport map for common distributions; we support this claim theoretically and by extensive numerical experiments. | stat.ML; cs.AI; cs.LG; cs.NA; math.AP; math.NA | 1 | ICLR-2023 | [-0.9056518077850342, -0.14059904217720032, -0.3337489068508148, -0.2713302671909332, -1.596039056777954, 0.2623334228992462, 0.3300396203994751, 0.5386406779289246, 0.16806858777999878, -0.26750481128692627, -0.44559502601623535, 1.110391616821289, 0.19760490953922272, 0.384414404630661,... | [0.6796332597732544, 0.8686394691467285, -0.6292385458946228, 0.06093332916498184, -0.13157349824905396, 0.13217675685882568, 0.7854995727539062, -0.3096361756324768, -0.024784209206700325, -0.49675822257995605, 0.44546636939048767, -0.0013496987521648407, 0.08250485360622406, 0.16987997293... | {} | null | null |
http://arxiv.org/abs/2202.06854v3 | 2022-02-14T00:00:00 | Random Laplacian Features for Learning with Hyperbolic Space | Tao Yu; Christopher De Sa | Due to its geometric properties, hyperbolic space can support high-fidelity embeddings of tree- and graph-structured data, upon which various hyperbolic networks have been developed. Existing hyperbolic networks encode geometric priors not only for the input, but also at every layer of the network. This approach involves repeatedly mapping to and from hyperbolic space, which makes these networks complicated to implement, computationally expensive to scale, and numerically unstable to train. In this paper, we propose a simpler approach: learn a hyperbolic embedding of the input, then map once from it to Euclidean space using a mapping that encodes geometric priors by respecting the isometries of hyperbolic space, and finish with a standard Euclidean network. The key insight is to use a random feature mapping via the eigenfunctions of the Laplace operator, which we show can approximate any isometry-invariant kernel on hyperbolic space. Our method can be used together with any graph neural network: using even a linear graph model yields significant improvements in both efficiency and performance over other hyperbolic baselines in both transductive and inductive tasks. | cs.LG | 1 | ICLR-2023 | [-1.3979318141937256, -0.5921139121055603, -0.040899720042943954, -0.3177163302898407, -0.3223423659801483, -0.3177720606327057, -0.03685048222541809, 0.3970574140548706, -0.4477924108505249, -0.3641487956047058, -0.34556373953819275, 1.2356984615325928, 0.45331743359565735, 0.4287515878677... | [0.04078194126486778, 1.2544338703155518, -0.5263597369194031, -0.017076537013053894, -0.30055034160614014, -0.23683349788188934, 0.04962766915559769, -0.3299592137336731, -0.320221483707428, -0.34124523401260376, 0.6491957902908325, 0.12957699596881866, 0.02451619878411293, -0.329343527555... | {} | null | null |
http://arxiv.org/abs/2202.07646v3 | 2022-02-15T00:00:00 | Quantifying Memorization Across Neural Language Models | Nicholas Carlini; Daphne Ippolito; Matthew Jagielski; Katherine Lee; Florian Tramer; Chiyuan Zhang | Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized training data verbatim. This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others). We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data. Memorization significantly grows as we increase (1) the capacity of a model, (2) the number of times an example has been duplicated, and (3) the number of tokens of context used to prompt the model. Surprisingly, we find the situation becomes more complicated when generalizing these results across model families. On the whole, we find that memorization in LMs is more prevalent than previously believed and will likely get worse as models continue to scale, at least without active mitigations. | cs.LG; cs.CL | 1 | ICLR-2023 | [-0.00798702985048294, -0.5372389554977417, -0.5793280601501465, -1.059024453163147, 0.36004284024238586, -0.5869627594947815, 0.9351333379745483, 0.781273365020752, -0.87441086769104, -1.0448126792907715, -0.23360060155391693, 0.1811702698469162, 1.1172983646392822, 0.5594529509544373, -... | [-0.004626907408237457, 0.643699586391449, -0.477456659078598, -0.27551111578941345, -0.4399971663951874, -0.31889820098876953, 1.0368074178695679, -0.11534605175256729, -0.7793153524398804, -0.13467688858509064, 0.2597622871398926, -0.14021722972393036, 0.3795054852962494, 0.31852582097053... | {} | null | null |
http://arxiv.org/abs/2202.09931v2 | 2022-02-20T00:00:00 | Deconstructing Distributions: A Pointwise Framework of Learning | Gal Kaplun; Nikhil Ghosh; Saurabh Garg; Boaz Barak; Preetum Nakkiran | In machine learning, we traditionally evaluate the performance of a single model, averaged over a collection of test inputs. In this work, we propose a new approach: we measure the performance of a collection of models when evaluated on a $\textit{single input point}$. Specifically, we study a point's $\textit{profile}$: the relationship between models' average performance on the test distribution and their pointwise performance on this individual point. We find that profiles can yield new insights into the structure of both models and data -- in and out-of-distribution. For example, we empirically show that real data distributions consist of points with qualitatively different profiles. On one hand, there are "compatible" points with strong correlation between the pointwise and average performance. On the other hand, there are points with weak and even $\textit{negative}$ correlation: cases where improving overall model accuracy actually $\textit{hurts}$ performance on these inputs. We prove that these experimental observations are inconsistent with the predictions of several simplified models of learning proposed in prior work. As an application, we use profiles to construct a dataset we call CIFAR-10-NEG: a subset of CINIC-10 such that for standard models, accuracy on CIFAR-10-NEG is $\textit{negatively correlated}$ with accuracy on CIFAR-10 test. This illustrates, for the first time, an OOD dataset that completely inverts "accuracy-on-the-line" (Miller, Taori, Raghunathan, Sagawa, Koh, Shankar, Liang, Carmon, and Schmidt 2021) | cs.LG; cs.AI; cs.CV; stat.ML | 1 | ICLR-2023 | [-0.8299001455307007, -1.3021061420440674, 0.2768462002277374, -0.23190659284591675, -0.821273684501648, -0.16879242658615112, 0.47556135058403015, 0.6620385646820068, -1.1894164085388184, 0.14070096611976624, -0.26587751507759094, 1.164312720298767, 0.8598451614379883, 0.7058405876159668,... | [0.07098952680826187, 0.6058815717697144, -0.8373878598213196, -0.04175657033920288, -0.6998322606086731, 0.01015053316950798, 0.8364899754524231, -0.4790632426738739, -0.25346481800079346, 0.19491887092590332, 0.0953921526670456, -0.21349437534809113, -0.30091843008995056, 0.31429415941238... | {} | null | null |
http://arxiv.org/abs/2202.11202v3 | 2022-02-22T00:00:00 | Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning | Hao He; Kaiwen Zha; Dina Katabi | Indiscriminate data poisoning attacks are quite effective against supervised learning. However, not much is known about their impact on unsupervised contrastive learning (CL). This paper is the first to consider indiscriminate poisoning attacks on contrastive learning. We propose Contrastive Poisoning (CP), the first effective such attack on CL. We empirically show that Contrastive Poisoning not only drastically reduces the performance of CL algorithms, but also attacks supervised learning models, making it the most generalizable indiscriminate poisoning attack. We also show that CL algorithms with a momentum encoder are more robust to indiscriminate poisoning, and propose a new countermeasure based on matrix completion. Code is available at: https://github.com/kaiwenzha/contrastive-poisoning. | cs.LG; cs.AI; cs.CR; cs.CV | 1 | ICLR-2023 | [-1.295577049255371, -0.8960601687431335, -0.16922858357429504, -0.4859643578529358, -0.8698145747184753, -0.1314086765050888, 0.5521918535232544, 0.4030759930610657, -0.9177237153053284, -0.12333865463733673, -0.7803640961647034, 1.5016855001449585, 0.24364258348941803, 0.5812285542488098,... | [-0.206841841340065, 0.3127732276916504, -0.39318469166755676, 0.24890369176864624, 0.03851073607802391, -0.4892517030239105, 1.0348665714263916, -0.14729350805282593, -0.7073853611946106, -0.320497989654541, -0.01572578027844429, 0.07373683899641037, 0.5030230283737183, 0.37569451332092285... | {} | null | null |
http://arxiv.org/abs/2202.11672v2 | 2022-02-23T00:00:00 | Learning Fast and Slow for Online Time Series Forecasting | Quang Pham; Chenghao Liu; Doyen Sahoo; Steven C. H. Hoi | The fast adaptation capability of deep neural networks in non-stationary environments is critical for online time series forecasting. Successful solutions require handling changes to new and recurring patterns. However, training deep neural forecasters on the fly is notoriously challenging because of their limited ability to adapt to non-stationary environments and the catastrophic forgetting of old knowledge. In this work, inspired by the Complementary Learning Systems (CLS) theory, we propose Fast and Slow learning Networks (FSNet), a holistic framework for online time-series forecasting to simultaneously deal with abrupt changing and repeating patterns. Particularly, FSNet improves the slowly-learned backbone by dynamically balancing fast adaptation to recent changes and retrieving similar old knowledge. FSNet achieves this mechanism via an interaction between two complementary components: an adapter to monitor each layer's contribution to the loss, and an associative memory to support remembering, updating, and recalling repeating events. Extensive experiments on real and synthetic datasets validate FSNet's efficacy and robustness to both new and recurring patterns. Our code is available at \url{https://github.com/salesforce/fsnet}. | cs.LG; stat.ML | 1 | ICLR-2023 | [-0.9935519695281982, -1.2966347932815552, 0.08430751413106918, -0.7823961973190308, 0.1381990909576416, 0.11534154415130615, 0.40971729159355164, -0.2137313336133957, -0.24764224886894226, -0.41384267807006836, -0.5195231437683105, 1.080233097076416, 0.9762295484542847, 0.17899206280708313... | [-0.13127683103084564, 0.17801953852176666, -0.16325239837169647, -0.0007965117692947388, 0.5644697546958923, -0.05243845283985138, 0.7255095839500427, -0.14348219335079193, -0.28737905621528625, -0.012398943305015564, -0.03308573737740517, -0.021721094846725464, 0.410014808177948, -0.17154... | {} | null | null |
http://arxiv.org/abs/2202.13013v4 | 2022-02-25T00:00:00 | Sign and Basis Invariant Networks for Spectral Graph Representation Learning | Derek Lim; Joshua Robinson; Lingxiao Zhao; Tess Smidt; Suvrit Sra; Haggai Maron; Stefanie Jegelka | We introduce SignNet and BasisNet -- new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if $v$ is an eigenvector then so is $-v$; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors. We prove that under certain conditions our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the desired invariances. When used with Laplacian eigenvectors, our networks are provably more expressive than existing spectral methods on graphs; for instance, they subsume all spectral graph convolutions, certain spectral graph invariants, and previously proposed graph positional encodings as special cases. Experiments show that our networks significantly outperform existing baselines on molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes. Our code is available at https://github.com/cptq/SignNet-BasisNet. | cs.LG; stat.ML | 1 | ICLR-2023 | [-0.8307783007621765, -0.35963767766952515, 0.0911407321691513, -0.029779069125652313, 0.12656815350055695, -0.6320818066596985, -0.7497109770774841, 0.5872922539710999, -0.6721531748771667, -0.03996411710977554, 0.013709768652915955, 1.9602011442184448, 0.1691092699766159, 0.22394017875194... | [0.6764289140701294, 0.5305372476577759, -0.6048856377601624, 0.12337233126163483, 0.672681987285614, 0.038779087364673615, 0.32656559348106384, -0.6535322666168213, -0.058332763612270355, -0.0798269510269165, 0.3648660480976105, 0.07498470693826675, 0.17139223217964172, -0.3857075273990631... | {} | null | null |
http://arxiv.org/abs/2202.13248v4 | 2022-02-26T00:00:00 | Automated Data Augmentations for Graph Classification | Youzhi Luo; Michael McThrow; Wing Yee Au; Tao Komikado; Kanji Uchino; Koji Maruhashi; Shuiwang Ji | Data augmentations are effective in improving the invariance of learning machines. We argue that the core challenge of data augmentations lies in designing data transformations that preserve labels. This is relatively straightforward for images, but much more challenging for graphs. In this work, we propose GraphAug, a novel automated data augmentation method aiming at computing label-invariant augmentations for graph classification. Instead of using uniform transformations as in existing studies, GraphAug uses an automated augmentation model to avoid compromising critical label-related information of the graph, thereby producing label-invariant augmentations most of the time. To ensure label-invariance, we develop a training method based on reinforcement learning to maximize an estimated label-invariance probability. Experiments show that GraphAug outperforms previous graph augmentation methods on various graph classification tasks. | cs.LG; cs.AI | 1 | ICLR-2023 | [-1.3357479572296143, -0.5261874794960022, -0.08400003612041473, -0.2462136447429657, -0.2766549289226532, -0.8089844584465027, 0.16699086129665375, 0.5699379444122314, -0.5926612019538879, -0.5181834101676941, -0.39848747849464417, 1.79346764087677, 0.6381741762161255, 0.2733670473098755,... | [0.3424360156059265, 0.7300700545310974, -0.5234146118164062, 0.012262057512998581, 0.18337512016296387, -0.33849793672561646, 0.7714265584945679, 0.14450831711292267, -0.24232245981693268, -0.12766915559768677, 0.39521658420562744, 0.23081491887569427, 0.6318384408950806, -0.06289932131767... | {} | null | null |
http://arxiv.org/abs/2203.02006v1 | 2022-03-03T00:00:00 | Why adversarial training can hurt robust accuracy | Jacob Clarysse; Julia Hörrmann; Fanny Yang | Machine learning classifiers with high test accuracy often perform poorly under adversarial attacks. It is commonly believed that adversarial training alleviates this issue. In this paper, we demonstrate that, surprisingly, the opposite may be true -- Even though adversarial training helps when enough data is available, it may hurt robust generalization in the small sample size regime. We first prove this phenomenon for a high-dimensional linear classification setting with noiseless observations. Our proof provides explanatory insights that may also transfer to feature learning models. Further, we observe in experiments on standard image datasets that the same behavior occurs for perceptible attacks that effectively reduce class information such as mask attacks and object corruptions. | cs.LG; cs.CR; cs.CV; stat.ML | 1 | ICLR-2023 | [-1.287874698638916, -0.7655717134475708, -0.4500606060028076, -0.3497002124786377, -1.4788165092468262, -0.1463606357574463, 0.9709885120391846, 0.16656002402305603, -0.7204111814498901, -0.402341365814209, -1.125320553779602, 1.280240535736084, 0.5190349817276001, 0.6148190498352051, -0... | [0.05517272278666496, 0.722626268863678, -0.32564225792884827, 0.3190895915031433, -0.9320188760757446, -0.3345160484313965, 0.6582305431365967, -0.39667776226997375, -0.1593596339225769, -0.25852614641189575, -0.150980606675148, 0.020391151309013367, 0.8963404297828674, -0.0646496340632438... | {} | null | null |
http://arxiv.org/abs/2203.01629v5 | 2022-03-03T00:00:00 | Learning Group Importance using the Differentiable Hypergeometric Distribution | Thomas M. Sutter; Laura Manduchi; Alain Ryser; Julia E. Vogt | Partitioning a set of elements into subsets of a priori unknown sizes is essential in many applications. These subset sizes are rarely explicitly learned - be it the cluster sizes in clustering applications or the number of shared versus independent generative latent factors in weakly-supervised learning. Probability distributions over correct combinations of subset sizes are non-differentiable due to hard constraints, which prohibit gradient-based optimization. In this work, we propose the differentiable hypergeometric distribution. The hypergeometric distribution models the probability of different group sizes based on their relative importance. We introduce reparameterizable gradients to learn the importance between groups and highlight the advantage of explicitly learning the size of subsets in two typical applications: weakly-supervised learning and clustering. In both applications, we outperform previous approaches, which rely on suboptimal heuristics to model the unknown size of groups. | cs.LG; stat.CO; stat.ML | 1 | ICLR-2023 | [-1.4241846799850464, -0.5527954697608948, 0.11895406246185303, -0.6046295166015625, -1.0556524991989136, -0.19388042390346527, 0.48362213373184204, 0.31630149483680725, -0.4940122365951538, -0.04928167164325714, 0.02331787720322609, 1.5579687356948853, 0.0470881350338459, 0.575471401214599... | [0.5580165982246399, 1.0696837902069092, -0.3174440264701843, -0.35135212540626526, -0.5212618708610535, 0.10136957466602325, 0.49071213603019714, -0.28318941593170166, -0.010750332847237587, 0.07298919558525085, 0.8089581727981567, 0.7591930031776428, -0.5564678907394409, -0.12071426212787... | {} | null | null |
http://arxiv.org/abs/2203.03771v1 | 2022-03-07T00:00:00 | Static Prediction of Runtime Errors by Learning to Execute Programs with External Resource Descriptions | David Bieber; Rishab Goel; Daniel Zheng; Hugo Larochelle; Daniel Tarlow | The execution behavior of a program often depends on external resources, such as program inputs or file contents, and so cannot be run in isolation. Nevertheless, software developers benefit from fast iteration loops where automated tools identify errors as early as possible, even before programs can be compiled and run. This presents an interesting machine learning challenge: can we predict runtime errors in a "static" setting, where program execution is not possible? Here, we introduce a real-world dataset and task for predicting runtime errors, which we show is difficult for generic models like Transformers. We approach this task by developing an interpreter-inspired architecture with an inductive bias towards mimicking program executions, which models exception handling and "learns to execute" descriptions of the contents of external resources. Surprisingly, we show that the model can also predict the location of the error, despite being trained only on labels indicating the presence/absence and kind of error. In total, we present a practical and difficult-yet-approachable challenge problem related to learning program execution and we demonstrate promising new capabilities of interpreter-inspired machine learning models for code. | cs.LG; cs.PL | 1 | ICLR-2023 | [-1.6137081384658813, -0.8179593682289124, -1.354373812675476, -0.4698612689971924, -0.5550547242164612, -1.6020991802215576, -0.22679263353347778, 1.944466233253479, -0.05581169202923775, -0.5919790863990784, -0.8096708655357361, -0.15294420719146729, 1.215543508529663, -0.3473819494247436... | [0.2611442506313324, 0.4220920205116272, -1.0792909860610962, 0.07419480383396149, -0.21303048729896545, -0.411498099565506, 0.5296471118927002, 0.20468808710575104, -0.07709113508462906, -0.4923643469810486, 0.18423378467559814, -0.6737516522407532, 0.5651165246963501, -0.07475664466619492... | {} | null | null |
http://arxiv.org/abs/2203.06125v5 | 2022-03-11T00:00:00 | Protein Representation Learning by Geometric Structure Pretraining | Zuobai Zhang; Minghao Xu; Arian Jamasb; Vijil Chenthamarakshan; Aurelie Lozano; Payel Das; Jian Tang | Learning effective protein representations is critical in a variety of tasks in biology such as predicting protein function or structure. Existing approaches usually pretrain protein language models on a large number of unlabeled amino acid sequences and then finetune the models with some labeled data in downstream tasks. Despite the effectiveness of sequence-based approaches, the power of pretraining on known protein structures, which are available in smaller numbers only, has not been explored for protein property prediction, though protein structures are known to be determinants of protein function. In this paper, we propose to pretrain protein representations according to their 3D structures. We first present a simple yet effective encoder to learn the geometric features of a protein. We pretrain the protein graph encoder by leveraging multiview contrastive learning and different self-prediction tasks. Experimental results on both function prediction and fold classification tasks show that our proposed pretraining methods outperform or are on par with the state-of-the-art sequence-based methods, while using much less pretraining data. Our implementation is available at https://github.com/DeepGraphLearning/GearNet. | cs.LG | 1 | ICLR-2023 | [0.04863379895687103, 0.30386292934417725, -0.12049973756074905, -0.7873343825340271, 0.514476478099823, -0.4272978901863098, -1.1874278783798218, 0.717613160610199, -1.188779354095459, 0.6571991443634033, -0.5528535842895508, 1.0725398063659668, -0.34197798371315, 0.48139050602912903, -0... | [0.28699469566345215, 0.886253833770752, -0.12911370396614075, -0.5414612889289856, 0.3168559670448303, -0.052271563559770584, 0.16280566155910492, 0.020746715366840363, -0.3165655732154846, 0.1758144497871399, 0.5284422039985657, 0.10693450272083282, 0.3066084682941437, 0.13353174924850464... | {} | null | null |
http://arxiv.org/abs/2203.08216v1 | 2022-03-15T00:00:00 | Interactive Portrait Harmonization | Jeya Maria Jose Valanarasu; He Zhang; Jianming Zhang; Yilin Wang; Zhe Lin; Jose Echevarria; Yinglan Ma; Zijun Wei; Kalyan Sunkavalli; Vishal M. Patel | Current image harmonization methods consider the entire background as the guidance for harmonization. However, this may limit the capability for user to choose any specific object/person in the background to guide the harmonization. To enable flexible interaction between user and harmonization, we introduce interactive harmonization, a new setting where the harmonization is performed with respect to a selected \emph{region} in the reference image instead of the entire background. A new flexible framework that allows users to pick certain regions of the background image and use it to guide the harmonization is proposed. Inspired by professional portrait harmonization users, we also introduce a new luminance matching loss to optimally match the color/luminance conditions between the composite foreground and select reference region. This framework provides more control to the image harmonization pipeline, achieving visually pleasing portrait edits. Furthermore, we also introduce a new dataset carefully curated for validating portrait harmonization. Extensive experiments on both synthetic and real-world datasets show that the proposed approach is efficient and robust compared to previous harmonization baselines, especially for portraits. Project Webpage at \href{https://jeya-maria-jose.github.io/IPH-web/}{https://jeya-maria-jose.github.io/IPH-web/} | cs.CV | 1 | ICLR-2023 | [-0.8282214999198914, -0.394700288772583, -0.4177410304546356, -0.36698949337005615, -0.3104376792907715, 0.18542876839637756, 0.8350067138671875, -0.2616482973098755, -0.006970192305743694, -1.06143057346344, -0.619223952293396, 1.5949304103851318, 0.6577146053314209, -0.391708642244339,... | [0.22096985578536987, 0.11723414063453674, -0.26636049151420593, 0.31831875443458557, -0.23611922562122345, 0.14645501971244812, 0.23473739624023438, -0.595409095287323, 0.11182240396738052, -0.5495949983596802, 0.3148663341999054, 1.015995740890503, -0.16146881878376007, -0.530063033103942... | {} | null | null |
http://arxiv.org/abs/2203.10258v3 | 2022-03-19T00:00:00 | TDR-CL: Targeted Doubly Robust Collaborative Learning for Debiased Recommendations | Haoxuan Li; Yan Lyu; Chunyuan Zheng; Peng Wu | Bias is a problem inherent in recommender systems; it is entangled with users' preferences and poses a great challenge to unbiased learning. For debiasing tasks, the doubly robust (DR) method and its variants show superior performance due to the double robustness property, that is, DR is unbiased when either imputed errors or learned propensities are accurate. However, our theoretical analysis reveals that DR usually has a large variance. Meanwhile, DR can suffer unexpectedly large bias and poor generalization caused by inaccurate imputed errors and learned propensities, which often occur in practice. In this paper, we propose a principled approach that can effectively reduce bias and variance simultaneously for existing DR approaches when the error imputation model is misspecified. In addition, we further propose a novel semi-parametric collaborative learning approach that decomposes imputed errors into parametric and nonparametric parts and updates them collaboratively, resulting in more accurate predictions. Both theoretical analysis and experiments demonstrate the superiority of the proposed methods compared with existing debiasing methods. | cs.IR; cs.LG; stat.ML | 1 | ICLR-2023 | [-1.990077018737793, -1.1386826038360596, -0.6276828646659851, -0.739074170589447, -0.6043890714645386, 0.33276522159576416, 0.491120308637619, 0.407185822725296, -0.5757206082344055, 0.10486559569835663, -0.855510413646698, 1.0998514890670776, -0.26324912905693054, 0.15113413333892822, 0... | [-0.19179348647594452, 0.7097234129905701, -0.9533653855323792, -0.09200350195169449, -0.6265279650688171, 0.11885617673397064, 0.21537823975086212, -0.41707512736320496, 0.05978047475218773, 0.5651493668556213, 0.35917845368385315, 0.20111311972141266, -0.21948876976966858, 0.0565281696617... | {} | null | null |
http://arxiv.org/abs/2203.10991v3 | 2022-03-21T00:00:00 | Minimum Variance Unbiased N:M Sparsity for the Neural Gradients | Brian Chmiel; Itay Hubara; Ron Banner; Daniel Soudry | In deep learning, fine-grained N:M sparsity reduces the data footprint and bandwidth of a General Matrix multiply (GEMM) up to x2, and doubles throughput by skipping computation of zero values. So far, it has mainly been used to prune weights to accelerate the forward and backward phases. We examine how this method can be used also for the neural gradients (i.e., loss gradients with respect to the intermediate neural layer outputs). To this end, we first establish a tensor-level optimality criteria. Previous works aimed to minimize the mean-square-error (MSE) of each pruned block. We show that while minimization of the MSE works fine for pruning the weights and activations, it catastrophically fails for the neural gradients. Instead, we show that accurate pruning of the neural gradients requires an unbiased minimum-variance pruning mask. We design such specialized masks, and find that in most cases, 1:2 sparsity is sufficient for training, and 2:4 sparsity is usually enough when this is not the case. Further, we suggest combining several such methods in order to potentially speed up training even more. | cs.LG; cs.AI | 1 | ICLR-2023 | [-0.3233259916305542, -0.7538485527038574, 0.5795115232467651, -0.2244352102279663, -0.19929224252700806, -0.08191151916980743, 0.1306394338607788, 0.006928444840013981, -0.07754181325435638, -0.4074515700340271, -0.22506068646907806, 0.9940258264541626, 0.912511944770813, 0.570069968700408... | [0.33246731758117676, 0.6927707195281982, -0.16740897297859192, 0.11884705722332001, 0.09968869388103485, 0.6660856008529663, 0.2364569753408432, -0.8557797074317932, -0.3251436650753021, -0.34080708026885986, 0.42600515484809875, -0.07806918025016785, 0.466770738363266, 0.2781476080417633,... | {} | null | null |
http://arxiv.org/abs/2203.12023v6 | 2022-03-22T00:00:00 | Generative Modeling Helps Weak Supervision (and Vice Versa) | Benedikt Boecking; Nicholas Roberts; Willie Neiswanger; Stefano Ermon; Frederic Sala; Artur Dubrawski | Many promising applications of supervised machine learning face hurdles in the acquisition of labeled data in sufficient quantity and quality, creating an expensive bottleneck. To overcome such limitations, techniques that do not depend on ground truth labels have been studied, including weak supervision and generative modeling. While these techniques would seem to be usable in concert, improving one another, how to build an interface between them is not well-understood. In this work, we propose a model fusing programmatic weak supervision and generative adversarial networks and provide theoretical justification motivating this fusion. The proposed approach captures discrete latent variables in the data alongside the weak supervision derived label estimate. Alignment of the two allows for better modeling of sample-dependent accuracies of the weak supervision sources, improving the estimate of unobserved labels. It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels. Additionally, its learned latent variables can be inspected qualitatively. The model outperforms baseline weak supervision label models on a number of multiclass image classification datasets, improves the quality of generated images, and further improves end-model performance through data augmentation with synthetic samples. | cs.LG; cs.AI; cs.CV; stat.ML; I.2.0; I.4.m | 1 | ICLR-2023 | [-0.649019718170166, -0.6620253920555115, -0.022432155907154083, -0.2224065661430359, -0.41697928309440613, -0.04323088377714157, 0.7758244276046753, -0.046736765652894974, -0.6908044219017029, -0.7872518301010132, -0.7048059105873108, 1.5778032541275024, 0.6963891983032227, 0.7043319344520... | [0.415999174118042, 0.8680536150932312, -0.5131451487541199, -0.19117632508277893, -0.3776499629020691, -0.2086603343486786, 1.095829725265503, -0.8810846209526062, 0.07767996937036514, -0.32992905378341675, 0.24392376840114594, 0.2560683488845825, 0.6125138998031616, 0.07508546859025955,... | {} | null | null |
http://arxiv.org/abs/2204.02965v1 | 2022-04-06T00:00:00 | LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification | Sharath Girish; Kamal Gupta; Saurabh Singh; Abhinav Shrivastava | We introduce LilNetX, an end-to-end trainable technique for neural networks that enables learning models with a specified accuracy-rate-computation trade-off. Prior works approach these problems one at a time and often require post-processing or multistage training, which becomes less practical and does not scale well for large datasets or architectures. Our method constructs a joint training objective that penalizes the self-information of network parameters in a reparameterized latent space to encourage small model size while also introducing priors to increase structured sparsity in the parameter space to reduce computation. We achieve up to 50% smaller model size and 98% model sparsity on ResNet-20 while retaining the same accuracy on the CIFAR-10 dataset as well as 35% smaller model size and 42% structured sparsity on ResNet-50 trained on ImageNet, when compared to existing state-of-the-art model compression methods. Code is available at https://github.com/Sharath-girish/LilNetX. | cs.CV; cs.LG | 1 | ICLR-2023 | [
-0.06820662319660187,
-1.0984845161437988,
-0.06361155956983566,
-0.603026270866394,
-0.23875951766967773,
0.007195927202701569,
0.3279552757740021,
0.061127111315727234,
-0.5067921280860901,
-0.2659375071525574,
-0.18321819603443146,
0.9382642507553101,
1.0218853950500488,
0.7533720135688... | [
0.6346471309661865,
0.8668752312660217,
-0.594861626625061,
0.03725874051451683,
-0.010960880666971207,
0.43102550506591797,
0.3150370717048645,
-0.40239477157592773,
-0.5617243051528931,
-0.18209697306156158,
0.5386766791343689,
0.058789320290088654,
0.2866811156272888,
0.2632639408111572... | {} | null | null |
http://arxiv.org/abs/2204.04875v2 | 2022-04-11T00:00:00 | Learning to Induce Causal Structure | Nan Rosemary Ke; Silvia Chiappa; Jane Wang; Anirudh Goyal; Jorg Bornschein; Melanie Rey; Theophane Weber; Matthew Botvinic; Michael Mozer; Danilo Jimenez Rezende | The fundamental challenge in causal induction is to infer the underlying graph structure given observational and/or interventional data. Most existing causal induction algorithms operate by generating candidate graphs and evaluating them using either score-based methods (including continuous optimization) or independence tests. In our work, we instead treat the inference process as a black box and design a neural network architecture that learns the mapping from both observational and interventional data to graph structures via supervised training on synthetic graphs. The learned model generalizes to new synthetic graphs, is robust to train-test distribution shifts, and achieves state-of-the-art performance on naturalistic graphs for low sample complexity. | stat.ML; cs.LG | 1 | ICLR-2023 | [
-0.9454687833786011,
0.20462319254875183,
-0.5457444190979004,
-0.29745808243751526,
-0.16362300515174866,
-0.6034272909164429,
1.2347112894058228,
-0.09074396640062332,
-0.08069424331188202,
-0.03945823013782501,
-0.13132116198539734,
0.8304007053375244,
-0.09377546608448029,
0.1619200706... | [
0.24863916635513306,
0.8872767090797424,
-0.7530733942985535,
-0.0087595134973526,
0.048353444784879684,
-0.39905834197998047,
0.9565430283546448,
-0.20179861783981323,
0.14084820449352264,
0.5452331304550171,
0.4681228995323181,
-0.3404238224029541,
-0.6127970814704895,
0.0351236499845981... | {} | null | null |
http://arxiv.org/abs/2204.05999v3 | 2022-04-12T00:00:00 | InCoder: A Generative Model for Code Infilling and Synthesis | Daniel Fried; Armen Aghajanyan; Jessy Lin; Sida Wang; Eric Wallace; Freda Shi; Ruiqi Zhong; Wen-tau Yih; Luke Zettlemoyer; Mike Lewis | Code is seldom written in a single left-to-right pass and is instead repeatedly edited and refined. We introduce InCoder, a unified generative model that can perform program synthesis (via left-to-right generation) as well as editing (via infilling). InCoder is trained to generate code files from a large corpus of permissively licensed code, where regions of code have been randomly masked and moved to the end of each file, allowing code infilling with bidirectional context. Our model is the first generative model that is able to directly perform zero-shot code infilling, which we evaluate on challenging tasks such as type inference, comment generation, and variable renaming. We find that the ability to condition on bidirectional context substantially improves performance on these tasks, while still performing comparably to left-to-right-only models pretrained at similar scale on standard program synthesis benchmarks. The InCoder models and code are publicly released. https://sites.google.com/view/incoder-code-models | cs.SE; cs.CL; cs.LG | 1 | ICLR-2023 | [
-1.0330804586410522,
-0.4899928569793701,
-0.9270790219306946,
-0.6525288224220276,
-0.47161003947257996,
-1.914185643196106,
0.17318516969680786,
1.538714051246643,
0.2727460563182831,
-0.24378807842731476,
-0.2812592089176178,
0.18659108877182007,
1.3192086219787598,
-0.15151146054267883... | [
0.16759110987186432,
0.7467421293258667,
-0.31453651189804077,
0.19158008694648743,
0.06913307309150696,
-0.7051177620887756,
1.0611226558685303,
-0.20564329624176025,
0.39125198125839233,
-0.2976837158203125,
0.6228329539299011,
-0.6163997650146484,
0.5283232927322388,
-0.1507986932992935... | {} | null | null |
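The causal-masking recipe in the abstract above (mask a region, move it to the end of the file) can be shown in a few lines. A minimal sketch, assuming placeholder sentinel strings `<MASK:0>` and `<EOM>` rather than the model's actual vocabulary:

```python
def make_infill_example(code: str, start: int, end: int) -> str:
    """Causal-masking training string: replace code[start:end] with a
    sentinel, then append the masked span at the end so a left-to-right
    LM learns to generate the infill conditioned on both sides."""
    prefix, span, suffix = code[:start], code[start:end], code[end:]
    return f"{prefix}<MASK:0>{suffix}<MASK:0>{span}<EOM>"

src = "def add(a, b):\n    return a + b\n"
print(make_infill_example(src, src.index("return"), len(src)))
# Inference feeds "prefix<MASK:0>suffix<MASK:0>" and decodes until <EOM>.
```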
http://arxiv.org/abs/2204.10965v5 | 2022-04-23T00:00:00 | CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks | Tuomas Oikarinen; Tsui-Wei Weng | In this paper, we propose CLIP-Dissect, a new technique to automatically describe the function of individual hidden neurons inside vision networks. CLIP-Dissect leverages recent advances in multimodal vision/language models to label internal neurons with open-ended concepts without the need for any labeled data or human examples. We show that CLIP-Dissect provides more accurate descriptions than existing methods for last-layer neurons, where the ground truth is available, as well as qualitatively good descriptions for hidden-layer neurons. In addition, our method is very flexible: it is model agnostic, can easily handle new concepts, and can be extended to take advantage of better multimodal models in the future. Finally, CLIP-Dissect is computationally efficient and can label all neurons from five layers of ResNet-50 in just 4 minutes, which is more than 10 times faster than existing methods. Our code is available at https://github.com/Trustworthy-ML-Lab/CLIP-dissect. Crowdsourced user study results are available in Appendix B to further support the effectiveness of our method. | cs.CV; cs.AI; cs.LG | 1 | ICLR-2023 | [
-0.21500255167484283,
-0.8552345037460327,
-0.12056567519903183,
-0.6696346998214722,
0.105242058634758,
-0.11456716060638428,
0.23810184001922607,
0.34292715787887573,
-0.7020538449287415,
-0.519213855266571,
0.2892148792743683,
1.2788065671920776,
1.3588627576828003,
0.522648811340332,
... | [
0.08104971796274185,
0.7510534524917603,
-0.35673850774765015,
-0.07249992340803146,
-0.021579135209321976,
-0.15885426104068756,
0.8248361349105835,
-0.4757944345474243,
-0.5121773481369019,
-0.3280980587005615,
0.7431472539901733,
0.4603096842765808,
0.8417800068855286,
0.266804277896881... | {} | null | null |
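The matching step behind the method above can be sketched compactly: compare each neuron's activation profile over a probe image set with CLIP's per-image similarity profile for each candidate concept. This is a hedged approximation using centered cosine correlation as the score; the paper defines its own similarity function, and `neuron_acts`, `clip_image_emb`, and `clip_text_emb` are assumed, pre-computed inputs.

```python
import numpy as np

def describe_neurons(neuron_acts, clip_image_emb, clip_text_emb, concepts):
    """neuron_acts: (num_images, num_neurons) activations on a probe set.
    clip_image_emb: (num_images, d) and clip_text_emb: (num_concepts, d),
    both L2-normalized. Picks, per neuron, the concept whose per-image
    CLIP similarity profile best correlates with the neuron's activations."""
    P = clip_image_emb @ clip_text_emb.T        # (num_images, num_concepts)
    A = neuron_acts - neuron_acts.mean(axis=0)
    Pc = P - P.mean(axis=0)
    A = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)
    Pc = Pc / (np.linalg.norm(Pc, axis=0, keepdims=True) + 1e-12)
    scores = A.T @ Pc                           # (num_neurons, num_concepts)
    return [concepts[i] for i in scores.argmax(axis=1)]
```

No labels are needed anywhere in this loop, which is the property the abstract emphasizes: the concept set and the probe images can be arbitrary.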
http://arxiv.org/abs/2204.11144v2 | 2022-04-23T00:00:00 | Competitive Physics Informed Networks | Qi Zeng; Yash Kothari; Spencer H. Bryngelson; Florian Schäfer | Neural networks can be trained to solve partial differential equations (PDEs) by using the PDE residual as the loss function. This strategy is called "physics-informed neural networks" (PINNs), but it currently cannot produce high-accuracy solutions, typically attaining about $0.1\%$ relative error. We present an adversarial approach that overcomes this limitation, which we call competitive PINNs (CPINNs). CPINNs train a discriminator that is rewarded for predicting mistakes the PINN makes. The discriminator and PINN participate in a zero-sum game with the exact PDE solution as an optimal strategy. This approach avoids squaring the large condition numbers of PDE discretizations, which is the likely reason for failures of previous attempts to decrease PINN errors even on benign problems. Numerical experiments on a Poisson problem show that CPINNs achieve errors four orders of magnitude smaller than the best-performing PINN. We observe relative errors on the order of single-precision accuracy, consistently decreasing with each epoch. To the authors' knowledge, this is the first time this level of accuracy and convergence behavior has been achieved. Additional experiments on the nonlinear Schr\"odinger, Burgers', and Allen-Cahn equation show that the benefits of CPINNs are not limited to linear problems. | cs.LG; cs.MA; cs.NA; math.NA; math.OC | 1 | ICLR-2023 | [
-1.0344338417053223,
-0.38696223497390747,
0.7675662636756897,
0.6724591851234436,
-0.8405241370201111,
0.08316464722156525,
-0.5233080983161926,
0.6314628720283508,
-0.3882957696914673,
1.0284806489944458,
-1.192084550857544,
1.3134773969650269,
-0.7378448843955994,
0.34658631682395935,
... | [
0.2799682915210724,
0.6277701258659363,
0.017851252108812332,
0.6821264624595642,
-0.44313618540763855,
-0.13209065794944763,
0.4359256625175476,
-0.4868790805339813,
-0.5542749166488647,
0.06003003567457199,
-0.318989634513855,
0.08908408880233765,
0.02334148809313774,
-0.1405837684869766... | {} | null | null |
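The zero-sum game in the abstract above admits a compact sketch: the payoff is the discriminator-weighted PDE residual, which the PINN minimizes and the discriminator maximizes, so the discriminator is rewarded for pointing at large residuals. The PyTorch toy below illustrates this for a 1D Poisson problem; boundary-condition terms are omitted, and the exact payoff used in the paper may differ.

```python
import torch

def cpinn_losses(pinn, disc, x, f):
    """Zero-sum payoff for -u''(x) = f(x) on collocation points x:
    the PINN minimizes E[d(x) * r(x)] while the discriminator maximizes it."""
    x = x.clone().requires_grad_(True)
    u = pinn(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - f(x)                    # PDE residual r(x)
    payoff = (disc(x) * residual).mean()
    return payoff, -payoff                    # (PINN loss, discriminator loss)

pinn = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                           torch.nn.Linear(32, 1))
disc = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                           torch.nn.Linear(32, 1))
x = torch.rand(128, 1)
loss_pinn, loss_disc = cpinn_losses(pinn, disc, x, torch.sin)
```

Unlike the usual squared-residual loss, this payoff is linear in the residual, which is the abstract's stated reason the approach avoids squaring the condition number of the PDE discretization.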
http://arxiv.org/abs/2204.13902v4 | 2022-04-29T00:00:00 | Fast Sampling of Diffusion Models with Exponential Integrator | Qinsheng Zhang; Yongxin Chen | The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure, which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with far fewer steps while retaining high sample quality. To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate $50k$ images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve state-of-the-art sampling performance when the number of score function evaluations~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, and 3.37 FID and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at https://github.com/qsh-zh/deis | cs.LG | 1 | ICLR-2023 | [
-0.7939181327819824,
-1.015225887298584,
0.6866792440414429,
-0.38850730657577515,
-0.4298851191997528,
0.032985933125019073,
1.1078275442123413,
-0.27170249819755554,
0.017354989424347878,
-0.22425778210163116,
-0.2140137404203415,
1.0857689380645752,
1.1408381462097168,
0.359395951032638... | [
0.21998648345470428,
0.3208577632904053,
-0.0407433956861496,
0.12262469530105591,
-0.15975119173526764,
0.01511039212346077,
0.7786149382591248,
-0.5600318908691406,
-0.5160043835639954,
-0.6839432120323181,
0.05720740184187889,
0.38302212953567505,
0.137872576713562,
0.10956376791000366,... | {} | null | null |
http://arxiv.org/abs/2205.04701v3 | 2022-05-10T00:00:00 | StableDR: Stabilized Doubly Robust Learning for Recommendation on Data Missing Not at Random | Haoxuan Li; Chunyuan Zheng; Peng Wu | In recommender systems, users always choose their favorite items to rate, which leads to data missing not at random and poses a great challenge for unbiased evaluation and learning of prediction models. Currently, the doubly robust (DR) methods have been widely studied and demonstrate superior performance. However, in this paper, we show that DR methods are unstable, with unbounded bias, variance, and generalization bounds under extremely small propensities. Moreover, DR's heavier reliance on extrapolation leads to suboptimal performance. To address the above limitations while retaining double robustness, we propose a stabilized doubly robust (StableDR) learning approach with a weaker reliance on extrapolation. Theoretical analysis shows that StableDR has bounded bias, variance, and generalization error bounds simultaneously under inaccurate imputed errors and arbitrarily small propensities. In addition, we propose a novel learning approach for StableDR that updates the imputation, propensity, and prediction models cyclically, achieving more stable and accurate predictions. Extensive experiments show that our approaches significantly outperform the existing methods. | cs.LG; stat.ML | 1 | ICLR-2023 | [
-2.0464024543762207,
-1.11536705493927,
-0.8307123780250549,
-0.8943681120872498,
-0.5114087462425232,
0.484091579914093,
0.6034871935844421,
0.4367031455039978,
-0.5112260580062866,
0.20076043903827667,
-0.7980443835258484,
1.3165878057479858,
-0.09069201350212097,
0.04558746516704559,
... | [
0.01707473024725914,
0.8253616690635681,
-1.3476786613464355,
-0.00010631978511810303,
-0.5566762685775757,
0.2701958417892456,
0.10580414533615112,
-0.1301034390926361,
-0.12445516139268875,
0.5179376602172852,
0.6063376069068909,
0.18975746631622314,
0.018807459622621536,
0.0049726869910... | {} | null | null |
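The instability discussed above traces to inverse-propensity terms $1/p$ that explode as $p \to 0$. The hedged NumPy sketch below contrasts a standard doubly robust error estimate with a self-normalized variant whose weights sum to one; self-normalization is one simple way to bound the estimate and is shown only as an illustration, not as the paper's StableDR construction.

```python
import numpy as np

def dr_error(e_hat, e_obs, observed, p):
    """Standard doubly robust estimate of the average prediction error:
    imputed error e_hat everywhere, plus an inverse-propensity correction
    on observed entries. The 1/p terms make it unbounded as p -> 0."""
    return np.mean(e_hat + observed * (e_obs - e_hat) / p)

def self_normalized_dr_error(e_hat, e_obs, observed, p):
    """Bounded variant: normalize the inverse-propensity weights so they
    sum to one, trading a little bias for stability."""
    w = observed / p
    return np.mean(e_hat) + np.sum(w * (e_obs - e_hat)) / np.sum(w)

# Toy check with uniform 5% propensities.
rng = np.random.default_rng(0)
n = 10_000
p = np.full(n, 0.05)
observed = rng.random(n) < p
e_obs, e_hat = rng.random(n), rng.random(n)
print(dr_error(e_hat, e_obs, observed, p),
      self_normalized_dr_error(e_hat, e_obs, observed, p))
```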
http://arxiv.org/abs/2205.05131v3 | 2022-05-10T00:00:00 | UL2: Unifying Language Learning Paradigms | Yi Tay; Mostafa Dehghani; Vinh Q. Tran; Xavier Garcia; Jason Wei; Xuezhi Wang; Hyung Won Chung; Siamak Shakeri; Dara Bahri; Tal Schuster; Huaixiu Steven Zheng; Denny Zhou; Neil Houlsby; Donald Metzler | Existing pre-trained models are generally geared towards a particular class of problems. To date, there still seems to be no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized & unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 & GPT-like models across multiple diverse setups. By scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised finetuning based NLP tasks. Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization. On 0-shot MMLU, UL2 20B outperforms T0 and T5 models. UL2 20B also works well with chain-of-thought prompting and reasoning, making it an appealing choice for research into reasoning at a small to medium scale of 20B parameters. Finally, we apply FLAN instruction tuning to the UL2 20B model, achieving MMLU and Big-Bench scores competitive with FLAN-PaLM 62B. We release Flax-based T5X checkpoints for the UL2 20B & Flan-UL2 20B. | cs.CL | 1 | ICLR-2023 | [
-0.26124656200408936,
-0.22288420796394348,
0.3709166646003723,
-0.8386693000793457,
0.24270419776439667,
-0.8401743173599243,
1.0575271844863892,
0.6464971899986267,
-1.4516723155975342,
-0.46379709243774414,
0.1693217009305954,
0.2201184779405594,
0.7685796618461609,
1.0238186120986938,
... | [
0.41959744691848755,
0.7907029390335083,
-0.618532657623291,
-0.14786297082901,
-0.3543779253959656,
-0.4979797601699829,
0.6295702457427979,
0.04636470228433609,
-0.6008977293968201,
0.12568071484565735,
0.8217636346817017,
-0.3246955871582031,
0.2625664472579956,
0.16838650405406952,
0... | {} | null | null |
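The Mixture-of-Denoisers idea above reduces, at the data level, to span corruption with different rates and span lengths plus a mode token. A hedged toy sketch follows; the `[R]`/`[S]`/`[X]` tags and the (span length, rate) pairs are illustrative assumptions, not the released configuration.

```python
import random

def corrupt(tokens, mode, seed=0):
    """Toy Mixture-of-Denoisers span corruption with a mode token.
    R: short spans at a low rate; X: long spans ("extreme" denoising);
    S: prefix-LM style sequential denoising."""
    rng = random.Random(seed)
    if mode == "S":
        cut = rng.randrange(1, len(tokens))
        return ["[S]"] + tokens[:cut] + ["<0>"], tokens[cut:]
    mean_len, rate = {"R": (3, 0.15), "X": (16, 0.5)}[mode]
    n_spans = max(1, int(len(tokens) * rate / mean_len))
    starts = sorted(rng.sample(range(len(tokens)), n_spans))
    inp, tgt, i, sent = [f"[{mode}]"], [], 0, 0
    for s in starts:
        if s < i:                              # skip overlapping spans
            continue
        inp += tokens[i:s] + [f"<{sent}>"]     # sentinel replaces the span
        tgt += [f"<{sent}>"] + tokens[s:s + mean_len]
        i, sent = s + mean_len, sent + 1
    inp += tokens[i:]
    return inp, tgt

print(corrupt("the cat sat on the mat and purred loudly".split(), "R"))
```

At fine-tuning time, the "mode switching" the abstract mentions amounts to prepending the mode token that matches the downstream task's shape.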
http://arxiv.org/abs/2205.05869v2 | 2022-05-12T00:00:00 | View Synthesis with Sculpted Neural Points | Yiming Zuo; Jia Deng | We address the task of view synthesis, generating novel views of a scene given a set of images as input. In many recent works such as NeRF (Mildenhall et al., 2020), the scene geometry is parameterized using neural implicit representations (i.e., MLPs). Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency. In this work, we propose a new approach that performs view synthesis using point clouds. It is the first point-based method that achieves better visual quality than NeRF while being 100x faster in rendering speed. Our approach builds on existing works on differentiable point-based rendering but introduces a novel technique we call "Sculpted Neural Points (SNP)", which significantly improves the robustness to errors and holes in the reconstructed point cloud. We further propose to use view-dependent point features based on spherical harmonics to capture non-Lambertian surfaces, and new designs in the point-based rendering pipeline that further boost the performance. Finally, we show that our system supports fine-grained scene editing. Code is available at https://github.com/princeton-vl/SNP. | cs.CV | 1 | ICLR-2023 | [
-0.43811899423599243,
-1.2654798030853271,
-0.3248826861381531,
-0.4956558346748352,
0.17220720648765564,
-0.06517499685287476,
0.10396263003349304,
0.21388369798660278,
0.24915994703769684,
-0.9681222438812256,
0.09730403125286102,
1.53224515914917,
0.7780759930610657,
-0.7026354670524597... | [
0.29910412430763245,
0.3026171326637268,
-0.30870258808135986,
0.14219078421592712,
-0.02645868808031082,
-0.27218693494796753,
0.8540298938751221,
-0.8927921652793884,
0.11047030240297318,
-0.569256067276001,
0.5397982001304626,
0.7954294085502625,
-0.058719925582408905,
-0.57378017902374... | {} | null | null |
http://arxiv.org/abs/2205.08534v4 | 2022-05-17T00:00:00 | Vision Transformer Adapter for Dense Predictions | Zhe Chen; Yuchen Duan; Wenhai Wang; Junjun He; Tong Lu; Jifeng Dai; Yu Qiao | This work investigates a simple yet powerful dense prediction task adapter for Vision Transformer (ViT). Unlike recently advanced variants that incorporate vision-specific inductive biases into their architectures, the plain ViT suffers inferior performance on dense predictions due to weak prior assumptions. To address this issue, we propose the ViT-Adapter, which allows plain ViT to achieve comparable performance to vision-specific transformers. Specifically, the backbone in our framework is a plain ViT that can learn powerful representations from large-scale multi-modal data. When transferring to downstream tasks, a pre-training-free adapter is used to introduce the image-related inductive biases into the model, making it suitable for these tasks. We verify ViT-Adapter on multiple dense prediction tasks, including object detection, instance segmentation, and semantic segmentation. Notably, without using extra detection data, our ViT-Adapter-L yields state-of-the-art 60.9 box AP and 53.0 mask AP on COCO test-dev. We hope that the ViT-Adapter could serve as an alternative for vision-specific transformers and facilitate future research. The code and models will be released at https://github.com/czczup/ViT-Adapter. | cs.CV | 1 | ICLR-2023 | [
-0.4837333559989929,
-0.4754077196121216,
-0.12280922383069992,
-0.420337051153183,
0.1334199756383896,
-0.02705150842666626,
0.5636455416679382,
0.3206283152103424,
-0.6792359352111816,
-0.9247504472732544,
-0.05127803608775139,
1.8013378381729126,
1.1360642910003662,
0.24390345811843872,... | [
0.6943543553352356,
1.011267900466919,
-0.4119434952735901,
0.14643631875514984,
-0.27094241976737976,
0.04165198653936386,
0.6042537093162537,
-0.47920018434524536,
-0.47168609499931335,
-0.875304102897644,
0.0794246718287468,
0.29158520698547363,
0.7350335717201233,
-0.011912092566490173... | {} | null | null |
http://arxiv.org/abs/2205.09244v4 | 2022-05-18T00:00:00 | Riemannian Metric Learning via Optimal Transport | Christopher Scarvelis; Justin Solomon | We introduce an optimal transport-based model for learning a metric tensor from cross-sectional samples of evolving probability measures on a common Riemannian manifold. We neurally parametrize the metric as a spatially-varying matrix field and efficiently optimize our model's objective using a simple alternating scheme. Using this learned metric, we can nonlinearly interpolate between probability measures and compute geodesics on the manifold. We show that metrics learned using our method improve the quality of trajectory inference on scRNA and bird migration data at the cost of little additional cross-sectional data. | cs.LG; stat.ML | 1 | ICLR-2023 | [
-1.3008289337158203,
-1.2495696544647217,
-0.05481167882680893,
-0.6207829713821411,
-0.3645009398460388,
-0.45224955677986145,
0.478880912065506,
0.3141818642616272,
-0.7308603525161743,
0.11811483651399612,
-0.2828054428100586,
0.9442547559738159,
0.2666769027709961,
0.07181628048419952,... | [
-0.0318353995680809,
-0.033808667212724686,
-0.28433769941329956,
-0.17859303951263428,
0.0461614616215229,
-0.09150852262973785,
0.8020251989364624,
-0.5508381128311157,
-0.6437441110610962,
-0.0360005721449852,
0.192191481590271,
0.08124243468046188,
-0.3854823112487793,
0.15118040144443... | {} | null | null |
http://arxiv.org/abs/2205.09235v4 | 2022-05-18T00:00:00 | GRACE-C: Generalized Rate Agnostic Causal Estimation via Constraints | Mohammadsajad Abavisani; David Danks; Sergey Plis | Graphical structures estimated by causal learning algorithms from time series data can provide misleading causal information if the causal timescale of the generating process fails to match the measurement timescale of the data. Existing algorithms provide limited resources to respond to this challenge, and so researchers must either use models that they know are likely misleading, or else forego causal learning entirely. Existing methods face up to four distinct shortfalls, as they might 1) require that the difference between causal and measurement timescales is known; 2) only handle a very small number of random variables when the timescale difference is unknown; 3) only apply to pairs of variables; or 4) be unable to find a solution given statistical noise in the data. This research addresses these challenges. Our approach combines constraint programming with both theoretical insights into the problem structure and prior information about admissible causal interactions to achieve multiple orders of magnitude in speed-up. The resulting system maintains theoretical guarantees while scaling to significantly larger sets of random variables (>100) without knowledge of timescale differences. This method is also robust to edge misidentification and can use parametric connection strengths, while optionally finding the optimal solution among many possible ones. | stat.ML; cs.AI; cs.LG | 1 | ICLR-2023 | [
-2.005613327026367,
-0.8709720373153687,
-0.03270687907934189,
-0.6022481322288513,
-0.5831789970397949,
-0.1917942762374878,
0.6596470475196838,
-0.3671891391277313,
0.43273308873176575,
0.4980223774909973,
-0.8158780336380005,
0.6998924016952515,
0.3573152720928192,
0.4667682647705078,
... | [
0.21791771054267883,
0.6032114624977112,
-0.455503910779953,
0.043045029044151306,
-0.12448897957801819,
-0.3793611526489258,
0.7287164926528931,
-0.6717031598091125,
0.1543818563222885,
0.06368376314640045,
0.1985057294368744,
0.023265216499567032,
-0.23586371541023254,
0.1518574059009552... | {} | null | null |
http://arxiv.org/abs/2205.09616v2 | 2022-05-19T00:00:00 | Masked Image Modeling with Denoising Contrast | Kun Yi; Yixiao Ge; Xiaotong Li; Shusheng Yang; Dian Li; Jianping Wu; Ying Shan; Xiaohu Qie | As self-supervised visual representation learning has developed from contrastive learning to masked image modeling (MIM), its essence has not significantly changed: the question remains how to design proper pretext tasks for vision dictionary look-up. MIM recently dominates this line of research with state-of-the-art performance on vision Transformers (ViTs), where the core is to enhance the patch-level visual context capturing of the network via a denoising auto-encoding mechanism. Rather than tailoring image tokenizers with extra training stages as in previous works, we unleash the great potential of contrastive learning on denoising auto-encoding and introduce a pure MIM method, ConMIM, to produce simple intra-image inter-patch contrastive constraints as the sole learning objectives for masked patch prediction. We further strengthen the denoising mechanism with asymmetric designs, including image perturbations and model progress rates, to improve the network pre-training. ConMIM-pretrained models with various scales achieve competitive results on downstream image classification, semantic segmentation, object detection, and instance segmentation tasks, e.g., on ImageNet-1K classification, we achieve 83.9% top-1 accuracy with ViT-Small and 85.3% with ViT-Base without extra data for pre-training. | cs.CV | 1 | ICLR-2023 | [
-0.2436799854040146,
-0.5472041368484497,
-0.14314253628253937,
-0.2522660791873932,
0.0532694049179554,
0.21105551719665527,
0.5080257654190063,
0.08716452866792679,
-0.5115353465080261,
-0.8258450031280518,
-0.658671498298645,
1.6382688283920288,
0.8519162535667419,
0.03737081587314606,
... | [
0.6874184608459473,
0.7146569490432739,
-0.5382397770881653,
0.40495094656944275,
-0.3426552414894104,
0.23220889270305634,
0.6600020527839661,
-0.23781323432922363,
-0.46573999524116516,
-0.7471626400947571,
0.3229850232601166,
0.5764110088348389,
0.8134656548500061,
-0.04810771346092224,... | {} | null | null |
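The "intra-image inter-patch contrastive constraint" above is, at its core, an InfoNCE loss in which a masked patch's prediction must pick out that patch's own feature among the image's other patches. A hedged PyTorch sketch with toy shapes; the paper's asymmetric image perturbations and model-progress-rate designs are omitted.

```python
import torch
import torch.nn.functional as F

def conmim_style_loss(pred, target, masked_idx, tau=0.07):
    """Intra-image inter-patch InfoNCE for masked patch prediction.
    pred: (N, d) network outputs at the N masked positions.
    target: (P, d) features of all P patches of the same image from a
    (stop-gradient) target branch. masked_idx: (N,) position of each
    masked patch. The positive is the patch's own target feature; the
    negatives are the image's other patches (a dictionary look-up)."""
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = pred @ target.t() / tau          # (N, P)
    return F.cross_entropy(logits, masked_idx)

loss = conmim_style_loss(torch.randn(10, 64), torch.randn(196, 64),
                         torch.randint(0, 196, (10,)))
```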
http://arxiv.org/abs/2205.09329v2 | 2022-05-19T00:00:00 | Dataset Pruning: Reducing Training Data by Examining Generalization Influence | Shuo Yang; Zeke Xie; Hanyu Peng; Min Xu; Mingming Sun; Ping Li | The great success of deep learning heavily relies on increasingly larger training data, which comes at a price of huge computational and infrastructural costs. This raises crucial questions: do all training data contribute to the model's performance? How much does each individual training sample or sub-training-set affect the model's generalization, and how can we construct the smallest subset of the entire training data as a proxy training set without significantly sacrificing the model's performance? To answer these, we propose dataset pruning, an optimization-based sample selection method that can (1) examine the influence of removing a particular set of training samples on the model's generalization ability with a theoretical guarantee, and (2) construct the smallest subset of training data that yields a strictly constrained generalization gap. The empirically observed generalization gap of dataset pruning is substantially consistent with our theoretical expectations. Furthermore, the proposed method prunes 40% of the training examples on the CIFAR-10 dataset and halves the convergence time with only a 1.3% decrease in test accuracy, which is superior to previous score-based sample selection methods. | cs.LG | 1 | ICLR-2023 | [
-0.7491217851638794,
-1.5105501413345337,
0.32590773701667786,
-0.4244077801704407,
-0.055119603872299194,
0.2294408529996872,
0.3521679937839508,
-0.5612671971321106,
-0.9746782779693604,
-0.24848176538944244,
-0.7233465909957886,
1.1018577814102173,
0.9835257530212402,
0.639063835144043,... | [
-0.08423161506652832,
0.43060848116874695,
-0.38331809639930725,
0.0970887765288353,
-0.4538913369178772,
0.6658089756965637,
0.048400189727544785,
-0.41740337014198303,
-0.6250202655792236,
0.15633933246135712,
-0.16404356062412262,
0.08416531980037689,
0.12762990593910217,
0.117476552724... | {} | null | null |
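One simplified way to realize the "examine influence, keep the smallest adequate subset" recipe above is gradient-alignment scoring: rank samples by how much their gradients contribute to the aggregate training signal and drop the lowest-ranked under a budget. The NumPy sketch below is that simplification, not the paper's constrained optimization.

```python
import numpy as np

def prune_dataset(per_sample_grads, keep_fraction=0.6):
    """Keep the samples whose gradients align most with the aggregate
    training signal; samples whose removal barely moves the solution
    score low and are dropped first."""
    mean_grad = per_sample_grads.mean(axis=0)
    scores = per_sample_grads @ mean_grad        # influence-style alignment
    n_keep = int(len(scores) * keep_fraction)
    return np.sort(np.argsort(-scores)[:n_keep])

rng = np.random.default_rng(0)
grads = rng.normal(size=(1000, 32))              # per-sample last-layer gradients
subset = prune_dataset(grads, keep_fraction=0.6) # indices of retained samples
```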
http://arxiv.org/abs/2205.09546v5 | 2022-05-19T00:00:00 | Deterministic training of generative autoencoders using invertible layers | Gianluigi Silvestri; Daan Roos; Luca Ambrogioni | In this work, we provide a deterministic alternative to the stochastic variational training of generative autoencoders. We refer to these new generative autoencoders as AutoEncoders within Flows (AEF), since the encoder and decoder are defined as affine layers of an overall invertible architecture. This results in a deterministic encoding of the data, as opposed to the stochastic encoding of VAEs. The paper introduces two related families of AEFs. The first family relies on a partition of the ambient space and is trained by exact maximum-likelihood. The second family exploits a deterministic expansion of the ambient space and is trained by maximizing the log-probability in this extended space. This latter case leaves complete freedom in the choice of encoder, decoder and prior architectures, making it a drop-in replacement for the training of existing VAEs and VAE-style models. We show that these AEFs can have strikingly higher performance than architecturally identical VAEs in terms of log-likelihood and sample quality, especially for low dimensional latent spaces. Importantly, we show that AEF samples are substantially sharper than VAE samples. | stat.ML; cs.LG | 1 | ICLR-2023 | [
-1.321488380432129,
-0.7870983481407166,
0.015129717998206615,
-0.45126473903656006,
-1.1126192808151245,
-0.20339174568653107,
0.4829513132572174,
0.11293196678161621,
-0.05512002855539322,
-0.7004122734069824,
-0.03787263110280037,
1.0052629709243774,
0.7691907286643982,
0.78292632102966... | [
0.17136624455451965,
0.8604024648666382,
-0.05904634669423103,
-0.24791911244392395,
-0.16731561720371246,
-0.4429583251476288,
0.8402218818664551,
-0.8465141654014587,
0.17868025600910187,
-0.705765962600708,
0.7147716879844666,
0.18397438526153564,
0.10723499953746796,
-0.214185714721679... | {} | null | null |
http://arxiv.org/abs/2205.09712v1 | 2022-05-19T00:00:00 | Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning | Antonia Creswell; Murray Shanahan; Irina Higgins | Large language models (LLMs) have been shown to be capable of impressive few-shot generalisation to new tasks. However, they still tend to perform poorly on multi-step logical reasoning problems. Here we carry out a comprehensive evaluation of LLMs on 50 tasks that probe different aspects of logical reasoning. We show that language models tend to perform fairly well at single step inference or entailment tasks, but struggle to chain together multiple reasoning steps to solve more complex problems. In light of this, we propose a Selection-Inference (SI) framework that exploits pre-trained LLMs as general processing modules, and alternates between selection and inference to generate a series of interpretable, causal reasoning steps leading to the final answer. We show that a 7B parameter LLM used within the SI framework in a 5-shot generalisation setting, with no fine-tuning, yields a performance improvement of over 100% compared to an equivalent vanilla baseline on a suite of 10 logical reasoning tasks. The same model in the same setting even outperforms a significantly larger 280B parameter baseline on the same suite of tasks. Moreover, answers produced by the SI framework are accompanied by a causal natural-language-based reasoning trace, which has important implications for the safety and trustworthiness of the system. | cs.AI; cs.CL | 1 | ICLR-2023 | [
-0.8185402750968933,
0.06445968151092529,
0.6509789228439331,
-0.7203312516212463,
0.49566298723220825,
-0.46604931354522705,
1.125470757484436,
0.798783004283905,
-1.066815972328186,
-0.5645661354064941,
-0.15596361458301544,
0.6099231243133545,
1.1112422943115234,
0.1577642560005188,
0... | [
0.10414261370897293,
0.7920873761177063,
-0.08189467340707779,
-0.10493891686201096,
-0.3092542886734009,
-0.021932242438197136,
0.7813843488693237,
-0.22274340689182281,
-0.2958620488643646,
0.13138723373413086,
0.4422321021556854,
-0.25437498092651367,
0.009015414863824844,
-0.1267384141... | {} | null | null |
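The Selection-Inference control flow above alternates two LLM calls per step. A hedged skeleton follows; `llm` is a hypothetical text-completion callable, the prompt strings are illustrative rather than the paper's prompts, and the halting check is deliberately crude.

```python
def selection_inference(question, facts, llm, max_steps=5):
    """Alternate selection and inference LLM calls, accumulating an
    interpretable reasoning trace; each step derives one new fact."""
    trace = []
    for _ in range(max_steps):
        selected = llm("Select the facts needed for the next step.\n"
                       f"Facts: {facts}\nQuestion: {question}\nSelection:")
        inferred = llm(f"Given only: {selected}\n"
                       "State one new fact that follows.\nInference:")
        trace.append((selected, inferred))
        facts = facts + [inferred]        # the new fact becomes context
        if "answer" in inferred.lower():  # crude halting heuristic
            break
    final = llm(f"Facts: {facts}\nQuestion: {question}\nAnswer:")
    return final, trace
```

The key design point is that the inference call sees *only* the selected facts, which is what keeps each step of the trace causally faithful rather than post-hoc.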
http://arxiv.org/abs/2205.11027v3 | 2022-05-23T00:00:00 | When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning | Jianxiong Li; Xianyuan Zhan; Haoran Xu; Xiangyu Zhu; Jingjing Liu; Ya-Qin Zhang | In offline reinforcement learning (RL), one detrimental issue to policy learning is the error accumulation of deep Q function in out-of-distribution (OOD) areas. Unfortunately, existing offline RL methods are often over-conservative, inevitably hurting generalization performance outside data distribution. In our study, one interesting observation is that deep Q functions approximate well inside the convex hull of training data. Inspired by this, we propose a new method, DOGE (Distance-sensitive Offline RL with better GEneralization). DOGE marries dataset geometry with deep function approximators in offline RL, and enables exploitation in generalizable OOD areas rather than strictly constraining policy within data distribution. Specifically, DOGE trains a state-conditioned distance function that can be readily plugged into standard actor-critic methods as a policy constraint. Simple yet elegant, our algorithm enjoys better generalization compared to state-of-the-art methods on D4RL benchmarks. Theoretical analysis demonstrates the superiority of our approach to existing methods that are solely based on data distribution or support constraints. | cs.LG; cs.AI; cs.RO | 1 | ICLR-2023 | [
-1.6171939373016357,
-0.9154457449913025,
-0.30351611971855164,
-0.13697969913482666,
-0.8493943810462952,
-0.06995021551847458,
-0.24007979035377502,
0.29221680760383606,
-0.49716323614120483,
-0.10841065645217896,
-0.18438129127025604,
1.3222179412841797,
0.6176986694335938,
0.5814117193... | [
-0.5755909085273743,
0.6657952070236206,
-0.9345464706420898,
0.27092212438583374,
-0.2890593707561493,
-0.12232312560081482,
0.1755710393190384,
0.2783573269844055,
-0.7141944169998169,
0.23497895896434784,
0.21627898514270782,
-0.4153297543525696,
0.19226236641407013,
-0.0509969517588615... | {} | null | null |
http://arxiv.org/abs/2205.11156v2 | 2022-05-23T00:00:00 | Squeeze Training for Adversarial Robustness | Qizhang Li; Yiwen Guo; Wangmeng Zuo; Hao Chen | The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted great attention in the machine learning community. The problem is related to the non-flatness and non-smoothness of normally obtained loss landscapes. Training augmented with adversarial examples (a.k.a., adversarial training) is considered an effective remedy. In this paper, we highlight that some collaborative examples, which are nearly perceptually indistinguishable from both adversarial and benign examples yet show far lower prediction loss, can be utilized to enhance adversarial training. A novel method is therefore proposed that achieves a new state of the art in adversarial robustness. Code: https://github.com/qizhangli/ST-AT. | cs.LG; cs.CR; cs.CV | 1 | ICLR-2023 | [
-0.7135930061340332,
-0.8394380211830139,
-0.08724179863929749,
-0.403725266456604,
-0.4162083566188812,
-0.2236325591802597,
0.2555602788925171,
-0.10338190943002701,
-0.36491966247558594,
-0.644119918346405,
-0.39746415615081787,
1.3078064918518066,
0.8354100584983826,
0.5675923228263855... | [
0.08598459511995316,
0.8489477634429932,
-0.23406077921390533,
0.2637292146682739,
-0.36432990431785583,
-0.3268123269081116,
0.5546681880950928,
-0.44058293104171753,
-0.15191501379013062,
-0.582122802734375,
0.054905567318201065,
0.23367728292942047,
0.43628057837486267,
-0.0045010522007... | {} | null | null |
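A collaborative example, as used above, is the mirror image of a one-step adversarial example: a perturbation inside the same $\epsilon$-ball chosen to *decrease* the loss. A minimal PyTorch sketch of generating one (a single sign-flipped FGSM step; the paper's full training scheme around such examples is more involved):

```python
import torch
import torch.nn.functional as F

def collaborative_example(model, x, y, eps=8 / 255):
    """One sign-flipped FGSM step: move x in the direction that *lowers*
    the loss, staying inside an eps L-infinity ball (an adversarial
    example takes the same step with the opposite sign)."""
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    return (x - eps * grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_col = collaborative_example(model, x, y)
```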
http://arxiv.org/abs/2205.11787v3 | 2022-05-24T00:00:00 | Quadratic models for understanding catapult dynamics of neural networks | Libin Zhu; Chaoyue Liu; Adityanarayanan Radhakrishnan; Mikhail Belkin | While neural networks can be approximated by linear models as their width increases, certain properties of wide neural networks cannot be captured by linear models. In this work we show that recently proposed Neural Quadratic Models can exhibit the "catapult phase" [Lewkowycz et al. 2020] that arises when training such models with large learning rates. We then empirically show that the behaviour of neural quadratic models parallels that of neural networks in generalization, especially in the catapult phase regime. Our analysis further demonstrates that quadratic models can be an effective tool for analysis of neural networks. | cs.LG; math.OC; stat.ML | 1 | ICLR-2024 | [
-0.7551881074905396,
-0.6194777488708496,
-0.1609601229429245,
-0.4255017340183258,
-0.514928936958313,
-0.15652300417423248,
-0.05850202590227127,
0.5744487643241882,
-0.31932538747787476,
0.9355387091636658,
-1.017313838005066,
0.5625415444374084,
-0.03223859518766403,
0.4458960294723511... | [
-0.09022806584835052,
0.9946345090866089,
-0.33255067467689514,
0.3877604007720947,
0.23019476234912872,
-0.10142508149147034,
0.6462453007698059,
-0.44166144728660583,
-0.3354455530643463,
-0.0705251693725586,
0.019559182226657867,
-0.1204872652888298,
-0.007362879812717438,
-0.0097590982... | {"http://arxiv.org/abs/2210.14064v3": 0.9315376877784729, "http://arxiv.org/abs/2306.17499v1": 0.9301252365112305, "http://arxiv.org/abs/2210.14891v17": 0.9298977851867676, "http://arxiv.org/abs/2211.14503v1": 0.9290356040000916, "http://arxiv.org/abs/2304.02840v1": 0.9252371788024902, "http://arxiv.org/abs/2211.08403v3": 0.9246533513069153, "http://arxiv.org/abs/2210.01019v2": 0.9234210252761841, "http://arxiv.org/abs/2209.04836v6": 0.9227793216705322, "http://arxiv.org/abs/2303.03382v1": 0.9223575592041016, "http://arxiv.org/abs/2207.12316v2": 0.9221024513244629} | 0.931538 | 0.926115 |
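This row also shows the table's similarity columns in action: `top_10_similar` is a JSON object mapping neighbor links to similarity scores, and `max_similarity` / `avg_similarity` are the max and mean of its ten values. A minimal sketch that recomputes both from the row above:

```python
import json

# top_10_similar for the row above, copied verbatim.
top_10 = json.loads("""{
 "http://arxiv.org/abs/2210.14064v3": 0.9315376877784729,
 "http://arxiv.org/abs/2306.17499v1": 0.9301252365112305,
 "http://arxiv.org/abs/2210.14891v17": 0.9298977851867676,
 "http://arxiv.org/abs/2211.14503v1": 0.9290356040000916,
 "http://arxiv.org/abs/2304.02840v1": 0.9252371788024902,
 "http://arxiv.org/abs/2211.08403v3": 0.9246533513069153,
 "http://arxiv.org/abs/2210.01019v2": 0.9234210252761841,
 "http://arxiv.org/abs/2209.04836v6": 0.9227793216705322,
 "http://arxiv.org/abs/2303.03382v1": 0.9223575592041016,
 "http://arxiv.org/abs/2207.12316v2": 0.9221024513244629}""")
sims = list(top_10.values())
print(round(max(sims), 6))              # 0.931538 == max_similarity
print(round(sum(sims) / len(sims), 6))  # 0.926115 == avg_similarity
```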
http://arxiv.org/abs/2205.12411v5 | 2022-05-24T00:00:00 | Linear Connectivity Reveals Generalization Strategies | Jeevesh Juneja; Rachit Bansal; Kyunghyun Cho; João Sedoc; Naomi Saphra | It is widely accepted in the mode connectivity literature that when two neural networks are trained similarly on the same data, they are connected by a path through parameter space over which test set accuracy is maintained. Under some circumstances, including transfer learning from pretrained models, these paths are presumed to be linear. In contrast to existing results, we find that among text classifiers (trained on MNLI, QQP, and CoLA), some pairs of finetuned models have large barriers of increasing loss on the linear paths between them. On each task, we find distinct clusters of models which are linearly connected on the test loss surface, but are disconnected from models outside the cluster -- models that occupy separate basins on the surface. By measuring performance on specially-crafted diagnostic datasets, we find that these clusters correspond to different generalization strategies: one cluster behaves like a bag of words model under domain shift, while another cluster uses syntactic heuristics. Our work demonstrates how the geometry of the loss surface can guide models towards different heuristic functions. | cs.LG; cs.CL | 1 | ICLR-2023 | [
-0.7197425961494446,
-0.6901289224624634,
0.055553138256073,
-0.6503341197967529,
-0.1861478090286255,
-0.8201038241386414,
0.5745344161987305,
0.5459933280944824,
-1.1753722429275513,
-0.9225051999092102,
-0.5844964385032654,
1.0854769945144653,
0.4823956787586212,
0.8999975919723511,
0... | [
-0.12781234085559845,
0.6461681127548218,
-0.5364290475845337,
-0.23359811305999756,
-0.26053372025489807,
-0.16749095916748047,
0.4226406216621399,
-0.3371264338493347,
-0.43719547986984253,
-0.1966162472963333,
-0.026634138077497482,
-0.4375848174095154,
-0.14888229966163635,
0.189819559... | {} | null | null |
http://arxiv.org/abs/2205.12532v2 | 2022-05-25T00:00:00 | Skill Machines: Temporal Logic Skill Composition in Reinforcement Learning | Geraud Nangue Tasse; Devon Jarvis; Steven James; Benjamin Rosman | It is desirable for an agent to be able to solve a rich variety of problems that can be specified through language in the same environment. A popular approach towards obtaining such agents is to reuse skills learned in prior tasks to generalise compositionally to new ones. However, this is a challenging problem due to the curse of dimensionality induced by the combinatorially large number of ways high-level goals can be combined both logically and temporally in language. To address this problem, we propose a framework where an agent first learns a sufficient set of skill primitives to achieve all high-level goals in its environment. The agent can then flexibly compose them both logically and temporally to provably achieve temporal logic specifications in any regular language, such as regular fragments of linear temporal logic. This provides the agent with the ability to map from complex temporal logic task specifications to near-optimal behaviours zero-shot. We demonstrate this experimentally in a tabular setting, as well as in a high-dimensional video game and continuous control environment. Finally, we also demonstrate that the performance of skill machines can be improved with regular off-policy reinforcement learning algorithms when optimal behaviours are desired. | cs.LG; cs.LO | 1 | ICLR-2024 | [
-0.7212854027748108,
-0.6028793454170227,
0.44707775115966797,
-0.8107276558876038,
-0.21063527464866638,
0.11410379409790039,
0.3973718285560608,
1.166725754737854,
-0.49868881702423096,
-0.10827299952507019,
-1.3582549095153809,
0.9242770075798035,
0.4392755627632141,
-0.3560832142829895... | [
0.2823483943939209,
0.5841145515441895,
-0.39620253443717957,
0.508872926235199,
-0.31023409962654114,
-0.40412217378616333,
0.5925852060317993,
-0.3977239429950714,
-0.1895177811384201,
0.13353180885314941,
0.17504039406776428,
-0.04893704131245613,
-0.2780900299549103,
0.1270028054714203... | {"http://arxiv.org/abs/2303.00001v1": 0.9399707317352295, "http://arxiv.org/abs/2211.10445v3": 0.9348948001861572, "http://arxiv.org/abs/2210.15906v4": 0.930727481842041, "http://arxiv.org/abs/2211.13350v2": 0.9304581880569458, "http://arxiv.org/abs/2303.16189v1": 0.9300497770309448, "http://arxiv.org/abs/2207.08258v3": 0.9297852516174316, "http://arxiv.org/abs/2211.00863v3": 0.929509699344635, "http://arxiv.org/abs/2201.08115v2": 0.9289221167564392, "http://arxiv.org/abs/2305.04073v2": 0.928642213344574, "http://arxiv.org/abs/2303.01728v2": 0.9281070828437805} | 0.939971 | 0.931107 |
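The logical part of the composition above builds on a known trick for composing learned skills: conjunction and disjunction of goals correspond, approximately, to elementwise min and max over the skills' Q-values. A hedged tabular toy sketch; the temporal part, which the paper handles with automaton-like machines over such primitives, is omitted.

```python
import numpy as np

def q_and(*qs):
    """Compose skills for 'achieve all goals': elementwise min of Q-tables."""
    return np.minimum.reduce(qs)

def q_or(*qs):
    """Compose skills for 'achieve any goal': elementwise max of Q-tables."""
    return np.maximum.reduce(qs)

# Toy: two skill primitives over 4 states x 3 actions.
rng = np.random.default_rng(0)
q_blue, q_square = rng.random((4, 3)), rng.random((4, 3))
q_task = q_and(q_blue, q_square)            # "blue AND square"
action = int(np.argmax(q_task[2]))          # zero-shot greedy action in state 2
```

This is what makes the zero-shot claim possible: no new Q-function is trained for the composed task, only combined at lookup time.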
http://arxiv.org/abs/2205.12904v2 | 2022-05-25T00:00:00 | Analyzing Tree Architectures in Ensembles via Neural Tangent Kernel | Ryuichi Kanoh; Mahito Sugiyama | A soft tree is an actively studied variant of a decision tree that updates splitting rules using the gradient method. Although soft trees can take various architectures, their impact is not theoretically well known. In this paper, we formulate and analyze the Neural Tangent Kernel (NTK) induced by soft tree ensembles for arbitrary tree architectures. This kernel leads to the remarkable finding that only the number of leaves at each depth is relevant for the tree architecture in ensemble learning with an infinite number of trees. In other words, if the number of leaves at each depth is fixed, the training behavior in function space and the generalization performance are exactly the same across different tree architectures, even if they are not isomorphic. We also show that the NTK of asymmetric trees like decision lists does not degenerate when they get infinitely deep. This is in contrast to the perfect binary trees, whose NTK is known to degenerate and leads to worse generalization performance for deeper trees. | cs.LG; stat.ML | 1 | ICLR-2023 | [
-1.0906221866607666,
-1.177567958831787,
0.31314197182655334,
-0.5874701142311096,
-0.39415162801742554,
0.07468591630458832,
-0.3592616021633148,
0.1144590675830841,
-0.5568801760673523,
-0.21217699348926544,
-0.683611273765564,
1.0361932516098022,
0.38013434410095215,
0.4150477945804596,... | [
0.13477769494056702,
0.538673996925354,
-0.6037991046905518,
-0.38462552428245544,
0.03276838734745979,
0.06109899654984474,
-0.03447660058736801,
-0.404778391122818,
-0.02991269901394844,
-0.042513929307460785,
0.3331781029701233,
-0.11977718770503998,
-0.01412133127450943,
0.019521279260... | {} | null | null |
http://arxiv.org/abs/2205.13134v2 | 2022-05-26T00:00:00 | Symbolic Physics Learner: Discovering governing equations via Monte Carlo tree search | Fangzheng Sun; Yang Liu; Jian-Xun Wang; Hao Sun | Nonlinear dynamics is ubiquitous in nature and commonly seen in various science and engineering disciplines. Distilling analytical expressions that govern nonlinear dynamics from limited data remains vital but challenging. To tackle this fundamental issue, we propose a novel Symbolic Physics Learner (SPL) machine to discover the mathematical structure of nonlinear dynamics. The key concept is to interpret mathematical operations and system state variables by computational rules and symbols, establish symbolic reasoning of mathematical formulas via expression trees, and employ a Monte Carlo tree search (MCTS) agent to explore optimal expression trees based on measurement data. The MCTS agent obtains an optimistic selection policy through the traversal of expression trees, featuring the one that maps to the arithmetic expression of underlying physics. Salient features of the proposed framework include search flexibility and enforcement of parsimony for discovered equations. The efficacy and superiority of the SPL machine are demonstrated by numerical examples, compared with state-of-the-art baselines. | cs.AI; cs.LG; cs.SC; nlin.CD; physics.comp-ph | 1 | ICLR-2023 | [
-1.3596664667129517,
-0.3743826150894165,
0.5585790872573853,
0.3531583547592163,
-0.3796839416027069,
0.21798424422740936,
-0.022174667567014694,
-0.7779881954193115,
0.07394260168075562,
0.7473674416542053,
-1.2230368852615356,
0.7563616037368774,
-0.9822596311569214,
-0.0264329314231872... | [
-0.08421863615512848,
0.2921503484249115,
-0.3168520927429199,
0.3513951599597931,
-0.03412679582834244,
0.04053061828017235,
0.28555142879486084,
-0.6335893273353577,
-0.2960810363292694,
0.3127645254135132,
-0.2903277277946472,
-0.4963887333869934,
-0.3677537143230438,
0.0790409669280052... | {} | null | null |
http://arxiv.org/abs/2205.13452v2 | 2022-05-26T00:00:00 | Continual evaluation for lifelong learning: Identifying the stability gap | Matthias De Lange; Gido van de Ven; Tinne Tuytelaars | Time-dependent data-generating distributions have proven to be difficult for gradient-based training of neural networks, as the greedy updates result in catastrophic forgetting of previously learned knowledge. Despite the progress in the field of continual learning to overcome this forgetting, we show that a set of common state-of-the-art methods still suffers from substantial forgetting upon starting to learn new tasks, except that this forgetting is temporary and followed by a phase of performance recovery. We refer to this intriguing but potentially problematic phenomenon as the stability gap. The stability gap had likely remained under the radar due to the field's standard practice of evaluating continual learning models only after each task. Instead, we establish a framework for continual evaluation that uses per-iteration evaluation and we define a new set of metrics to quantify worst-case performance. Empirically we show that experience replay, constraint-based replay, knowledge-distillation, and parameter regularization methods are all prone to the stability gap; and that the stability gap can be observed in class-, task-, and domain-incremental learning benchmarks. Additionally, a controlled experiment shows that the stability gap increases when tasks are more dissimilar. Finally, by disentangling gradients into plasticity and stability components, we propose a conceptual explanation for the stability gap. | cs.LG; cs.AI; cs.CV | 1 | ICLR-2023 | [
-0.7146638631820679,
-1.3293198347091675,
0.2973570227622986,
-0.6207621693611145,
-0.3138584494590759,
-0.40549665689468384,
0.24191007018089294,
0.4052330553531647,
-0.957070529460907,
0.006195388734340668,
-1.0740106105804443,
0.858589768409729,
0.5117626786231995,
0.5690957307815552,
... | [
0.014703668653964996,
0.31096887588500977,
-0.28420504927635193,
-0.031337521970272064,
-0.09203311800956726,
-0.03447825461626053,
0.5716508030891418,
-0.3608984351158142,
-0.7880131006240845,
0.37475305795669556,
-0.440434992313385,
-0.23570598661899567,
0.15144513547420502,
0.2019073516... | {} | null | null |
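The evaluation protocol proposed above is procedural and easy to sketch: probe all previously seen tasks after every gradient update and track worst-case accuracy, so a transient drop cannot hide behind end-of-task averages. A hedged skeleton with hypothetical `update` and `evaluate` callables and a simple dict-based task container:

```python
def continual_eval(tasks, update, evaluate):
    """Per-iteration continual evaluation: after *every* update step,
    probe all earlier tasks and record the worst accuracy seen so far.
    End-of-task evaluation would miss the transient drop (stability gap)."""
    min_acc, seen = {}, []
    for task in tasks:
        seen.append(task)
        for batch in task["train_batches"]:
            update(batch)
            for prev in seen[:-1]:
                acc = evaluate(prev)
                min_acc[prev["name"]] = min(min_acc.get(prev["name"], 1.0), acc)
    return min_acc
```

In practice one would evaluate on small held-out probe sets to keep the per-iteration cost manageable.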
http://arxiv.org/abs/2205.13589v3 | 2022-05-26T00:00:00 | Pessimism in the Face of Confounders: Provably Efficient Offline Reinforcement Learning in Partially Observable Markov Decision Processes | Miao Lu; Yifei Min; Zhaoran Wang; Zhuoran Yang | We study offline reinforcement learning (RL) in partially observable Markov decision processes. In particular, we aim to learn an optimal policy from a dataset collected by a behavior policy which possibly depends on the latent state. Such a dataset is confounded in the sense that the latent state simultaneously affects the action and the observation, which is prohibitive for existing offline RL algorithms. To this end, we propose the \underline{P}roxy variable \underline{P}essimistic \underline{P}olicy \underline{O}ptimization (\texttt{P3O}) algorithm, which addresses the confounding bias and the distributional shift between the optimal and behavior policies in the context of general function approximation. At the core of \texttt{P3O} is a coupled sequence of pessimistic confidence regions constructed via proximal causal inference, which is formulated as minimax estimation. Under a partial coverage assumption on the confounded dataset, we prove that \texttt{P3O} achieves a $n^{-1/2}$-suboptimality, where $n$ is the number of trajectories in the dataset. To the best of our knowledge, \texttt{P3O} is the first provably efficient offline RL algorithm for POMDPs with a confounded dataset. | cs.LG; cs.AI; math.ST; stat.ME; stat.ML; stat.TH | 1 | ICLR-2023 | [
-1.4243766069412231,
-0.00583784282207489,
-0.657760739326477,
-0.5435109734535217,
-0.5795608162879944,
0.44241178035736084,
0.13885876536369324,
1.2619298696517944,
-0.062324412167072296,
0.6544426083564758,
-1.1460225582122803,
0.7861517071723938,
-0.5226971507072449,
0.0314501710236072... | [
0.03532255068421364,
1.156632900238037,
-1.1739330291748047,
0.5336315035820007,
-0.4586728811264038,
-0.15403273701667786,
0.2946634888648987,
0.11393161118030548,
-0.49221324920654297,
0.5373330116271973,
0.512503981590271,
-0.1790286749601364,
-0.11751867830753326,
0.472131609916687,
... | {} | null | null |
http://arxiv.org/abs/2205.13577v2 | 2022-05-26T00:00:00 | Understanding new tasks through the lens of training data via exponential tilting | Subha Maity; Mikhail Yurochkin; Moulinath Banerjee; Yuekai Sun | Deploying machine learning models to new tasks is a major challenge despite the large size of the modern training datasets. However, it is conceivable that the training data can be reweighted to be more representative of the new (target) task. We consider the problem of reweighting the training samples to gain insights into the distribution of the target task. Specifically, we formulate a distribution shift model based on the exponential tilt assumption and learn train data importance weights minimizing the KL divergence between labeled train and unlabeled target datasets. The learned train data weights can then be used for downstream tasks such as target performance evaluation, fine-tuning, and model selection. We demonstrate the efficacy of our method on Waterbirds and Breeds benchmarks. | cs.LG; stat.ME; stat.ML | 1 | ICLR-2023 | [
-1.056154727935791,
-1.1886804103851318,
0.13030537962913513,
-0.7132370471954346,
-0.15553240478038788,
-0.2671620547771454,
0.38312631845474243,
0.04092337563633919,
-1.1600688695907593,
-0.2661728262901306,
-0.4682733118534088,
1.352581262588501,
0.8011150360107422,
0.5988891124725342,
... | [
0.08260403573513031,
0.368013471364975,
-0.5024916529655457,
-0.6628086566925049,
0.07212382555007935,
-0.043631717562675476,
0.4874752461910248,
-0.6470652222633362,
-0.7111029028892517,
-0.5060319900512695,
0.0895402655005455,
0.531801164150238,
0.21096919476985931,
0.3247218728065491,
... | {} | null | null |
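Under the exponential tilt assumption above, the importance weights are log-linear in some feature map, $w(x) \propto \exp(\theta^\top \phi(x))$, and fitting $\theta$ amounts to matching tilted source feature means to target means. The NumPy sketch below fits that convex objective by plain gradient descent; it is a moment-matching illustration under assumed Gaussian toy features, not the paper's exact KL estimation procedure.

```python
import numpy as np

def fit_tilt(phi_src, phi_tgt, lr=0.5, steps=500):
    """Fit theta so the tilted source weights w_i ∝ exp(theta · phi_i)
    reproduce the target feature means. The gradient of the convex dual
    logsumexp(phi·theta) - theta·mean(phi_tgt) is (tilted mean - target mean)."""
    theta = np.zeros(phi_src.shape[1])
    mu_tgt = phi_tgt.mean(axis=0)
    for _ in range(steps):
        logits = phi_src @ theta
        w = np.exp(logits - logits.max())
        w /= w.sum()                          # normalized tilt weights
        theta -= lr * (phi_src.T @ w - mu_tgt)
    return theta, w

# Toy covariate shift: the target mean is shifted by +0.5 per feature.
rng = np.random.default_rng(0)
src, tgt = rng.normal(0.0, 1.0, (2000, 3)), rng.normal(0.5, 1.0, (500, 3))
theta, w = fit_tilt(src, tgt)
print(np.average(src, axis=0, weights=w))     # ~= tgt.mean(axis=0)
```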
http://arxiv.org/abs/2205.13349v4 | 2022-05-26T00:00:00 | Learning What and Where: Disentangling Location and Identity Tracking Without Supervision | Manuel Traub; Sebastian Otte; Tobias Menge; Matthias Karlbauer; Jannik Thümmel; Martin V. Butz | Our brain can almost effortlessly decompose visual data streams into background and salient objects. Moreover, it can anticipate object motion and interactions, which are crucial abilities for conceptual planning and reasoning. Recent object reasoning datasets, such as CATER, have revealed fundamental shortcomings of current vision-based AI systems, particularly when targeting explicit object representations, object permanence, and object reasoning. Here we introduce a self-supervised LOCation and Identity tracking system (Loci), which excels on the CATER tracking challenge. Inspired by the dorsal and ventral pathways in the brain, Loci tackles the binding problem by processing separate, slot-wise encodings of `what' and `where'. Loci's predictive coding-like processing encourages active error minimization, such that individual slots tend to encode individual objects. Interactions between objects and object dynamics are processed in the disentangled latent space. Truncated backpropagation through time combined with forward eligibility accumulation significantly speeds up learning and improves memory efficiency. Besides exhibiting superior performance in current benchmarks, Loci effectively extracts objects from video streams and separates them into location and Gestalt components. We believe that this separation offers a representation that will facilitate effective planning and reasoning on conceptual levels. | cs.CV | 1 | ICLR-2023 | [
0.19276325404644012,
-0.05307720974087715,
0.7127096056938171,
-0.7073192000389099,
0.6021245121955872,
0.2737361788749695,
0.5933112502098083,
0.30035915970802307,
-0.27201807498931885,
-0.39053672552108765,
-0.8330574631690979,
0.46635663509368896,
0.5654712319374084,
-0.0311737321317195... | [
0.005456820130348206,
0.5359105467796326,
-0.6936123371124268,
0.07035473734140396,
0.3637021780014038,
0.03966015949845314,
0.7743812799453735,
-0.1847369372844696,
-0.6363155841827393,
-0.4662218689918518,
0.3782869279384613,
0.42423683404922485,
-0.13011151552200317,
0.06060982495546341... | {} | null | null |
http://arxiv.org/abs/2205.13521v2 | 2022-05-26T00:00:00 | Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality | Tom Zahavy; Yannick Schroecker; Feryal Behbahani; Kate Baumli; Sebastian Flennerhag; Shaobo Hou; Satinder Singh | Finding different solutions to the same problem is a key aspect of intelligence associated with creativity and adaptation to novel situations. In reinforcement learning, a set of diverse policies can be useful for exploration, transfer, hierarchy, and robustness. We propose DOMiNO, a method for Diversity Optimization Maintaining Near Optimality. We formalize the problem as a Constrained Markov Decision Process where the objective is to find diverse policies, measured by the distance between the state occupancies of the policies in the set, while remaining near-optimal with respect to the extrinsic reward. We demonstrate that the method can discover diverse and meaningful behaviors in various domains, such as different locomotion patterns in the DeepMind Control Suite. We perform extensive analysis of our approach, compare it with other multi-objective baselines, demonstrate that we can control both the quality and the diversity of the set via interpretable hyperparameters, and show that the discovered set is robust to perturbations. | cs.AI; cs.LG | 1 | ICLR-2023 | [
-0.937624454498291, -1.0897635221481323, 0.009772456251084805, -0.2639879882335663, -0.23780620098114014, -0.10107685625553131, -0.15013176202774048, 0.5332886576652527, -0.19774863123893738, 0.28305119276046753, -0.4425467550754547, 1.094197392463684, 0.10945163667201996, 0.39411863684654... | [0.0798763781785965, -0.0774272084236145, -0.7305448651313782, 0.5964268445968628, 0.3103058636188507, -0.24902142584323883, 0.15932855010032654, -0.38721784949302673, -0.22638985514640808, 0.2909354865550995, 0.28270310163497925, -0.08869050443172455, -0.2759159803390503, 0.337050795555114... | {} | null | null
http://arxiv.org/abs/2205.13531v2 | 2022-05-26T00:00:00 | Learning ReLU networks to high uniform accuracy is intractable | Julius Berner; Philipp Grohs; Felix Voigtlaender | Statistical learning theory provides bounds on the necessary number of training samples needed to reach a prescribed accuracy in a learning problem formulated over a given target class. This accuracy is typically measured in terms of a generalization error, that is, an expected value of a given loss function. However, for several applications -- for example in a security-critical context or for problems in the computational sciences -- accuracy in this sense is not sufficient. In such cases, one would like to have guarantees for high accuracy on every input value, that is, with respect to the uniform norm. In this paper we precisely quantify the number of training samples needed for any conceivable training algorithm to guarantee a given uniform accuracy on any learning problem formulated over target classes containing (or consisting of) ReLU neural networks of a prescribed architecture. We prove that, under very general assumptions, the minimal number of training samples for this task scales exponentially both in the depth and the input dimension of the network architecture. | cs.LG; stat.ML | 1 | ICLR-2023 | [
-0.9983808994293213, -0.5532786846160889, -0.27510762214660645, -0.17434796690940857, -0.9394876956939697, 0.04690145328640938, -0.030361123383045197, 0.9417688846588135, -0.33831527829170227, -0.03918128460645676, -1.176712155342102, 0.7562044858932495, 0.34944948554039, 0.479327887296676... | [0.6320472359657288, 0.9111616611480713, -0.7182450890541077, -0.032951951026916504, -0.4633636772632599, 0.4793206751346588, -0.005113903433084488, 0.04660709202289581, -0.40672066807746887, 0.29762735962867737, -0.49303528666496277, -0.1003122627735138, 0.03406770899891853, -0.11671558022... | {} | null | null
http://arxiv.org/abs/2205.13684v2 | 2022-05-27T00:00:00 | Learning with Stochastic Orders | Carles Domingo-Enrich; Yair Schiff; Youssef Mroueh | Learning high-dimensional distributions is often done with explicit likelihood modeling or implicit modeling via minimizing integral probability metrics (IPMs). In this paper, we expand this learning paradigm to stochastic orders, namely, the convex or Choquet order between probability measures. Towards this end, exploiting the relation between convex orders and optimal transport, we introduce the Choquet-Toland distance between probability measures, that can be used as a drop-in replacement for IPMs. We also introduce the Variational Dominance Criterion (VDC) to learn probability measures with dominance constraints, that encode the desired stochastic order between the learned measure and a known baseline. We analyze both quantities and show that they suffer from the curse of dimensionality and propose surrogates via input convex maxout networks (ICMNs), that enjoy parametric rates. We provide a min-max framework for learning with stochastic orders and validate it experimentally on synthetic and high-dimensional image generation, with promising results. Finally, our ICMNs class of convex functions and its derived Rademacher Complexity are of independent interest beyond their application in convex orders. | stat.ML; cs.LG; math.PR; math.ST; stat.TH | 1 | ICLR-2023 | [
-1.1337770223617554, -0.18734490871429443, -0.13876459002494812, 0.13203030824661255, -1.466272234916687, 0.18416503071784973, 0.2134227603673935, 0.19316494464874268, 0.04304305464029312, -0.31498318910598755, -0.7380477786064148, 1.1345127820968628, 0.028402943164110184, 0.30703431367874... | [0.29112571477890015, 0.8144676089286804, -0.46526041626930237, 0.19888827204704285, -0.675159752368927, -0.05740588530898094, 0.40869423747062683, -0.30936670303344727, -0.14031465351581573, -0.31984519958496094, -0.0019249245524406433, 0.43270784616470337, -0.7720184326171875, 0.140605032... | {} | null | null
http://arxiv.org/abs/2205.14284v2 | 2022-05-28T00:00:00 | Provably Auditing Ordinary Least Squares in Low Dimensions | Ankur Moitra; Dhruv Rohatgi | Measuring the stability of conclusions derived from Ordinary Least Squares linear regression is critically important, but most metrics either only measure local stability (i.e. against infinitesimal changes in the data), or are only interpretable under statistical assumptions. Recent work proposes a simple, global, finite-sample stability metric: the minimum number of samples that need to be removed so that rerunning the analysis overturns the conclusion, specifically meaning that the sign of a particular coefficient of the estimated regressor changes. However, besides the trivial exponential-time algorithm, the only approach for computing this metric is a greedy heuristic that lacks provable guarantees under reasonable, verifiable assumptions; the heuristic provides a loose upper bound on the stability and also cannot certify lower bounds on it. We show that in the low-dimensional regime where the number of covariates is a constant but the number of samples is large, there are efficient algorithms for provably estimating (a fractional version of) this metric. Applying our algorithms to the Boston Housing dataset, we exhibit regression analyses where we can estimate the stability up to a factor of $3$ better than the greedy heuristic, and analyses where we can certify stability to dropping even a majority of the samples. | stat.ML; cs.DS; cs.LG; econ.EM | 1 | ICLR-2023 | [
-2.165710926055908, -0.8084642291069031, -0.46500012278556824, -0.9494779109954834, -1.2582461833953857, -0.06278927624225616, 0.023201605305075645, 0.2023792266845703, -0.33566054701805115, 0.5881240367889404, -0.592362105846405, 0.923545777797699, 0.4049126207828522, -0.03278111666440964... | [-0.09477119147777557, 0.9881699681282043, -0.4553230106830597, 0.07586663216352463, -0.6143300533294678, 0.03570239245891571, 0.11524062603712082, -0.17820937931537628, 0.32421669363975525, 0.15260285139083862, 0.07830723375082016, -0.22315643727779388, 0.4534130394458771, 0.28845971822738... | {} | null | null
http://arxiv.org/abs/2205.14300v2 | 2022-05-28T00:00:00 | Tuning Frequency Bias in Neural Network Training with Nonuniform Data | Annan Yu; Yunan Yang; Alex Townsend | Small generalization errors of over-parameterized neural networks (NNs) can be partially explained by the frequency biasing phenomenon, where gradient-based algorithms minimize the low-frequency misfit before reducing the high-frequency residuals. Using the Neural Tangent Kernel (NTK), one can provide a theoretically rigorous analysis for training where data are drawn from constant or piecewise-constant probability densities. Since most training data sets are not drawn from such distributions, we use the NTK model and a data-dependent quadrature rule to theoretically quantify the frequency biasing of NN training given fully nonuniform data. By replacing the loss function with a carefully selected Sobolev norm, we can further amplify, dampen, counterbalance, or reverse the intrinsic frequency biasing in NN training. | cs.LG; 68T07, 68Q32 | 1 | ICLR-2023 | [
-0.7417062520980835, -0.8231607675552368, 0.0751638188958168, -0.7166264653205872, -0.6328184008598328, -0.14142510294914246, -0.21370403468608856, 0.15307307243347168, -0.4265042841434479, 0.057317234575748444, -0.7629517316818237, 0.9435660243034363, 0.5144563317298889, 0.596741199493408... | [0.21811836957931519, 1.095847725868225, -0.745312511920929, 0.11245876550674438, -0.43877750635147095, 0.4860333800315857, -0.4287407398223877, -0.5746639966964722, -0.3081344664096832, -0.432018518447876, 0.17576585710048676, 0.27130061388015747, 0.19306130707263947, -0.05378407984972, ... | {} | null | null
http://arxiv.org/abs/2205.14309v2 | 2022-05-28T00:00:00 | Federated Neural Bandits | Zhongxiang Dai; Yao Shu; Arun Verma; Flint Xiaofeng Fan; Bryan Kian Hsiang Low; Patrick Jaillet | Recent works on neural contextual bandits have achieved compelling performances due to their ability to leverage the strong representation power of neural networks (NNs) for reward prediction. Many applications of contextual bandits involve multiple agents who collaborate without sharing raw observations, thus giving rise to the setting of federated contextual bandits. Existing works on federated contextual bandits rely on linear or kernelized bandits, which may fall short when modeling complex real-world reward functions. So, this paper introduces the federated neural-upper confidence bound (FN-UCB) algorithm. To better exploit the federated setting, FN-UCB adopts a weighted combination of two UCBs: $\text{UCB}^{a}$ allows every agent to additionally use the observations from the other agents to accelerate exploration (without sharing raw observations), while $\text{UCB}^{b}$ uses an NN with aggregated parameters for reward prediction in a similar way to federated averaging for supervised learning. Notably, the weight between the two UCBs required by our theoretical analysis is amenable to an interesting interpretation, which emphasizes $\text{UCB}^{a}$ initially for accelerated exploration and relies more on $\text{UCB}^{b}$ later after enough observations have been collected to train the NNs for accurate reward prediction (i.e., reliable exploitation). We prove sub-linear upper bounds on both the cumulative regret and the number of communication rounds of FN-UCB, and empirically demonstrate its competitive performance. | cs.LG; cs.AI | 1 | ICLR-2023 | [
-1.016083002090454, -0.7721009850502014, -0.5955807566642761, -0.4896470904350281, -0.6142999529838562, 0.39405885338783264, 0.017745913937687874, 0.8677598834037781, -0.41908884048461914, 0.537367582321167, -0.6649105548858643, 1.0569980144500732, -0.23870635032653809, 0.7753771543502808, ... | [0.3096797466278076, 0.5947978496551514, -0.8415499925613403, 0.08475916087627411, -0.635745644569397, 0.2772824466228485, 0.6936740875244141, -0.2494802474975586, -0.4874049127101898, 0.6112169623374939, 0.13376212120056152, -0.09359581768512726, -0.02502804435789585, 0.2240229994058609, ... | {} | null | null
http://arxiv.org/abs/2205.14691v3 | 2022-05-29T00:00:00 | On the Robustness of Safe Reinforcement Learning under Observational Perturbations | Zuxin Liu; Zijian Guo; Zhepeng Cen; Huan Zhang; Jie Tan; Bo Li; Ding Zhao | Safe reinforcement learning (RL) trains a policy to maximize the task reward while satisfying safety constraints. While prior works focus on the performance optimality, we find that the optimal solutions of many safe RL problems are not robust and safe against carefully designed observational perturbations. We formally analyze the unique properties of designing effective observational adversarial attackers in the safe RL setting. We show that baseline adversarial attack techniques for standard RL tasks are not always effective for safe RL and propose two new approaches - one maximizes the cost and the other maximizes the reward. One interesting and counter-intuitive finding is that the maximum reward attack is strong, as it can both induce unsafe behaviors and make the attack stealthy by maintaining the reward. We further propose a robust training framework for safe RL and evaluate it via comprehensive experiments. This paper provides a pioneer work to investigate the safety and robustness of RL under observational attacks for future safe RL studies. Code is available at: \url{https://github.com/liuzuxin/safe-rl-robustness} | cs.LG; cs.AI; cs.RO | 1 | ICLR-2023 | [
-1.280143141746521, -0.9209744930267334, -0.24285578727722168, -0.3769587278366089, -0.7340792417526245, -0.17527273297309875, 0.04752320423722267, 0.6227167248725891, -0.4671068787574768, 0.10300923883914948, -1.2942118644714355, 0.5842669010162354, -0.028107158839702606, 0.21728321909904... | [0.019267041236162186, 0.6318557262420654, -0.5067551136016846, 0.5311591625213623, -0.2857309877872467, -0.3518381118774414, 0.7060613036155701, -0.47756481170654297, -0.4605151116847992, -0.01705285906791687, -0.20776213705539703, -0.09703660011291504, 0.21274031698703766, 0.1058451980352... | {} | null | null
http://arxiv.org/abs/2205.14583v2 | 2022-05-29T00:00:00 | Learning Locality and Isotropy in Dialogue Modeling | Han Wu; Haochen Tan; Mingjie Zhan; Gangming Zhao; Shaoqing Lu; Ding Liang; Linqi Song | Existing dialogue modeling methods have achieved promising performance on various dialogue tasks with the aid of Transformer and the large-scale pre-trained language models. However, some recent studies revealed that the context representations produced by these methods suffer the problem of anisotropy. In this paper, we find that the generated representations are also not conversational, losing the conversation structure information during the context modeling stage. To this end, we identify two properties in dialogue modeling, i.e., locality and isotropy, and present a simple method for dialogue representation calibration, namely SimDRC, to build isotropic and conversational feature spaces. Experimental results show that our approach significantly outperforms the current state-of-the-art models on three dialogue tasks across the automatic and human evaluation metrics. More in-depth analyses further confirm the effectiveness of our proposed approach. | cs.CL; cs.IR; cs.LG | 1 | ICLR-2023 | [
-0.8074068427085876, 0.029915861785411835, -0.24289163947105408, -1.2895584106445312, 0.21134893596172333, -0.2772541642189026, 1.3341586589813232, 0.4807705581188202, -0.9887701272964478, -1.2154905796051025, -0.10924968123435974, 0.31351470947265625, 1.0250564813613892, 1.167970657348632... | [-0.0907333493232727, 0.8887692093849182, -0.406791627407074, -0.5930047631263733, -0.2778764069080353, -0.4712899625301361, 0.8055241107940674, -0.5613390803337097, 0.12544062733650208, -0.1874348372220993, 1.5045902729034424, -0.12372604757547379, 0.2003561556339264, 0.011366365477442741, ... | {} | null | null
http://arxiv.org/abs/2205.14589v2 | 2022-05-29T00:00:00 | Masked Distillation with Receptive Tokens | Tao Huang; Yuan Zhang; Shan You; Fei Wang; Chen Qian; Jian Cao; Chang Xu | Distilling from the feature maps can be fairly effective for dense prediction tasks since both the feature discriminability and localization priors can be well transferred. However, not every pixel contributes equally to the performance, and a good student should learn from what really matters to the teacher. In this paper, we introduce a learnable embedding dubbed receptive token to localize those pixels of interests (PoIs) in the feature map, with a distillation mask generated via pixel-wise attention. Then the distillation will be performed on the mask via pixel-wise reconstruction. In this way, a distillation mask actually indicates a pattern of pixel dependencies within feature maps of teacher. We thus adopt multiple receptive tokens to investigate more sophisticated and informative pixel dependencies to further enhance the distillation. To obtain a group of masks, the receptive tokens are learned via the regular task loss but with teacher fixed, and we also leverage a Dice loss to enrich the diversity of learned masks. Our method dubbed MasKD is simple and practical, and needs no priors of tasks in application. Experiments show that our MasKD can achieve state-of-the-art performance consistently on object detection and semantic segmentation benchmarks. Code is available at: https://github.com/hunto/MasKD . | cs.CV; cs.LG | 1 | ICLR-2023 | [
-0.7596135139465332, -0.21080535650253296, -0.20571818947792053, -0.6691800951957703, 0.03949170187115669, 0.1657438725233078, 0.21672701835632324, 0.5449299216270447, -0.7686312198638916, -0.8287451267242432, -0.1983526200056076, 1.7623441219329834, 0.910415530204773, 0.20013676583766937, ... | [0.15424193441867828, 1.082475185394287, -0.3637847900390625, -0.06191565841436386, -0.5796166062355042, 0.2164328694343567, 0.5639988780021667, 0.0287824347615242, -0.5083324313163757, -0.46276289224624634, 0.534385621547699, 0.582499623298645, 0.5530402064323425, 0.18581537902355194, -0... | {} | null | null
http://arxiv.org/abs/2205.15419v3 | 2022-05-30T00:00:00 | Fool SHAP with Stealthily Biased Sampling | Gabriel Laberge; Ulrich Aïvodji; Satoshi Hara; Mario Marchand; Foutse Khomh | SHAP explanations aim at identifying which features contribute the most to the difference in model prediction at a specific input versus a background distribution. Recent studies have shown that they can be manipulated by malicious adversaries to produce arbitrary desired explanations. However, existing attacks focus solely on altering the black-box model itself. In this paper, we propose a complementary family of attacks that leave the model intact and manipulate SHAP explanations using stealthily biased sampling of the data points used to approximate expectations w.r.t the background distribution. In the context of fairness audit, we show that our attack can reduce the importance of a sensitive feature when explaining the difference in outcomes between groups while remaining undetected. More precisely, experiments performed on real-world datasets showed that our attack could yield up to a 90\% relative decrease in amplitude of the sensitive feature attribution. These results highlight the manipulability of SHAP explanations and encourage auditors to treat them with skepticism. | cs.LG | 1 | ICLR-2023 | [
-0.8703526258468628, -0.4591097831726074, -1.0737426280975342, -0.8248974680900574, -0.5076714754104614, -0.4569315016269684, 0.6352505683898926, 0.5925588607788086, -0.1857260763645172, -0.35547345876693726, -1.9708503484725952, 0.48022496700286865, -0.354808509349823, 0.22727392613887787... | [0.3175911605358124, 0.8347722887992859, -0.9677122235298157, 0.10633450746536255, -0.6342669725418091, -0.10197018086910248, 0.19737093150615692, -0.08079715073108673, 0.2712688744068146, 0.4214443564414978, -0.04269477352499962, -0.0764274150133133, 0.014639142900705338, -0.02355918288230... | {} | null | null
http://arxiv.org/abs/2205.15213v3 | 2022-05-30T00:00:00 | Backpropagation through Combinatorial Algorithms: Identity with Projection Works | Subham Sekhar Sahoo; Anselm Paulus; Marin Vlastelica; Vít Musil; Volodymyr Kuleshov; Georg Martius | Embedding discrete solvers as differentiable layers has given modern deep learning architectures combinatorial expressivity and discrete reasoning capabilities. The derivative of these solvers is zero or undefined, therefore a meaningful replacement is crucial for effective gradient-based learning. Prior works rely on smoothing the solver with input perturbations, relaxing the solver to continuous problems, or interpolating the loss landscape with techniques that typically require additional solver calls, introduce extra hyper-parameters, or compromise performance. We propose a principled approach to exploit the geometry of the discrete solution space to treat the solver as a negative identity on the backward pass and further provide a theoretical justification. Our experiments demonstrate that such a straightforward hyper-parameter-free approach is able to compete with previous more complex methods on numerous experiments such as backpropagation through discrete samplers, deep graph matching, and image retrieval. Furthermore, we substitute the previously proposed problem-specific and label-dependent margin with a generic regularization procedure that prevents cost collapse and increases robustness. | cs.LG | 1 | ICLR-2023 | [
-1.2804981470108032, -0.46449124813079834, 0.5569424033164978, 0.07159896939992905, -0.6863030195236206, -0.2694312334060669, 0.13758555054664612, 0.308399498462677, -0.5458016395568848, -0.5462611317634583, -0.4031689763069153, 1.4320284128189087, 0.3934503495693207, 0.6211336255073547, ... | [0.37358880043029785, 1.0703109502792358, -0.5417119264602661, 0.1524227410554886, -0.4576764404773712, 0.08214052021503448, 0.5970126390457153, -0.3686662018299103, -0.5212763547897339, -0.0728151798248291, -0.054525356739759445, 0.025547564029693604, -0.17145229876041412, -0.4371924102306... | {} | null | null
http://arxiv.org/abs/2205.14814v2 | 2022-05-30T00:00:00 | Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding | Tianyang Hu; Zhili Liu; Fengwei Zhou; Wenjia Wang; Weiran Huang | Contrastive learning, especially self-supervised contrastive learning (SSCL), has achieved great success in extracting powerful features from unlabeled data. In this work, we contribute to the theoretical understanding of SSCL and uncover its connection to the classic data visualization method, stochastic neighbor embedding (SNE), whose goal is to preserve pairwise distances. From the perspective of preserving neighboring information, SSCL can be viewed as a special case of SNE with the input space pairwise similarities specified by data augmentation. The established correspondence facilitates deeper theoretical understanding of learned features of SSCL, as well as methodological guidelines for practical improvement. Specifically, through the lens of SNE, we provide novel analysis on domain-agnostic augmentations, implicit bias and robustness of learned features. To illustrate the practical advantage, we demonstrate that the modifications from SNE to $t$-SNE can also be adopted in the SSCL setting, achieving significant improvement in both in-distribution and out-of-distribution generalization. | cs.LG; cs.AI; cs.CV | 1 | ICLR-2023 | [
-1.8214764595031738, -0.7871165871620178, -0.16373343765735626, -0.43142765760421753, -0.9790639281272888, 0.19952209293842316, 0.45086562633514404, -0.14932464063167572, -0.5170180797576904, -0.13984929025173187, -0.5350120663642883, 1.4393059015274048, 0.18572869896888733, 0.429413139820... | [-0.02590392529964447, 0.4745425879955292, -0.6444027423858643, -0.27179935574531555, -0.4719560742378235, -0.25455501675605774, 0.4574081599712372, -0.08383728563785553, -0.27385446429252625, -0.004942178726196289, 0.7596244812011719, 0.3689993619918823, 0.2225140482187271, 0.3265753090381... | {} | null | null
http://arxiv.org/abs/2205.15460v2 | 2022-05-30T00:00:00 | Critic Sequential Monte Carlo | Vasileios Lioutas; Jonathan Wilder Lavington; Justice Sefas; Matthew Niedoba; Yunpeng Liu; Berend Zwartsenberg; Setareh Dabiri; Frank Wood; Adam Scibior | We introduce CriticSMC, a new algorithm for planning as inference built from a composition of sequential Monte Carlo with learned Soft-Q function heuristic factors. These heuristic factors, obtained from parametric approximations of the marginal likelihood ahead, more effectively guide SMC towards the desired target distribution, which is particularly helpful for planning in environments with hard constraints placed sparsely in time. Compared with previous work, we modify the placement of such heuristic factors, which allows us to cheaply propose and evaluate large numbers of putative action particles, greatly increasing inference and planning efficiency. CriticSMC is compatible with informative priors, whose density function need not be known, and can be used as a model-free control algorithm. Our experiments on collision avoidance in a high-dimensional simulated driving task show that CriticSMC significantly reduces collision rates at a low computational cost while maintaining realism and diversity of driving behaviors across vehicles and environment scenarios. | stat.ML; cs.LG | 1 | ICLR-2023 | [
-1.4422191381454468, -0.434390664100647, -0.4752236008644104, -0.7259796857833862, -0.7133378982543945, 0.18738269805908203, 0.06468314677476883, 0.4285251498222351, -0.25638940930366516, 0.38345474004745483, -0.158117413520813, 0.5416680574417114, 0.11975298821926117, -0.23315462470054626... | [0.011981677263975143, 0.9057397246360779, -0.662622332572937, 0.5969035625457764, -0.4795962870121002, 0.21458089351654053, 0.6245073676109314, -0.14923039078712463, -0.371148943901062, -0.17646442353725433, 0.1984550952911377, -0.12267296761274338, 0.11127074062824249, 0.16159529983997345... | {} | null | null
http://arxiv.org/abs/2205.14977v3 | 2022-05-30T00:00:00 | Fast Nonlinear Vector Quantile Regression | Aviv A. Rosenberg; Sanketh Vedula; Yaniv Romano; Alex M. Bronstein | Quantile regression (QR) is a powerful tool for estimating one or more conditional quantiles of a target variable $\mathrm{Y}$ given explanatory features $\boldsymbol{\mathrm{X}}$. A limitation of QR is that it is only defined for scalar target variables, due to the formulation of its objective function, and since the notion of quantiles has no standard definition for multivariate distributions. Recently, vector quantile regression (VQR) was proposed as an extension of QR for vector-valued target variables, thanks to a meaningful generalization of the notion of quantiles to multivariate distributions via optimal transport. Despite its elegance, VQR is arguably not applicable in practice due to several limitations: (i) it assumes a linear model for the quantiles of the target $\boldsymbol{\mathrm{Y}}$ given the features $\boldsymbol{\mathrm{X}}$; (ii) its exact formulation is intractable even for modestly-sized problems in terms of target dimensions, number of regressed quantile levels, or number of features, and its relaxed dual formulation may violate the monotonicity of the estimated quantiles; (iii) no fast or scalable solvers for VQR currently exist. In this work we fully address these limitations, namely: (i) We extend VQR to the non-linear case, showing substantial improvement over linear VQR; (ii) We propose {vector monotone rearrangement}, a method which ensures the quantile functions estimated by VQR are monotone functions; (iii) We provide fast, GPU-accelerated solvers for linear and nonlinear VQR which maintain a fixed memory footprint, and demonstrate that they scale to millions of samples and thousands of quantile levels; (iv) We release an optimized python package of our solvers as to widespread the use of VQR in real-world applications. | stat.CO; cs.LG; stat.ML | 1 | ICLR-2023 | [
-1.6327217817306519, -0.6040539741516113, 0.5184046626091003, -0.13300037384033203, -1.431029200553894, 0.10042448341846466, -0.25815320014953613, 0.2618957757949829, -0.34542563557624817, 0.8439357280731201, -0.18492677807807922, 1.5839641094207764, -0.4092557430267334, -0.318856656551361... | [0.6630356907844543, 0.7112330198287964, -0.5763134360313416, -0.39217525720596313, -0.7915031313896179, 0.17862537503242493, 0.038017068058252335, 0.1908925324678421, -0.363900750875473, -0.05067652463912964, 0.09942775964736938, -0.22693435847759247, -0.10346347093582153, -0.1234940141439... | {} | null | null
http://arxiv.org/abs/2206.00484v2 | 2022-05-30T00:00:00 | DEP-RL: Embodied Exploration for Reinforcement Learning in Overactuated and Musculoskeletal Systems | Pierre Schumacher; Daniel Häufle; Dieter Büchler; Syn Schmitt; Georg Martius | Muscle-actuated organisms are capable of learning an unparalleled diversity of dexterous movements despite their vast amount of muscles. Reinforcement learning (RL) on large musculoskeletal models, however, has not been able to show similar performance. We conjecture that ineffective exploration in large overactuated action spaces is a key problem. This is supported by the finding that common exploration noise strategies are inadequate in synthetic examples of overactuated systems. We identify differential extrinsic plasticity (DEP), a method from the domain of self-organization, as being able to induce state-space covering exploration within seconds of interaction. By integrating DEP into RL, we achieve fast learning of reaching and locomotion in musculoskeletal systems, outperforming current approaches in all considered tasks in sample efficiency and robustness. | cs.RO; cs.LG | 1 | ICLR-2023 | [
-0.39149361848831177, -0.10106872022151947, -0.07060392200946808, 0.5557566285133362, -0.6086723208427429, 0.11727583408355713, 0.35471343994140625, -0.18525859713554382, -0.5303147435188293, 0.524200975894928, -0.8287230134010315, 0.7132254838943481, -0.5893133878707886, 0.285218030214309... | [0.3134993612766266, 1.0088618993759155, -0.3504999876022339, 1.1063913106918335, 0.03075038269162178, 0.030026495456695557, 0.0007287897169589996, -0.4467131793498993, -0.5363595485687256, 0.2217884212732315, 0.4772685468196869, 0.0655536875128746, -0.40093353390693665, 0.4896126687526703, ... | {} | null | null
http://arxiv.org/abs/2205.15269v2 | 2022-05-30T00:00:00 | Kernel Neural Optimal Transport | Alexander Korotin; Daniil Selikhanovych; Evgeny Burnaev | We study the Neural Optimal Transport (NOT) algorithm which uses the general optimal transport formulation and learns stochastic transport plans. We show that NOT with the weak quadratic cost might learn fake plans which are not optimal. To resolve this issue, we introduce kernel weak quadratic costs. We show that they provide improved theoretical guarantees and practical performance. We test NOT with kernel costs on the unpaired image-to-image translation task. | cs.LG; stat.ML | 1 | ICLR-2023 | [
-0.30354177951812744, -0.16688492894172668, -0.2511988878250122, -0.4872173070907593, -0.5311216711997986, 0.08398571610450745, -0.058016858994960785, 0.783957302570343, -0.6104702353477478, -0.649186372756958, -0.3999777138233185, 0.7829965949058533, 0.028642084449529648, 0.38652431964874... | [0.1551179587841034, 0.8141167163848877, -0.827250599861145, 0.01540759950876236, -0.6341642737388611, -0.04990019276738167, 0.388217568397522, -0.16098365187644958, -0.732616126537323, -0.2943362593650818, 0.2878589928150177, -0.09260458499193192, -0.05745454877614975, -0.01479489728808403... | {} | null | null
http://arxiv.org/abs/2205.15403v4 | 2022-05-30T00:00:00 | Neural Optimal Transport with General Cost Functionals | Arip Asadulaev; Alexander Korotin; Vage Egiazarian; Petr Mokrov; Evgeny Burnaev | We introduce a novel neural network-based algorithm to compute optimal transport (OT) plans for general cost functionals. In contrast to common Euclidean costs, i.e., $\ell^1$ or $\ell^2$, such functionals provide more flexibility and allow using auxiliary information, such as class labels, to construct the required transport map. Existing methods for general costs are discrete and have limitations in practice, i.e. they do not provide an out-of-sample estimation. We address the challenge of designing a continuous OT approach for general costs that generalizes to new data points in high-dimensional spaces, such as images. Additionally, we provide the theoretical error analysis for our recovered transport plans. As an application, we construct a cost functional to map data distributions while preserving the class-wise structure. | cs.LG | 1 | ICLR-2024 | [
-0.8882470726966858, -0.4520677924156189, -0.2742648422718048, -0.5317132472991943, -0.8934962153434753, 0.5453099608421326, 0.1064492017030716, 0.5801761150360107, -0.3211750388145447, 0.019418664276599884, -0.5837966203689575, 1.3656951189041138, 0.0000830627977848053, 0.1657154411077499... | [0.18186457455158234, 0.401561439037323, -0.6337741613388062, -0.41339555382728577, -0.44831445813179016, 0.4180833697319031, -0.0022620782256126404, -0.21530695259571075, -0.8247262239456177, -0.2154959887266159, 0.23067745566368103, 0.5251670479774475, -0.3517482280731201, -0.370125859975... | {"http://arxiv.org/abs/2201.12220v3": 0.9563877582550049, "http://arxiv.org/abs/2205.15269v2": 0.9426466226577759, "http://arxiv.org/abs/2205.13684v2": 0.9291688799858093, "http://arxiv.org/abs/2210.12153v3": 0.920760452747345, "http://arxiv.org/abs/2201.11945v3": 0.9146089553833008, "http://arxiv.org/abs/2210.07931v1": 0.9139165878295898, "http://arxiv.org/abs/2209.03003v1": 0.9136571884155273, "http://arxiv.org/abs/2209.13570v5": 0.9121048450469971, "http://arxiv.org/abs/2211.11719v2": 0.9105203151702881, "http://arxiv.org/abs/2205.09244v4": 0.9100257754325867} | 0.956388 | 0.92238
http://arxiv.org/abs/2205.15043v2 | 2022-05-30T00:00:00 | RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch | Yiqin Tan; Pihe Hu; Ling Pan; Jiatai Huang; Longbo Huang | Training deep reinforcement learning (DRL) models usually requires high computation costs. Therefore, compressing DRL models possesses immense potential for training acceleration and model deployment. However, existing methods that generate small models mainly adopt the knowledge distillation-based approach by iteratively training a dense network. As a result, the training process still demands massive computing resources. Indeed, sparse training from scratch in DRL has not been well explored and is particularly challenging due to non-stationarity in bootstrap training. In this work, we propose a novel sparse DRL training framework, "the Rigged Reinforcement Learning Lottery" (RLx2), which builds upon gradient-based topology evolution and is capable of training a sparse DRL model based entirely on a sparse network. Specifically, RLx2 introduces a novel multi-step TD target mechanism with a dynamic-capacity replay buffer to achieve robust value learning and efficient topology exploration in sparse models. It also reaches state-of-the-art sparse training performance in several tasks, showing 7.5\times-20\times model compression with less than 3% performance degradation and up to 20\times and 50\times FLOPs reduction for training and inference, respectively. | cs.LG | 1 | ICLR-2023 | [
-0.9372537136077881, -1.003726840019226, 0.2867698073387146, -0.4933134913444519, -0.21919091045856476, 0.12020474672317505, 0.30993762612342834, -0.38339999318122864, -0.12979955971240997, -0.39852088689804077, -0.3010677099227905, 1.0286155939102173, 1.0805572271347046, 0.596457660198211... | [-0.19701682031154633, 0.7486525774002075, -0.7330580353736877, 0.4707864224910736, -0.05818495899438858, 0.0024515483528375626, 0.4974901080131531, -0.06768488883972168, -0.08007918298244476, 0.5105257034301758, -0.2130965143442154, -0.35328203439712524, 0.2830543518066406, -0.078260324895... | {} | null | null
http://arxiv.org/abs/2206.01332v3 | 2022-05-31T00:00:00 | Optimal Activation Functions for the Random Features Regression Model | Jianxin Wang; José Bento | The asymptotic mean squared test error and sensitivity of the Random Features Regression model (RFR) have been recently studied. We build on this work and identify in closed-form the family of Activation Functions (AFs) that minimize a combination of the test error and sensitivity of the RFR under different notions of functional parsimony. We find scenarios under which the optimal AFs are linear, saturated linear functions, or expressible in terms of Hermite polynomials. Finally, we show how using optimal AFs impacts well-established properties of the RFR model, such as its double descent curve, and the dependency of its optimal regularization parameter on the observation noise level. | stat.ML; cs.AI; cs.LG | 1 | ICLR-2023 | [
-1.409690499305725, -0.32330870628356934, -0.02538873627781868, -1.0414621829986572, -1.5321898460388184, 0.3659363090991974, 0.6009039282798767, -0.17469948530197144, -0.09672991186380386, 0.054820820689201355, -1.1884783506393433, 1.3167316913604736, 0.06438526511192322, 0.47767034173011... | [0.5885528922080994, 0.8667452335357666, 0.019031092524528503, -0.09040302783250809, -0.9259320497512817, 0.25332820415496826, 0.10939901322126389, -0.5337334871292114, -0.23123985528945923, -0.066979318857193, 0.3137902319431305, 0.7665931582450867, -0.2549072206020355, 0.37878456711769104... | {} | null | null
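
Most rows above leave `top_10_similar`, `max_similarity`, and `avg_similarity` empty (`{} | null | null`); in the one populated row in this section, `max_similarity` equals the largest value in `top_10_similar` (0.956388) and `avg_similarity` its mean (0.92238, to the stored precision). The sketch below shows how those three columns could be filled in from the embedding columns. It is a minimal illustration, not released tooling: it assumes the table has been loaded as a pandas DataFrame `df` with full, untruncated embedding lists, and it assumes cosine similarity over `Proximity_embedding` — the dataset itself does not state which embedding column or metric produced the stored values.

```python
import numpy as np
import pandas as pd

def fill_similarity_columns(df: pd.DataFrame, k: int = 10) -> pd.DataFrame:
    """Sketch: populate top_10_similar / max_similarity / avg_similarity.

    Assumptions (not confirmed by the dataset): embeddings are complete
    float lists of equal length, similarity is cosine similarity, and
    Proximity_embedding is the column that was used.
    """
    emb = np.stack(df["Proximity_embedding"].to_numpy())    # shape (n_rows, dim)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize rows
    sims = emb @ emb.T                                      # cosine-similarity matrix
    np.fill_diagonal(sims, -np.inf)                         # a row never matches itself

    links = df["Link/DOI"].to_numpy()
    top10, max_sim, avg_sim = [], [], []
    for row in sims:
        idx = np.argsort(row)[::-1][:k]                     # indices of the k nearest rows
        top10.append({links[j]: float(row[j]) for j in idx})
        max_sim.append(float(row[idx[0]]))                  # max of the top-k similarities
        avg_sim.append(float(row[idx].mean()))              # mean of the top-k similarities

    out = df.copy()
    out["top_10_similar"], out["max_similarity"], out["avg_similarity"] = top10, max_sim, avg_sim
    return out
```

Under these assumptions the populated row is reproduced exactly: the ten listed similarities have maximum 0.956388 and mean 0.92238, matching the stored `max_similarity` and `avg_similarity`.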