bibtex_url (null) | proceedings (string) | bibtext (string) | abstract (string) | title (string) | authors (sequence) | id (string) | type (string) | arxiv_id (string) | GitHub (sequence) | paper_page (string) | n_linked_authors (int64) | upvotes (int64) | num_comments (int64) | n_authors (int64) | paper_page_exists_pre_conf (int64) | Models (sequence) | Datasets (sequence) | Spaces (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=uAyElhYKxg | @inproceedings{
even2023sgd,
title={(S){GD} over Diagonal Linear Networks: Implicit bias, Large Stepsizes and Edge of Stability},
author={Mathieu Even and Scott Pesme and Suriya Gunasekar and Nicolas Flammarion},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=uAyElhYKxg}
} | In this paper, we investigate the impact of stochasticity and large stepsizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over $2$-layer diagonal linear networks. We prove the convergence of GD and SGD with macroscopic stepsizes in an overparametrised regression setting and characterise their solutions through an implicit regularisation problem. Our crisp characterisation leads to qualitative insights about the impact of stochasticity and stepsizes on the recovered solution. Specifically, we show that large stepsizes consistently benefit SGD for sparse regression problems, while they can hinder the recovery of sparse solutions for GD. These effects are magnified for stepsizes in a tight window just below the divergence threshold, in the ``edge of stability'' regime. Our findings are supported by experimental results. | (S)GD over Diagonal Linear Networks: Implicit bias, Large Stepsizes and Edge of Stability | [
"Mathieu Even",
"Scott Pesme",
"Suriya Gunasekar",
"Nicolas Flammarion"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=u8srPlinoj | @inproceedings{
singh2023reds,
title={Re{DS}: Offline {RL} With Heteroskedastic Datasets via Support Constraints},
author={Anikait Singh and Aviral Kumar and Quan Vuong and Yevgen Chebotar and Sergey Levine},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=u8srPlinoj}
} | Offline reinforcement learning (RL) learns policies entirely from static datasets. Practical applications of offline RL will inevitably require learning from datasets where the variability of demonstrated behaviors changes non-uniformly across the state space. For example, at a red light, nearly all human drivers behave similarly by stopping, but when merging onto a highway, some drivers merge quickly, efficiently, and safely, while many hesitate or merge dangerously. Both theoretically and empirically, we show that typical offline RL methods, which are based on distribution constraints, fail to learn from data with such non-uniform variability, due to the requirement to stay close to the behavior policy **to the same extent** across the state space. Ideally, the learned policy should be free to choose **per state** how closely to follow the behavior policy to maximize long-term return, as long as the learned policy stays within the support of the behavior policy. To instantiate this principle, we reweight the data distribution in conservative Q-learning (CQL) to obtain an approximate support constraint formulation. The reweighted distribution is a mixture of the current policy and an additional policy trained to mine poor actions that are likely under the behavior policy. Our method, CQL (ReDS), is theoretically motivated, and improves performance across a wide range of offline RL problems in games, navigation, and pixel-based manipulation. | ReDS: Offline RL With Heteroskedastic Datasets via Support Constraints | [
"Anikait Singh",
"Aviral Kumar",
"Quan Vuong",
"Yevgen Chebotar",
"Sergey Levine"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=u6Xv3FuF8N | @inproceedings{
duan2023flocks,
title={Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models},
author={Haonan Duan and Adam Dziedzic and Nicolas Papernot and Franziska Boenisch},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=u6Xv3FuF8N}
} | Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt. We first show that soft prompts can be obtained privately through gradient descent on downstream data. However, this is not the case for discrete prompts. Thus, we orchestrate a noisy vote among an ensemble of LLMs presented with different prompts, i.e., a flock of stochastic parrots. The vote privately transfers the flock's knowledge into a single public prompt. We show that LLMs prompted with our private algorithms closely match the non-private baselines. For example, using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the sst2 dataset with $(\varepsilon=0.147, \delta=10^{-6})$-differential privacy vs. 95.2% for the non-private baseline. Through our experiments, we also show that our prompt-based approach is easily deployed with existing commercial APIs. | Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models | [
"Haonan Duan",
"Adam Dziedzic",
"Nicolas Papernot",
"Franziska Boenisch"
] | Conference | poster | 2305.15594 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=u6Ibs4hTJH | @inproceedings{
zhang2023realworld,
title={Real-World Image Variation by Aligning Diffusion Inversion Chain},
author={Yuechen ZHANG and Jinbo Xing and Eric Lo and Jiaya Jia},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=u6Ibs4hTJH}
} | Recent diffusion model advancements have enabled high-fidelity images to be generated using text prompts. However, a domain gap exists between generated images and real-world images, which poses a challenge in generating high-quality variations of real-world images. Our investigation uncovers that this domain gap originates from a gap between the latent distributions of different diffusion processes. To address this issue, we propose a novel inference pipeline called Real-world Image Variation by ALignment (RIVAL) that utilizes diffusion models to generate image variations from a single image exemplar. Our pipeline enhances the generation quality of image variations by aligning the image generation process to the source image's inversion chain.
Specifically, we demonstrate that step-wise latent distribution alignment is essential for generating high-quality variations.
To attain this, we design a cross-image self-attention injection for feature interaction and a step-wise distribution normalization to align the latent features. Incorporating these alignment processes into a diffusion model allows RIVAL to generate high-quality image variations without further parameter optimization. Our experimental results demonstrate that our proposed approach outperforms existing methods concerning semantic similarity and perceptual quality. This generalized inference pipeline can be easily applied to other diffusion-based generation tasks, such as image-conditioned text-to-image generation and stylization. Project page: https://rival-diff.github.io | Real-World Image Variation by Aligning Diffusion Inversion Chain | [
"Yuechen ZHANG",
"Jinbo Xing",
"Eric Lo",
"Jiaya Jia"
] | Conference | spotlight | 2305.18729 | [
"https://github.com/dvlab-research/rival"
] | https://huggingface.co/papers/2305.18729 | 2 | 4 | 1 | 4 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=u6BYyPuD29 | @inproceedings{
dayal2023madg,
title={{MADG}: Margin-based Adversarial Learning for Domain Generalization},
author={Aveen Dayal and Vimal K B and Linga Reddy Cenkeramaddi and C Krishna Mohan and Abhinav Kumar and Vineeth N. Balasubramanian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=u6BYyPuD29}
} | Domain Generalization (DG) techniques have emerged as a popular approach to address the challenges of domain shift in Deep Learning (DL), with the goal of generalizing well to the target domain unseen during training. In recent years, numerous methods have been proposed to address the DG setting, among which one popular approach is the adversarial learning-based methodology. The main idea behind adversarial DG methods is to learn domain-invariant features by minimizing a discrepancy metric. However, most adversarial DG methods use a 0-1 loss based $\mathcal{H}\Delta\mathcal{H}$ divergence metric. In contrast, the margin loss-based discrepancy metric has the following advantages: more informative, tighter, practical, and efficiently optimizable. To bridge this gap, this work proposes a novel adversarial learning DG algorithm, $\textbf{MADG}$, motivated by a margin loss-based discrepancy metric. The proposed $\textbf{MADG}$ model learns domain-invariant features across all source domains and uses adversarial training to generalize well to the unseen target domain. We also provide a theoretical analysis of the proposed $\textbf{MADG}$ model based on the unseen target error bound. Specifically, we construct the link between the source and unseen domains in the real-valued hypothesis space and derive the generalization bound using margin loss and Rademacher complexity. We extensively experiment with the $\textbf{MADG}$ model on popular real-world DG datasets, VLCS, PACS, OfficeHome, DomainNet, and TerraIncognita. We evaluate the proposed algorithm on DomainBed's benchmark and observe consistent performance across all the datasets. | MADG: Margin-based Adversarial Learning for Domain Generalization | [
"Aveen Dayal",
"Vimal K B",
"Linga Reddy Cenkeramaddi",
"C Krishna Mohan",
"Abhinav Kumar",
"Vineeth N. Balasubramanian"
] | Conference | poster | 2311.08503 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=u4YXKKG5dX | @inproceedings{
yi2023graph,
title={Graph Denoising Diffusion for Inverse Protein Folding},
author={Kai Yi and Bingxin Zhou and Yiqing Shen and Pietro Lio and Yu Guang Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=u4YXKKG5dX}
} | Inverse protein folding is challenging due to its inherent one-to-many mapping characteristic, where numerous possible amino acid sequences can fold into a single, identical protein backbone. This task involves not only identifying viable sequences but also representing the sheer diversity of potential solutions. However, existing discriminative models, such as transformer-based auto-regressive models, struggle to encapsulate the diverse range of plausible solutions. In contrast, diffusion probabilistic models, as an emerging genre of generative approaches, offer the potential to generate a diverse set of sequence candidates for determined protein backbones. We propose a novel graph denoising diffusion model for inverse protein folding, where a given protein backbone guides the diffusion process on the corresponding amino acid residue types. The model infers the joint distribution of amino acids conditioned on the nodes' physiochemical properties and local environment. Moreover, we utilize amino acid replacement matrices for the diffusion forward process, encoding the biologically-meaningful prior knowledge of amino acids from their spatial and sequential neighbors as well as themselves, which reduces the sampling space of the generative process. Our model achieves state-of-the-art performance over a set of popular baseline methods in sequence recovery and exhibits great potential in generating diverse protein sequences for a determined protein backbone structure. | Graph Denoising Diffusion for Inverse Protein Folding | [
"Kai Yi",
"Bingxin Zhou",
"Yiqing Shen",
"Pietro Lio",
"Yu Guang Wang"
] | Conference | poster | 2306.16819 | [
"https://github.com/ykiiiiii/grade_if"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=u39QQh5L8Q | @inproceedings{
gokcen2023uncovering,
title={Uncovering motifs of concurrent signaling across multiple neuronal populations},
author={Evren Gokcen and Anna Ivic Jasper and Alison Xu and Adam Kohn and Christian K. Machens and Byron M. Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=u39QQh5L8Q}
} | Modern recording techniques now allow us to record from distinct neuronal populations in different brain networks. However, especially as we consider multiple (more than two) populations, new conceptual and statistical frameworks are needed to characterize the multi-dimensional, concurrent flow of signals among these populations. Here, we develop a dimensionality reduction framework that determines (1) the subset of populations described by each latent dimension, (2) the direction of signal flow among those populations, and (3) how those signals evolve over time within and across experimental trials. We illustrate these features in simulation, and further validate the method by applying it to previously studied recordings from neuronal populations in macaque visual areas V1 and V2. Then we study interactions across select laminar compartments of areas V1, V2, and V3d, recorded simultaneously with multiple Neuropixels probes. Our approach uncovered signatures of selective communication across these three areas that related to their retinotopic alignment. This work advances the study of concurrent signaling across multiple neuronal populations. | Uncovering motifs of concurrent signaling across multiple neuronal populations | [
"Evren Gokcen",
"Anna Ivic Jasper",
"Alison Xu",
"Adam Kohn",
"Christian K. Machens",
"Byron M. Yu"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=u359tNBpxF | @inproceedings{
li2023robust,
title={Robust Data Valuation with Weighted Banzhaf Values},
author={Weida Li and Yaoliang Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=u359tNBpxF}
} | Data valuation, a principled way to rank the importance of each training datum, has become increasingly important. However, existing value-based approaches (e.g., Shapley) are known to suffer from the stochasticity inherent in utility functions that render consistent and reliable ranking difficult. Recently, Wang and Jia (2023) proposed the noise-structure-agnostic framework to advocate the Banzhaf value for its robustness against such stochasticity as it achieves the largest safe margin among many alternatives. Surprisingly, our empirical study shows that the Banzhaf value is not always the most robust when compared with a broader family: weighted Banzhaf values. To analyze this scenario, we introduce the concept of Kronecker noise to parameterize stochasticity, through which we prove that the uniquely robust semi-value, which can be analytically derived from the underlying Kronecker noise, lies in the family of weighted Banzhaf values while minimizing the worst-case entropy. In addition, we adopt the maximum sample reuse principle to design an estimator to efficiently approximate weighted Banzhaf values, and show that it enjoys the best time complexity in terms of achieving an $(\epsilon, \delta)$-approximation. Our theory is verified under both synthetic and authentic noises. For the latter, we fit a Kronecker noise to the inherent stochasticity, which is then plugged in to generate the predicted most robust semi-value. Our study suggests that weighted Banzhaf values are promising when facing undue noises in data valuation. | Robust Data Valuation with Weighted Banzhaf Values | [
"Weida Li",
"Yaoliang Yu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=u2RJ0I3o3j | @inproceedings{
dimitrov2023plane,
title={PlanE: Representation Learning over Planar Graphs},
author={Radoslav Dimitrov and Zeyang Zhao and Ralph Abboud and Ismail Ilkan Ceylan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=u2RJ0I3o3j}
} | Graph neural networks are prominent models for representation learning over graphs, where the idea is to iteratively compute representations of nodes of an input graph through a series of transformations in such a way that the learned graph function is isomorphism-invariant on graphs, which makes the learned representations graph invariants. On the other hand, it is well-known that graph invariants learned by these class of models are incomplete: there are pairs of non-isomorphic graphs which cannot be distinguished by standard graph neural networks. This is unsurprising given the computational difficulty of graph isomorphism testing on general graphs, but the situation begs to differ for special graph classes, for which efficient graph isomorphism testing algorithms are known, such as planar graphs. The goal of this work is to design architectures for efficiently learning complete invariants of planar graphs. Inspired by the classical planar graph isomorphism algorithm of Hopcroft and Tarjan, we propose PlanE as a framework for planar representation learning. PlanE includes architectures which can learn complete invariants over planar graphs while remaining practically scalable. We empirically validate the strong performance of the resulting model architectures on well-known planar graph benchmarks, achieving multiple state-of-the-art results. | PlanE: Representation Learning over Planar Graphs | [
"Radoslav Dimitrov",
"Zeyang Zhao",
"Ralph Abboud",
"Ismail Ilkan Ceylan"
] | Conference | poster | 2307.01180 | [
"https://github.com/zzysonny/plane"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tzxP9Rx0LV | @inproceedings{
safaryan2023knowledge,
title={Knowledge Distillation Performs Partial Variance Reduction},
author={Mher Safaryan and Alexandra Peste and Dan Alistarh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tzxP9Rx0LV}
} | Knowledge distillation is a popular approach for enhancing the performance of "student" models, with lower representational capacity, by taking advantage of more powerful "teacher" models. Despite its apparent simplicity, the underlying mechanics behind knowledge distillation (KD) are not yet fully understood. In this work, we shed new light on the inner workings of this method, by examining it from an optimization perspective. Specifically, we show that, in the context of linear and deep linear models, KD can be interpreted as a novel type of stochastic variance reduction mechanism. We provide a detailed convergence analysis of the resulting dynamics, which hold under standard assumptions for both strongly-convex and non-convex losses, showing that KD acts as a form of \emph{partial variance reduction}, which can reduce the stochastic gradient noise, but may not eliminate it completely, depending on the properties of the ``teacher'' model. Our analysis puts further emphasis on the need for careful parametrization of KD, in particular w.r.t. the weighting of the distillation loss, and is validated empirically on both linear models and deep neural networks. | Knowledge Distillation Performs Partial Variance Reduction | [
"Mher Safaryan",
"Alexandra Peste",
"Dan Alistarh"
] | Conference | poster | 2305.17581 | [
"https://github.com/IST-DASLab/KDVR"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tz4ECtAu8e | @inproceedings{
kim2023gex,
title={{GEX}: A flexible method for approximating influence via Geometric Ensemble},
author={SungYub Kim and Kyungsu Kim and Eunho Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tz4ECtAu8e}
} | Through a deeper understanding of predictions of neural networks, Influence Function (IF) has been applied to various tasks such as detecting and relabeling mislabeled samples, dataset pruning, and separation of data sources in practice. However, we found standard approximations of IF suffer from performance degradation due to oversimplified influence distributions caused by their bilinear approximation, suppressing the expressive power of samples with a relatively strong influence. To address this issue, we propose a new interpretation of existing IF approximations as an average relationship between two linearized losses over parameters sampled from the Laplace approximation (LA). In doing so, we highlight two significant limitations of current IF approximations: the linearity of gradients and the singularity of Hessian. Accordingly, by improving each point, we introduce a new IF approximation method with the following features: i) the removal of linearization to alleviate the bilinear constraint and ii) the utilization of Geometric Ensemble (GE) tailored for non-linear losses. Empirically, our approach outperforms existing IF approximations for downstream tasks with lighter computation, thereby providing new feasibility of low-complexity/nonlinear-based IF design. | GEX: A flexible method for approximating influence via Geometric Ensemble | [
"SungYub Kim",
"Kyungsu Kim",
"Eunho Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=txv7TnPvOi | @inproceedings{
li2023instant,
title={InstanT: Semi-supervised Learning with Instance-dependent Thresholds},
author={Muyang Li and Runze Wu and Haoyu Liu and Jun Yu and Xun Yang and Bo Han and Tongliang Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=txv7TnPvOi}
} | Semi-supervised learning (SSL) has been a fundamental challenge in machine learning for decades. The primary family of SSL algorithms, known as pseudo-labeling, involves assigning pseudo-labels to confident unlabeled instances and incorporating them into the training set. Therefore, the selection criteria of confident instances are crucial to the success of SSL. Recently, there has been growing interest in the development of SSL methods that use dynamic or adaptive thresholds. Yet, these methods typically apply the same threshold to all samples, or use class-dependent thresholds for instances belonging to a certain class, while neglecting instance-level information. In this paper, we propose the study of instance-dependent thresholds, which has the highest degree of freedom compared with existing methods. Specifically, we devise a novel instance-dependent threshold function for all unlabeled instances by utilizing their instance-level ambiguity and the instance-dependent error rates of pseudo-labels, so instances that are more likely to have incorrect pseudo-labels will have higher thresholds. Furthermore, we demonstrate that our instance-dependent threshold function provides a bounded probabilistic guarantee for the correctness of the pseudo-labels it assigns. | InstanT: Semi-supervised Learning with Instance-dependent Thresholds | [
"Muyang Li",
"Runze Wu",
"Haoyu Liu",
"Jun Yu",
"Xun Yang",
"Bo Han",
"Tongliang Liu"
] | Conference | poster | 2310.18910 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=txPdKZrrZF | @inproceedings{
zhang2023fedfa,
title={Fed-{FA}: Theoretically Modeling Client Data Divergence for Federated Language Backdoor Defense},
author={Zhiyuan Zhang and Deli Chen and Hao Zhou and Fandong Meng and Jie Zhou and Xu Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=txPdKZrrZF}
} | Federated learning algorithms enable neural network models to be trained across multiple decentralized edge devices without sharing private data. However, they are susceptible to backdoor attacks launched by malicious clients. Existing robust federated aggregation algorithms heuristically detect and exclude suspicious clients based on their parameter distances, but they are ineffective on Natural Language Processing (NLP) tasks. The main reason is that, although text backdoor patterns are obvious at the underlying dataset level, they are usually hidden at the parameter level, since injecting backdoors into texts with discrete feature space has less impact on the statistics of the model parameters. To settle this issue, we propose to identify backdoor clients by explicitly modeling the data divergence among clients in federated NLP systems. Through theoretical analysis, we derive the f-divergence indicator to estimate the client data divergence with aggregation updates and Hessians. Furthermore, we devise a dataset synthesization method with a Hessian reassignment mechanism guided by the diffusion theory to address the key challenge of inaccessible datasets in calculating clients' data Hessians.
We then present the novel Federated F-Divergence-Based Aggregation~(\textbf{Fed-FA}) algorithm, which leverages the f-divergence indicator to detect and discard suspicious clients. Extensive empirical results show that Fed-FA outperforms all the parameter distance-based methods in defending against backdoor attacks among various natural language backdoor attack scenarios. | Fed-FA: Theoretically Modeling Client Data Divergence for Federated Language Backdoor Defense | [
"Zhiyuan Zhang",
"Deli Chen",
"Hao Zhou",
"Fandong Meng",
"Jie Zhou",
"Xu Sun"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=twmHKU3Ds4 | @inproceedings{
liu2023dinosr,
title={Dino{SR}: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning},
author={Alexander H. Liu and Heng-Jui Chang and Michael Auli and Wei-Ning Hsu and James R. Glass},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=twmHKU3Ds4}
} | In this paper, we introduce self-distillation and online clustering for self-supervised speech representation learning (DinoSR) which combines masked language modeling, self-distillation, and online clustering. We show that these concepts complement each other and result in a strong representation learning model for speech. DinoSR first extracts contextualized embeddings from the input audio with a teacher network, then runs an online clustering system on the embeddings to yield a machine-discovered phone inventory, and finally uses the discretized tokens to guide a student network. We show that DinoSR surpasses previous state-of-the-art performance in several downstream tasks, and provide a detailed analysis of the model and the learned discrete units. | DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning | [
"Alexander H. Liu",
"Heng-Jui Chang",
"Michael Auli",
"Wei-Ning Hsu",
"James R. Glass"
] | Conference | poster | 2305.10005 | [
"https://github.com/alexander-h-liu/dinosr"
] | https://huggingface.co/papers/2305.10005 | 2 | 2 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=tw4QaiiJex | @inproceedings{
moran2023the,
title={The Bayesian Stability Zoo},
author={Shay Moran and Hilla Schefler and Jonathan Shafer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tw4QaiiJex}
} | We show that many definitions of stability found in the learning theory literature are equivalent to one another.
We distinguish between two families of definitions of stability: distribution-dependent and distribution-independent Bayesian stability. Within each family, we establish equivalences between various definitions, encompassing approximate differential privacy, pure differential privacy, replicability, global stability, perfect generalization, TV stability, mutual information stability, KL-divergence stability, and Rényi-divergence stability. Along the way, we prove boosting results that enable the amplification of the stability of a learning rule. This work is a step towards a more systematic taxonomy of stability notions in learning theory, which can promote clarity and an improved understanding of an array of stability concepts that have emerged in recent years. | The Bayesian Stability Zoo | [
"Shay Moran",
"Hilla Schefler",
"Jonathan Shafer"
] | Conference | poster | 2310.18428 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tt7bQnTdRm | @inproceedings{
chen2023secure,
title={Secure Out-of-Distribution Task Generalization with Energy-Based Models},
author={Shengzhuang Chen and Long-Kai Huang and Jonathan Richard Schwarz and Yilun Du and Ying Wei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tt7bQnTdRm}
} | The success of meta-learning on out-of-distribution (OOD) tasks in the wild has proved to be hit-and-miss.
Safeguarding the generalization capability of the meta-learned prior knowledge to OOD tasks, particularly in safety-critical applications, necessitates detection of an OOD task followed by adaptation of the task towards the prior.
Nonetheless, the reliability of estimated uncertainty on OOD tasks by existing Bayesian meta-learning methods is restricted by incomplete coverage of the feature distribution shift and insufficient expressiveness of the meta-learned prior.
Moreover, they struggle to adapt an OOD task, paralleling the line of cross-domain task adaptation solutions, which are vulnerable to overfitting.
To this end, we build a single coherent framework that supports both detection and adaptation of OOD tasks, while remaining compatible with off-the-shelf meta-learning backbones.
The proposed Energy-Based Meta-Learning (EBML) framework learns to characterize any arbitrary meta-training task distribution with the composition of two expressive neural-network-based energy functions. We deploy the sum of the two energy functions, being proportional to the joint distribution of a task, as a reliable score for detecting OOD tasks; during meta-testing, we adapt the OOD task to in-distribution tasks by energy minimization.
Experiments on four regression and classification datasets demonstrate the effectiveness of our proposal. | Secure Out-of-Distribution Task Generalization with Energy-Based Models | [
"Shengzhuang Chen",
"Long-Kai Huang",
"Jonathan Richard Schwarz",
"Yilun Du",
"Ying Wei"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=trHfuGQyyr | @inproceedings{
lv2023disentangled,
title={Disentangled Counterfactual Learning for Physical Audiovisual Commonsense Reasoning},
author={Changsheng Lv and Shuai Zhang and Yapeng Tian and Mengshi Qi and Huadong Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=trHfuGQyyr}
} | In this paper, we propose a Disentangled Counterfactual Learning (DCL) approach for physical audiovisual commonsense reasoning. The task aims to infer objects’ physics commonsense based on both video and audio input, with the main challenge being how to imitate the reasoning ability of humans. Most current methods fail to take full advantage of the different characteristics of multi-modal data, and the lack of causal reasoning ability in models impedes progress in inferring implicit physical knowledge. To address these issues, our proposed DCL method decouples videos into static (time-invariant) and dynamic (time-varying) factors in the latent space by the disentangled sequential encoder, which adopts a variational autoencoder (VAE) to maximize the mutual information with a contrastive loss function. Furthermore, we introduce a counterfactual learning module to augment the model’s reasoning ability by modeling physical knowledge relationships among different objects under counterfactual intervention. Our proposed method is a plug-and-play module that can be incorporated into any baseline. In experiments, we show that our proposed method improves baseline methods and achieves state-of-the-art performance. Our source code is available at https://github.com/Andy20178/DCL. | Disentangled Counterfactual Learning for Physical Audiovisual Commonsense Reasoning | [
"Changsheng Lv",
"Shuai Zhang",
"Yapeng Tian",
"Mengshi Qi",
"Huadong Ma"
] | Conference | poster | 2310.19559 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tp2nEZ5zfP | @inproceedings{
piterbarg2023nethack,
title={NetHack is Hard to Hack},
author={Ulyana Piterbarg and Lerrel Pinto and Rob Fergus},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tp2nEZ5zfP}
} | Neural policy learning methods have achieved remarkable results in various control problems, ranging from Atari games to simulated locomotion. However, these methods struggle in long-horizon tasks, especially in open-ended environments with multi-modal observations, such as the popular dungeon-crawler game, NetHack. Intriguingly, the NeurIPS 2021 NetHack Challenge revealed that symbolic agents outperformed neural approaches by over four times in median game score. In this paper, we delve into the reasons behind this performance gap and present an extensive study on neural policy learning for NetHack. To conduct this study, we analyze the winning symbolic agent, extending its codebase to track internal strategy selection in order to generate one of the largest available demonstration datasets. Utilizing this dataset, we examine (i) the advantages of an action hierarchy; (ii) enhancements in neural architecture; and (iii) the integration of reinforcement learning with imitation learning. Our investigations produce a state-of-the-art neural agent that surpasses previous fully neural policies by 127% in offline settings and 25% in online settings on median game score. However, we also demonstrate that mere scaling is insufficient to bridge the performance gap with the best symbolic models or even the top human players. | NetHack is Hard to Hack | [
"Ulyana Piterbarg",
"Lerrel Pinto",
"Rob Fergus"
] | Conference | poster | 2305.19240 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=toYvRJ7Zmy | @inproceedings{
bashari2023derandomized,
title={Derandomized novelty detection with {FDR} control via conformal e-values},
author={Meshi Bashari and Amir Epstein and Yaniv Romano and Matteo Sesia},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=toYvRJ7Zmy}
} | Conformal inference provides a general distribution-free method to rigorously calibrate the output of any machine learning algorithm for novelty detection. While this approach has many strengths, it has the limitation of being randomized, in the sense that it may lead to different results when analyzing the same data twice, and this can hinder the interpretation of any findings. We propose to make conformal inferences more stable by leveraging suitable conformal e-values instead of p-values to quantify statistical significance. This solution allows the evidence gathered from multiple analyses of the same data to be aggregated effectively while provably controlling the false discovery rate. Further, we show that the proposed method can reduce randomness without much loss of power compared to standard conformal inference, partly thanks to an innovative way of weighting conformal e-values based on additional side information carefully extracted from the same data. Simulations with synthetic and real data confirm this solution can be effective at eliminating random noise in the inferences obtained with state-of-the-art alternative techniques, sometimes also leading to higher power. | Derandomized novelty detection with FDR control via conformal e-values | [
"Meshi Bashari",
"Amir Epstein",
"Yaniv Romano",
"Matteo Sesia"
] | Conference | poster | 2302.07294 | [
"https://github.com/meshiba/derandomized-novelty-detection"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
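The paper's weighted conformal e-value construction is not reproduced in the abstract; for orientation only, here is a minimal sketch of the standard split-conformal p-value for novelty detection that the e-values replace (function and variable names are mine, not the paper's):

```python
def conformal_pvalue(cal_scores, test_score):
    """Split-conformal p-value: rank of the test nonconformity score
    among calibration scores computed on nominal (inlier) data.
    A small p-value suggests the test point is a novelty."""
    n = len(cal_scores)
    ge = sum(1 for s in cal_scores if s >= test_score)
    return (1 + ge) / (n + 1)

cal = [0.1, 0.2, 0.15, 0.3, 0.25]   # nonconformity scores of held-out inliers
print(conformal_pvalue(cal, 0.9))   # 1/6: no calibration score exceeds it
print(conformal_pvalue(cal, 0.12))  # 5/6: looks like an inlier
```

Note the randomness the paper targets enters upstream, through the random split that produces the calibration set; rerunning the split changes the p-values, which is what aggregating e-values across analyses avoids.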
null | https://openreview.net/forum?id=toEGuA9Qfn | @inproceedings{
jang2023safedice,
title={Safe{DICE}: Offline Safe Imitation Learning with Non-Preferred Demonstrations},
author={Youngsoo Jang and Geon-Hyeong Kim and Jongmin Lee and Sungryull Sohn and Byoungjip Kim and Honglak Lee and Moontae Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=toEGuA9Qfn}
} | We consider offline safe imitation learning (IL), where the agent aims to learn a safe policy that mimics preferred behavior while avoiding non-preferred behavior, given non-preferred demonstrations and unlabeled demonstrations. This problem setting corresponds to various real-world scenarios, where satisfying safety constraints is more important than maximizing the expected return. However, it is very challenging to learn the policy to avoid constraint-violating (i.e. non-preferred) behavior, as opposed to standard imitation learning which learns the policy to mimic given demonstrations. In this paper, we present a hyperparameter-free offline safe IL algorithm, SafeDICE, that learns a safe policy by leveraging the non-preferred demonstrations in the space of stationary distributions. Our algorithm directly estimates the stationary distribution corrections of the policy that imitates the demonstrations while excluding the non-preferred behavior. In the experiments, we demonstrate that our algorithm learns a safer policy that satisfies the cost constraint without degrading the reward performance, compared to baseline algorithms. | SafeDICE: Offline Safe Imitation Learning with Non-Preferred Demonstrations | [
"Youngsoo Jang",
"Geon-Hyeong Kim",
"Jongmin Lee",
"Sungryull Sohn",
"Byoungjip Kim",
"Honglak Lee",
"Moontae Lee"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tnRboxQIec | @inproceedings{
du2023dream,
title={Dream the Impossible: Outlier Imagination with Diffusion Models},
author={Xuefeng Du and Yiyou Sun and Jerry Zhu and Yixuan Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tnRboxQIec}
} | Utilizing auxiliary outlier datasets to regularize the machine learning model has demonstrated promise for out-of-distribution (OOD) detection and safe prediction. Due to the labor intensity in data collection and cleaning, automating outlier data generation has been a long-desired alternative. Despite the appeal, generating photo-realistic outliers in the high dimensional pixel space has been an open challenge for the field. To tackle the problem, this paper proposes a new framework Dream-OOD, which enables imagining photo-realistic outliers by way of diffusion models, provided with only the in-distribution (ID) data and classes. Specifically, Dream-OOD learns a text-conditioned latent space based on ID data, and then samples outliers in the low-likelihood region via the latent, which can be decoded into images by the diffusion model. Different from prior works [16, 95], Dream-OOD enables visualizing and understanding the imagined outliers, directly in the pixel space. We conduct comprehensive quantitative and qualitative studies to understand the efficacy of Dream-OOD, and show that training with the samples generated by Dream-OOD can significantly benefit OOD detection performance. | Dream the Impossible: Outlier Imagination with Diffusion Models | [
"Xuefeng Du",
"Yiyou Sun",
"Jerry Zhu",
"Yixuan Li"
] | Conference | poster | 2309.13415 | [
"https://github.com/deeplearning-wisc/dream-ood"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tn9Dldam9L | @inproceedings{
l{\"u}dke2023add,
title={Add and Thin: Diffusion for Temporal Point Processes},
author={David L{\"u}dke and Marin Bilo{\v{s}} and Oleksandr Shchur and Marten Lienen and Stephan G{\"u}nnemann},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tn9Dldam9L}
} | Autoregressive neural networks within the temporal point process (TPP) framework have become the standard for modeling continuous-time event data. Even though these models can expressively capture event sequences in a one-step-ahead fashion, they are inherently limited for long-term forecasting applications due to the accumulation of errors caused by their sequential nature. To overcome these limitations, we derive ADD-THIN, a principled probabilistic denoising diffusion model for TPPs that operates on entire event sequences. Unlike existing diffusion approaches, ADD-THIN naturally handles data with discrete and continuous components. In experiments on synthetic and real-world datasets, our model matches the state-of-the-art TPP models in density estimation and strongly outperforms them in forecasting. | Add and Thin: Diffusion for Temporal Point Processes | [
"David Lüdke",
"Marin Biloš",
"Oleksandr Shchur",
"Marten Lienen",
"Stephan Günnemann"
] | Conference | poster | 2311.01139 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tmxjuIFSEc | @inproceedings{
chen2023space,
title={{SPACE}: Single-round Participant Amalgamation for Contribution Evaluation in Federated Learning},
author={Yi-Chung Chen and Hsi-Wen Chen and Shun-Guei Wang and Ming-Syan Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tmxjuIFSEc}
} | The evaluation of participant contribution in federated learning (FL) has recently gained significant attention due to its applicability in various domains, such as incentive mechanisms, robustness enhancement, and client selection. Previous approaches have predominantly relied on the widely adopted Shapley value for participant evaluation. However, the computation of the Shapley value is expensive, despite using techniques like gradient-based model reconstruction and truncating unnecessary evaluations. Therefore, we present an efficient approach called Single-round Participants Amalgamation for Contribution Evaluation (SPACE). SPACE incorporates two novel components, namely Federated Knowledge Amalgamation and Prototype-based Model Evaluation to reduce the evaluation effort by eliminating the dependence on the size of the validation set and enabling participant evaluation within a single communication round. Experimental results demonstrate that SPACE outperforms state-of-the-art methods in terms of both running time and Pearson’s Correlation Coefficient (PCC). Furthermore, extensive experiments conducted on applications, client reweighting, and client selection highlight the effectiveness of SPACE. The code is available at https://github.com/culiver/SPACE. | SPACE: Single-round Participant Amalgamation for Contribution Evaluation in Federated Learning | [
"Yi-Chung Chen",
"Hsi-Wen Chen",
"Shun-Guei Wang",
"Ming-Syan Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tkenkPYkxj | @inproceedings{
panageas2023exponential,
title={Exponential Lower Bounds for Fictitious Play in Potential Games},
author={Ioannis Panageas and Nikolas Patris and Stratis Skoulakis and Volkan Cevher},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tkenkPYkxj}
} | Fictitious Play (FP) is a simple and natural dynamic for repeated play with many applications in game theory and multi-agent reinforcement learning. It was introduced by Brown, and its convergence properties for two-player zero-sum games were established later by Robinson. Potential games [Monderer and Shapley 1996] are another class of games which exhibit the FP property [Monderer and Shapley 1996], i.e., FP dynamics converge to a Nash equilibrium if all agents follow it. Nevertheless, except for two-player zero-sum games and for specific instances of payoff matrices [Abernethy et. al. 2021] or for adversarial tie-breaking rules [Daskalakis and Pan, 2014], the \textit{convergence rate} of FP is unknown. In this work, we focus on the rate of convergence of FP when applied to potential games and more specifically identical payoff games. We prove that FP can take exponential time (in the number of strategies) to reach a Nash equilibrium, even if the game is restricted to \textit{two agents}. To prove this, we recursively construct a two-player coordination game with a unique Nash equilibrium. Moreover, every approximate Nash equilibrium in the constructed game must be close to the pure Nash equilibrium in $\ell_1$-distance. | Exponential Lower Bounds for Fictitious Play in Potential Games | [
"Ioannis Panageas",
"Nikolas Patris",
"Stratis Skoulakis",
"Volkan Cevher"
] | Conference | poster | 2310.02387 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tj86aGVNb3 | @inproceedings{
suzuki2023feature,
title={Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond},
author={Taiji Suzuki and Denny Wu and Kazusato Oko and Atsushi Nitanda},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tj86aGVNb3}
} | Neural networks in the mean-field regime are known to be capable of \textit{feature learning}, unlike their kernel (NTK) counterparts. Recent works have shown that mean-field neural networks can be globally optimized by a noisy gradient descent update termed the \textit{mean-field Langevin dynamics} (MFLD). However, all existing guarantees for MFLD only considered the \textit{optimization} efficiency, and it is unclear if this algorithm leads to improved \textit{generalization} performance and sample complexity due to the presence of feature learning. To fill this gap, in this work we study the statistical and computational complexity of MFLD in learning a class of binary classification problems. Unlike existing margin bounds for neural networks, we avoid the typical norm control by utilizing the perspective that MFLD optimizes the \textit{distribution} of parameters rather than the parameter itself; this leads to an improved analysis of the sample complexity and convergence rate. We apply our general framework to the learning of $k$-sparse parity functions, where we prove that unlike kernel methods, two-layer neural networks optimized by MFLD achieve a sample complexity where the degree $k$ is ``decoupled'' from the exponent in the dimension dependence. | Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond | [
"Taiji Suzuki",
"Denny Wu",
"Kazusato Oko",
"Atsushi Nitanda"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=thbXgJ8gNK | @inproceedings{
kaddour2023no,
title={No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models},
author={Jean Kaddour and Oscar Key and Piotr Nawrot and Pasquale Minervini and Matt Kusner},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=thbXgJ8gNK}
} | The computation necessary for training Transformer-based language models has skyrocketed in recent years.
This trend has motivated research on efficient training algorithms designed to improve training, validation, and downstream performance faster than standard training. In this work, we revisit three categories of such algorithms: dynamic architectures (layer stacking, layer dropping), batch selection (selective backprop., RHO-loss), and efficient optimizers (Lion, Sophia). When pre-training BERT and T5 with a fixed computation budget using such methods, we find that their training, validation, and downstream gains vanish compared to a baseline with a fully-decayed learning rate. We define an evaluation protocol that enables computation to be done on arbitrary machines by mapping all computation time to a reference machine which we call reference system time. We discuss the limitations of our proposed protocol and release our code to encourage rigorous research in efficient training procedures: https://github.com/JeanKaddour/NoTrainNoGain. | No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models | [
"Jean Kaddour",
"Oscar Key",
"Piotr Nawrot",
"Pasquale Minervini",
"Matt Kusner"
] | Conference | poster | 2307.06440 | [
"https://github.com/jeankaddour/notrainnogain"
] | https://huggingface.co/papers/2307.06440 | 1 | 3 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=thPI8hrA4V | @inproceedings{
yang2023glyphcontrol,
title={GlyphControl: Glyph Conditional Controllable Visual Text Generation},
author={Yukang Yang and Dongnan Gui and Yuhui Yuan and Weicong Liang and Haisong Ding and Han Hu and Kai Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=thPI8hrA4V}
} | Recently, there has been an increasing interest in developing diffusion-based text-to-image generative models capable of generating coherent and well-formed visual text. In this paper, we propose a novel and efficient approach called GlyphControl to address this task. Unlike existing methods that rely on character-aware text encoders like ByT5 and require retraining of text-to-image models, our approach leverages additional glyph conditional information to enhance the performance of the off-the-shelf Stable-Diffusion model in generating accurate visual text. By incorporating glyph instructions, users can customize the content, location, and size of the generated text according to their specific requirements. To facilitate further research in visual text generation, we construct a training benchmark dataset called LAION-Glyph. We evaluate the effectiveness of our approach by measuring OCR-based metrics, CLIP score, and FID of the generated visual text. Our empirical evaluations demonstrate that GlyphControl outperforms the recent DeepFloyd IF approach in terms of OCR accuracy, CLIP score, and FID, highlighting the efficacy of our method. | GlyphControl: Glyph Conditional Control for Visual Text Generation | [
"Yukang Yang",
"Dongnan Gui",
"Yuhui Yuan",
"Weicong Liang",
"Haisong Ding",
"Han Hu",
"Kai Chen"
] | Conference | poster | 2305.18259 | [
"https://github.com/aigtext/glyphcontrol-release"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tgQRMrsxht | @inproceedings{
zhang2023bypassing,
title={Bypassing spike sorting: Density-based decoding using spike localization from dense multielectrode probes},
author={Yizi Zhang and Tianxiao He and Julien Boussard and Charlie Windolf and Olivier Winter and Eric M. Trautmann and Noam Roth and Hailey Barrel and Mark M Churchland and Nick Steinmetz and Erdem Varol and Cole Lincoln Hurwitz and Liam Paninski},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tgQRMrsxht}
} | Neural decoding and its applications to brain computer interfaces (BCI) are essential for understanding the association between neural activity and behavior. A prerequisite for many decoding approaches is spike sorting, the assignment of action potentials (spikes) to individual neurons. Current spike sorting algorithms, however, can be inaccurate and do not properly model uncertainty of spike assignments, therefore discarding information that could potentially improve decoding performance. Recent advances in high-density probes (e.g., Neuropixels) and computational methods now allow for extracting a rich set of spike features from unsorted data; these features can in turn be used to directly decode behavioral correlates. To this end, we propose a spike sorting-free decoding method that directly models the distribution of extracted spike features using a mixture of Gaussians (MoG) encoding the uncertainty of spike assignments, without aiming to solve the spike clustering problem explicitly. We allow the mixing proportion of the MoG to change over time in response to the behavior and develop variational inference methods to fit the resulting model and to perform decoding. We benchmark our method with an extensive suite of recordings from different animals and probe geometries, demonstrating that our proposed decoder can consistently outperform current methods based on thresholding (i.e. multi-unit activity) and spike sorting. Open source code is available at https://github.com/yzhang511/density_decoding. | Bypassing spike sorting: Density-based decoding using spike localization from dense multielectrode probes | [
"Yizi Zhang",
"Tianxiao He",
"Julien Boussard",
"Charlie Windolf",
"Olivier Winter",
"Eric M. Trautmann",
"Noam Roth",
"Hailey Barrel",
"Mark M Churchland",
"Nick Steinmetz",
"Erdem Varol",
"Cole Lincoln Hurwitz",
"Liam Paninski"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
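The decoder above keeps spike-assignment uncertainty by working with mixture-of-Gaussians responsibilities instead of hard spike-sorting labels. The paper's variational model is not reproduced here; this is only a one-dimensional sketch of the soft-assignment idea, with all names and numbers invented for illustration:

```python
import math

def mog_posteriors(x, weights, means, vars_):
    """Soft assignment of one spike feature x to mixture components:
    posterior responsibilities under a 1-D mixture of Gaussians,
    retaining uncertainty rather than a hard unit label."""
    dens = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
            for w, m, v in zip(weights, means, vars_)]
    z = sum(dens)
    return [d / z for d in dens]

# Two putative units along one localization feature (e.g. depth, in um).
post = mog_posteriors(10.0, weights=[0.5, 0.5], means=[8.0, 40.0], vars_=[9.0, 9.0])
print(post)  # nearly all posterior mass on the first component
```

In the paper these responsibilities feed the decoder directly, and the mixing proportions are additionally allowed to vary with the behavior over time.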
null | https://openreview.net/forum?id=tfyr2zRVoK | @inproceedings{
li2023sheetcopilot,
title={SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models},
author={Hongxin Li and Jingran Su and Yuntao Chen and Qing Li and Zhaoxiang Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tfyr2zRVoK}
} | Computer end users have spent billions of hours completing daily tasks like tabular data processing and project timeline scheduling. Most of these tasks are repetitive and error-prone, yet most end users lack the skill to automate these burdensome chores. With the advent of large language models (LLMs), directing software with natural language user requests has become a reachable goal. In this work, we propose a SheetCopilot agent that takes a natural language task and controls a spreadsheet to fulfill the requirements. We propose a set of atomic actions as an abstraction of spreadsheet software functionalities. We further design a state machine-based task planning framework for LLMs to robustly interact with spreadsheets. We curate a representative dataset containing 221 spreadsheet control tasks and establish a fully automated evaluation pipeline for rigorously benchmarking the ability of LLMs in software control tasks. Our SheetCopilot correctly completes 44.3\% of tasks for a single generation, outperforming the strong code generation baseline by a wide margin. Our project page: https://sheetcopilot.github.io/. | SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models | [
"Hongxin Li",
"Jingran Su",
"Yuntao Chen",
"Qing Li",
"Zhaoxiang Zhang"
] | Conference | poster | 2305.19308 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tesBViWnbx | @inproceedings{
du2023stable,
title={Stable Diffusion is Unstable},
author={Chengbin Du and Yanxi Li and Zhongwei Qiu and Chang Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tesBViWnbx}
} | Recently, text-to-image models have been thriving. Despite their powerful generative capacity, our research has uncovered a lack of robustness in this generation process. Specifically, the introduction of small perturbations to the text prompts can result in the blending of primary subjects with other categories or their complete disappearance in the generated images. In this paper, we propose **Auto-attack on Text-to-image Models (ATM)**, a gradient-based approach, to effectively and efficiently generate such perturbations. By learning a Gumbel Softmax distribution, we can make the discrete process of word replacement or extension continuous, thus ensuring the differentiability of the perturbation generation. Once the distribution is learned, ATM can sample multiple attack samples simultaneously. These attack samples can prevent the generative model from generating the desired subjects without tampering with the category keywords in the prompt. ATM has achieved a 91.1\% success rate in short-text attacks and an 81.2\% success rate in long-text attacks. Further empirical analysis revealed three attack patterns based on: 1) variability in generation speed, 2) similarity of coarse-grained characteristics, and 3) polysemy of words. The code is available at https://github.com/duchengbin8/Stable_Diffusion_is_Unstable | Stable Diffusion is Unstable | [
"Chengbin Du",
"Yanxi Li",
"Zhongwei Qiu",
"Chang Xu"
] | Conference | spotlight | 2306.02583 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
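ATM's attack relies on a Gumbel-Softmax distribution to make the discrete word-replacement choice differentiable. The paper's full attack is not reproduced here; below is only a generic Gumbel-Softmax sampling sketch (names and numbers are mine), showing the relaxation the abstract refers to:

```python
import math
import random

def gumbel_softmax(logits, tau=1.0):
    """Draw one relaxed (soft) one-hot sample from a categorical
    distribution: add Gumbel(0,1) noise to the logits, then take a
    temperature-scaled softmax. The output is differentiable in the
    logits, unlike a hard argmax sample."""
    gumbels = [-math.log(-math.log(random.random())) for _ in logits]
    scores = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
sample = gumbel_softmax([2.0, 0.5, -1.0], tau=0.5)
print(sample)  # probabilities sum to 1; concentrates on high-logit entries as tau -> 0
```

In the attack setting, each coordinate would correspond to a candidate token for a position in the prompt, and the learned logits parameterize which replacement to try.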
null | https://openreview.net/forum?id=tdyLryDebq | @inproceedings{
yang2023face,
title={{FACE}: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy},
author={Zuhao Yang and Yingfang Yuan and Yang Xu and SHUO ZHAN and Huajun Bai and Kefan Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tdyLryDebq}
} | Measuring the distance between machine-produced and human language is a critical open problem. Inspired by empirical findings from psycholinguistics on the periodicity of entropy in language, we propose FACE, a set of metrics based on Fourier Analysis of the estimated Cross-Entropy of language, for measuring the similarity between model-generated and human-written languages. Based on an open-ended generation task and the experimental data from previous studies, we find that FACE can effectively identify the human-model gap, scales with model size, reflects the outcomes of different sampling methods for decoding, correlates well with other evaluation metrics and with human judgment scores. | FACE: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy | [
"Zuhao Yang",
"Yingfang Yuan",
"Yang Xu",
"SHUO ZHAN",
"Huajun Bai",
"Kefan Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
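FACE's exact metric definitions are not given in the abstract; as a rough illustration of "Fourier analysis of cross-entropy," the sketch below compares the magnitude spectra of two token-level cross-entropy series with a cosine similarity (all function names and the toy data are mine, not the paper's):

```python
import cmath
import math

def dft_magnitudes(xs):
    """Naive DFT: magnitude spectrum of a real-valued series."""
    n = len(xs)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(xs))) / n
            for k in range(n // 2 + 1)]

def spectral_similarity(ce_a, ce_b):
    """Cosine similarity between the magnitude spectra of two
    token-level cross-entropy series (zero-frequency term dropped,
    so only the periodic structure is compared)."""
    sa, sb = dft_magnitudes(ce_a)[1:], dft_magnitudes(ce_b)[1:]
    dot = sum(a * b for a, b in zip(sa, sb))
    na = math.sqrt(sum(a * a for a in sa))
    nb = math.sqrt(sum(b * b for b in sb))
    return dot / (na * nb)

# Toy "human-like" series with periodic entropy fluctuations.
human = [2.0 + math.sin(t) for t in range(32)]
print(spectral_similarity(human, human))  # 1.0 by construction
```

A model whose cross-entropy series lacks the periodic structure of human text would score lower under such a spectrum comparison, which is the intuition behind the human-model gap the paper measures.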
null | https://openreview.net/forum?id=tcotyjon2a | @inproceedings{
lee2023cqm,
title={{CQM}: Curriculum Reinforcement Learning with a Quantized World Model},
author={Seungjae Lee and Daesol Cho and Jonghae Park and H. Jin Kim},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tcotyjon2a}
} | Recent curriculum Reinforcement Learning (RL) has shown notable progress in solving complex tasks by proposing sequences of surrogate tasks. However, the previous approaches often face challenges when they generate curriculum goals in a high-dimensional space. Thus, they usually rely on manually specified goal spaces. To alleviate this limitation and improve the scalability of the curriculum, we propose a novel curriculum method that automatically defines the semantic goal space which contains vital information for the curriculum process, and suggests curriculum goals over it. To define the semantic goal space, our method discretizes continuous observations via vector quantized-variational autoencoders (VQ-VAE) and restores the temporal relations between the discretized observations by a graph. Concurrently, ours suggests uncertainty and temporal distance-aware curriculum goals that converge to the final goals over the automatically composed goal space. We demonstrate that the proposed method allows efficient exploration in an uninformed environment with raw goal examples only. Also, ours outperforms the state-of-the-art curriculum RL methods on data efficiency and performance, in various goal-reaching tasks even with ego-centric visual inputs. | CQM: Curriculum Reinforcement Learning with a Quantized World Model | [
"Seungjae Lee",
"Daesol Cho",
"Jonghae Park",
"H. Jin Kim"
] | Conference | poster | 2310.17330 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tbbId8u7nP | @inproceedings{
lindner2023tracr,
title={Tracr: Compiled Transformers as a Laboratory for Interpretability},
author={David Lindner and Janos Kramar and Sebastian Farquhar and Matthew Rahtz and Thomas McGrath and Vladimir Mikulik},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tbbId8u7nP}
} | We show how to "compile" human-readable programs into standard decoder-only transformer models. Our compiler, Tracr, generates models with known structure. This structure can be used to design experiments. For example, we use it to study "superposition" in transformers that execute multi-step algorithms. Additionally, the known structure of Tracr-compiled models can serve as _ground-truth_ for evaluating interpretability methods. Commonly, because the "programs" learned by transformers are unknown, it is unclear whether an interpretation has succeeded. We demonstrate our approach by implementing and examining programs including computing token frequencies, sorting, and parenthesis checking. We provide an open-source implementation of Tracr at https://github.com/google-deepmind/tracr. | Tracr: Compiled Transformers as a Laboratory for Interpretability | [
"David Lindner",
"Janos Kramar",
"Sebastian Farquhar",
"Matthew Rahtz",
"Thomas McGrath",
"Vladimir Mikulik"
] | Conference | spotlight | 2301.05062 | [
"https://github.com/google-deepmind/tracr"
] | https://huggingface.co/papers/2301.05062 | 0 | 0 | 0 | 6 | 1 | [] | [] | [
"CSquid333/RASP-Synthesis"
] |
null | https://openreview.net/forum?id=tW2KSph9o8 | @inproceedings{
tomar2023ignorance,
title={Ignorance is Bliss: Robust Control via Information Gating},
author={Manan Tomar and Riashat Islam and Matthew E. Taylor and Sergey Levine and Philip Bachman},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tW2KSph9o8}
} | Informational parsimony provides a useful inductive bias for learning representations that achieve better generalization by being robust to noise and spurious correlations. We propose *information gating* as a way to learn parsimonious representations that identify the minimal information required for a task. When gating information, we can learn to reveal as little information as possible so that a task remains solvable, or hide as little information as possible so that a task becomes unsolvable. We gate information using a differentiable parameterization of the signal-to-noise ratio, which can be applied to arbitrary values in a network, e.g., erasing pixels at the input layer or activations in some intermediate layer. When gating at the input layer, our models learn which visual cues matter for a given task. When gating intermediate layers, our models learn which activations are needed for subsequent stages of computation. We call our approach *InfoGating*. We apply InfoGating to various objectives such as multi-step forward and inverse dynamics models, Q-learning, and behavior cloning, highlighting how InfoGating can naturally help in discarding information not relevant for control. Results show that learning to identify and use minimal information can improve generalization in downstream tasks. Policies based on InfoGating are considerably more robust to irrelevant visual features, leading to improved pretraining and finetuning of RL models. | Ignorance is Bliss: Robust Control via Information Gating | [
"Manan Tomar",
"Riashat Islam",
"Matthew E. Taylor",
"Sergey Levine",
"Philip Bachman"
] | Conference | poster | 2303.06121 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tUyW68cRqr | @inproceedings{
ma2023language,
title={Language Semantic Graph Guided Data-Efficient Learning},
author={Wenxuan Ma and Shuang Li and Lincan Cai and Jingxuan Kang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tUyW68cRqr}
} | Developing generalizable models that can effectively learn from limited data and with minimal reliance on human supervision is a significant objective within the machine learning community, particularly in the era of deep neural networks. Therefore, to achieve data-efficient learning, researchers typically explore approaches that can leverage more related or unlabeled data without necessitating additional manual labeling efforts, such as Semi-Supervised Learning (SSL), Transfer Learning (TL), and Data Augmentation (DA).
SSL leverages unlabeled data in the training process, while TL enables the transfer of expertise from related data distributions. DA broadens the dataset by synthesizing new data from existing examples. However, the significance of additional knowledge contained within labels has been largely overlooked in research. In this paper, we propose a novel perspective on data efficiency that involves exploiting the semantic information contained in the labels of the available data. Specifically, we introduce a Language Semantic Graph (LSG) which is constructed from labels manifest as natural language descriptions. Upon this graph, an auxiliary graph neural network is trained to extract high-level semantic relations and then used to guide the training of the primary model, enabling more adequate utilization of label knowledge. Across image, video, and audio modalities, we utilize the LSG method in both TL and SSL scenarios and illustrate its versatility in significantly enhancing performance compared to other data-efficient learning approaches. Additionally, our in-depth analysis shows that the LSG method also expedites the training process. | Language Semantic Graph Guided Data-Efficient Learning | [
"Wenxuan Ma",
"Shuang Li",
"Lincan Cai",
"Jingxuan Kang"
] | Conference | poster | 2311.08782 | [
"https://github.com/bit-da/lsg"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tScBQRNgjk | @inproceedings{
dooley2023forecastpfn,
title={Forecast{PFN}: Synthetically-Trained Zero-Shot Forecasting},
author={Samuel Dooley and Gurnoor Singh Khurana and Chirag Mohapatra and Siddartha Venkat Naidu and Colin White},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tScBQRNgjk}
} | The vast majority of time-series forecasting approaches require a substantial training dataset. However, many real-life forecasting applications have very few initial observations, sometimes just 40 or fewer. Thus, the applicability of most forecasting methods is restricted in data-sparse commercial applications. While there is recent work in the setting of very limited initial data (so-called `zero-shot' forecasting), its performance is inconsistent depending on the data used for pretraining. In this work, we take a different approach and devise ForecastPFN, the first zero-shot forecasting model trained purely on a novel synthetic data distribution. ForecastPFN is a prior-data fitted network, trained to approximate Bayesian inference, which can make predictions on a new time series dataset in a single forward pass. Through extensive experiments, we show that zero-shot predictions made by ForecastPFN are more accurate and faster compared to state-of-the-art forecasting methods, even when the other methods are allowed to train on hundreds of additional in-distribution data points. | ForecastPFN: Synthetically-Trained Zero-Shot Forecasting | [
"Samuel Dooley",
"Gurnoor Singh Khurana",
"Chirag Mohapatra",
"Siddartha Venkat Naidu",
"Colin White"
] | Conference | poster | 2311.01933 | [
"https://github.com/abacusai/forecastpfn"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tSEeRl7ACo | @inproceedings{
peng2023humanguided,
title={Human-Guided Complexity-Controlled Abstractions},
author={Andi Peng and Mycal Tucker and Eoin M. Kenny and Noga Zaslavsky and Pulkit Agrawal and Julie Shah},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tSEeRl7ACo}
} | Neural networks often learn task-specific latent representations that fail to generalize to novel settings or tasks. Conversely, humans learn discrete representations (i.e., concepts or words) at a variety of abstraction levels (e.g., "bird" vs. "sparrow") and use the appropriate abstraction based on tasks. Inspired by this, we train neural models to generate a spectrum of discrete representations, and control the complexity of the representations (roughly, how many bits are allocated for encoding inputs) by tuning the entropy of the distribution over representations. In finetuning experiments, using only a small number of labeled examples for a new task, we show that (1) tuning the representation to a task-appropriate complexity level supports the greatest finetuning performance, and (2) in a human-participant study, users were able to identify the appropriate complexity level for a downstream task via visualizations of discrete representations. Our results indicate a promising direction for rapid model finetuning by leveraging human insight. | Human-Guided Complexity-Controlled Abstractions | [
"Andi Peng",
"Mycal Tucker",
"Eoin M. Kenny",
"Noga Zaslavsky",
"Pulkit Agrawal",
"Julie Shah"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tRKimbAk5D | @inproceedings{
sun2023modeling,
title={Modeling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network},
author={Zitang Sun and Yen-Ju Chen and Yung-Hao Yang and Shin'ya Nishida},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tRKimbAk5D}
} | Visual motion processing is essential for humans to perceive and interact with dynamic environments. Despite extensive research in cognitive neuroscience, image-computable models that can extract informative motion flow from natural scenes in a manner consistent with human visual processing have yet to be established. Meanwhile, recent advancements in computer vision (CV), propelled by deep learning, have led to significant progress in optical flow estimation, a task closely related to motion perception. Here we propose an image-computable model of human motion perception by bridging the gap between biological and CV models. Specifically, we introduce a novel two-stage approach that combines trainable motion energy sensing with a recurrent self-attention network for adaptive motion integration and segregation. This model architecture aims to capture the computations in V1-MT, the core structure for motion perception in the biological visual system, while providing the ability to derive informative motion flow for a wide range of stimuli, including complex natural scenes. In silico neurophysiology reveals that our model's unit responses are similar to mammalian neural recordings regarding motion pooling and speed tuning. The proposed model can also replicate human responses to a range of stimuli examined in past psychophysical studies. The experimental results on the Sintel benchmark demonstrate that our model predicts human responses better than the ground truth, whereas the state-of-the-art CV models show the opposite. Our study provides a computational architecture consistent with human visual motion processing, although the physiological correspondence may not be exact. | Modeling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network | [
"Zitang Sun",
"Yen-Ju Chen",
"Yung-Hao Yang",
"Shin'ya Nishida"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tQYGjnxPOm | @inproceedings{
yu2023dcsg,
title={D\${\textasciicircum}2\${CSG}: Unsupervised Learning of Compact {CSG} Trees with Dual Complements and Dropouts},
author={Fenggen Yu and Qimin Chen and Maham Tanveer and Ali Mahdavi Amiri and Hao Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tQYGjnxPOm}
} | We present D$^2$CSG, a neural model composed of two dual and complementary network branches, with dropouts, for unsupervised learning of compact constructive solid geometry (CSG) representations of 3D CAD shapes. Our network is trained to reconstruct a 3D shape by a fixed-order assembly of quadric primitives, with both branches producing a union of primitive intersections or inverses. A key difference between D$^2$CSG and all prior neural CSG models is its dedicated residual branch to assemble the potentially complex shape complement, which is subtracted from an overall shape modeled by the cover branch. With the shape complements, our network is provably general, while the weight dropout further improves compactness of the CSG tree by removing redundant primitives. We demonstrate both quantitatively and qualitatively that D$^2$CSG produces compact CSG reconstructions with superior quality and more natural primitives than all existing alternatives, especially over complex and high-genus CAD shapes. | D^2CSG: Unsupervised Learning of Compact CSG Trees with Dual Complements and Dropouts | [
"Fenggen Yu",
"Qimin Chen",
"Maham Tanveer",
"Ali Mahdavi Amiri",
"Hao Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tP50lLiZIo | @inproceedings{
chen2023nonstationary,
title={Non-Stationary Bandits with Auto-Regressive Temporal Dependency},
author={Qinyi Chen and Negin Golrezaei and Djallel Bouneffouf},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tP50lLiZIo}
} | Traditional multi-armed bandit (MAB) frameworks, predominantly examined under stochastic or adversarial settings, often overlook the temporal dynamics inherent in many real-world applications such as recommendation systems and online advertising. This paper introduces a novel non-stationary MAB framework that captures the temporal structure of these real-world dynamics through an auto-regressive (AR) reward structure. We propose an algorithm that integrates two key mechanisms: (i) an alternation mechanism adept at leveraging temporal dependencies to dynamically balance exploration and exploitation, and (ii) a restarting mechanism designed to discard out-of-date information. Our algorithm achieves a regret upper bound that nearly matches the lower bound, with regret measured against a robust dynamic benchmark. Finally, via a real-world case study on tourism demand prediction, we demonstrate both the efficacy of our algorithm and the broader applicability of our techniques to more complex, rapidly evolving time series. | Non-Stationary Bandits with Auto-Regressive Temporal Dependency | [
"Qinyi Chen",
"Negin Golrezaei",
"Djallel Bouneffouf"
] | Conference | poster | 2210.16386 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tLrkjK128n | @inproceedings{
sukhija2023optimistic,
title={Optimistic Active Exploration of Dynamical Systems},
author={Bhavya Sukhija and Lenart Treven and Cansu Sancaktar and Sebastian Blaes and Stelian Coros and Andreas Krause},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tLrkjK128n}
} | Reinforcement learning algorithms commonly seek to optimize policies for solving one particular task. How should we explore an unknown dynamical system such that the estimated model allows us to solve multiple downstream tasks in a zero-shot manner?
In this paper, we address this challenge by developing an algorithm -- OPAX -- for active exploration. OPAX uses well-calibrated probabilistic models to quantify the epistemic uncertainty about the unknown dynamics. It optimistically---w.r.t. plausible dynamics---maximizes the information gain between the unknown dynamics and state observations. We show how the resulting optimization problem can be reduced to an optimal control problem that can be solved at each episode using standard approaches. We analyze our algorithm for general models, and, in the case of Gaussian process dynamics, we give a sample complexity bound and
show that the epistemic uncertainty converges to zero. In our experiments, we compare OPAX with other heuristic active exploration approaches on several environments. Our experiments show that OPAX is not only theoretically sound but also performs well for zero-shot planning on novel downstream tasks. | Optimistic Active Exploration of Dynamical Systems | [
"Bhavya Sukhija",
"Lenart Treven",
"Cansu Sancaktar",
"Sebastian Blaes",
"Stelian Coros",
"Andreas Krause"
] | Conference | poster | 2306.12371 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tLTtqySDFb | @inproceedings{
marconato2023not,
title={Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts},
author={Emanuele Marconato and Stefano Teso and Antonio Vergari and Andrea Passerini},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tLTtqySDFb}
} | Neuro-Symbolic (NeSy) predictive models hold the promise of improved compliance with given constraints, systematic generalization, and interpretability, as they allow one to infer labels that are consistent with some prior knowledge by reasoning over high-level concepts extracted from sub-symbolic inputs. It was recently shown that NeSy predictors are affected by *reasoning shortcuts*: they can attain high accuracy, but by leveraging concepts with \textit{unintended semantics}, thus coming short of their promised advantages. Yet, a systematic characterization of reasoning shortcuts and of potential mitigation strategies is missing. This work fills this gap by characterizing them as unintended optima of the learning objective and identifying four key conditions behind their occurrence. Based on this, we derive several natural mitigation strategies, and analyze their efficacy both theoretically and empirically. Our analysis shows reasoning shortcuts are difficult to deal with, casting doubts on the trustworthiness and interpretability of existing NeSy solutions. | Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts | [
"Emanuele Marconato",
"Stefano Teso",
"Antonio Vergari",
"Andrea Passerini"
] | Conference | poster | 2305.19951 | [
"https://github.com/ema-marconato/reasoning-shortcuts"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tLEDsaKuDh | @inproceedings{
lei2023emergent,
title={Emergent Communication in Interactive Sketch Question Answering},
author={Zixing Lei and Yiming Zhang and Yuxin Xiong and Siheng Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tLEDsaKuDh}
} | Vision-based emergent communication (EC) aims to learn to communicate through sketches and demystify the evolution of human communication. Ironically, previous works neglect multi-round interaction, which is indispensable in human communication. To fill this gap, we first introduce a novel Interactive Sketch Question Answering (ISQA) task, where two collaborative players are interacting through sketches to answer a question about an image. To accomplish this task, we design a new and efficient interactive EC system, which can achieve an effective balance among three evaluation factors, including the question answering accuracy, drawing complexity and human interpretability. Our experimental results demonstrate that the multi-round interactive mechanism facilitates targeted and efficient communication between intelligent agents. The code will be released. | Emergent Communication in Interactive Sketch Question Answering | [
"Zixing Lei",
"Yiming Zhang",
"Yuxin Xiong",
"Siheng Chen"
] | Conference | poster | 2310.15597 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tJwyg9Zg9G | @inproceedings{
chen2023parallelmentoring,
title={Parallel-mentoring for Offline Model-based Optimization},
author={Can Chen and Christopher Beckham and Zixuan Liu and Xue Liu and Christopher Pal},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tJwyg9Zg9G}
} | We study offline model-based optimization to maximize a black-box objective function with a static dataset of designs and scores. These designs encompass a variety of domains, including materials, robots, DNA sequences, and proteins. A common approach trains a proxy on the static dataset and performs gradient ascent to obtain new designs. However, this often results in poor designs due to the proxy inaccuracies for out-of-distribution designs. Recent studies indicate that (a) gradient ascent with a mean ensemble of proxies generally outperforms simple gradient ascent, and (b) a trained proxy provides weak ranking supervision signals for design selection. Motivated by (a) and (b), we propose $\textit{parallel-mentoring}$ as an effective and novel method that facilitates mentoring among proxies, creating a more robust ensemble to mitigate the out-of-distribution issue. We focus on the three-proxy case in the main paper and our method consists of two modules. The first module, $\textit{voting-based pairwise supervision}$, operates on three parallel proxies and captures their ranking supervision signals as pairwise comparison labels. These labels are combined through majority voting to generate consensus labels, which incorporates ranking supervision signals from all proxies and enables mutual mentoring. Yet, label noise arises due to possible incorrect consensus. To alleviate this, we introduce an $\textit{adaptive soft-labeling}$ module with soft-labels initialized as consensus labels. Based on bi-level optimization, this module fine-tunes proxies in the inner level and learns more accurate labels in the outer level to adaptively mentor proxies, resulting in a more robust ensemble. Experiments validate the effectiveness of our method. Our code is available here. | Parallel-mentoring for Offline Model-based Optimization | [
"Can Chen",
"Christopher Beckham",
"Zixuan Liu",
"Xue Liu",
"Christopher Pal"
] | Conference | poster | 2309.11592 | [
"https://github.com/ggchen1997/parallel_mentoring"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tJN664ZNVG | @inproceedings{
gu2023offline,
title={Offline {RL} with Discrete Proxy Representations for Generalizability in {POMDP}s},
author={Pengjie Gu and Xinyu Cai and Dong Xing and Xinrun Wang and Mengchen Zhao and Bo An},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tJN664ZNVG}
} | Offline Reinforcement Learning (RL) has demonstrated promising results in various applications by learning policies from previously collected datasets, reducing the need for online exploration and interactions. However, real-world scenarios usually involve partial observability, which brings crucial challenges to the deployment of offline RL methods: i) the policy trained on data with full observability is not robust against the masked observations during execution, and ii) the information of which parts of observations are masked is usually unknown during training. In order to address these challenges, we present Offline RL with DiscrEte pRoxy representations (ORDER), a probabilistic framework which leverages novel state representations to improve the robustness against diverse masked observabilities. Specifically, we propose a discrete representation of the states and use a proxy representation to recover the states from masked, partially observable trajectories. The training of ORDER can be compactly described as the following three steps. i) Learning the discrete state representations on data with full observations, ii) Training the decision module based on the discrete representations, and iii) Training the proxy discrete representations on the data with various partial observations, aligning with the discrete representations. We conduct extensive experiments to evaluate ORDER, showcasing its effectiveness in offline RL for diverse partially observable scenarios and highlighting the significance of discrete proxy representations in generalization performance.
ORDER is a flexible framework that can employ any offline RL algorithm, and we hope that ORDER can pave the way for the deployment of RL policies under various partial observabilities in the real world. | Offline RL with Discrete Proxy Representations for Generalizability in POMDPs | [
"Pengjie Gu",
"Xinyu Cai",
"Dong Xing",
"Xinrun Wang",
"Mengchen Zhao",
"Bo An"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tJ88RBqupo | @inproceedings{
breugel2023can,
title={Can You Rely on Your Model Evaluation? Improving Model Evaluation with Synthetic Test Data},
author={Boris van Breugel and Nabeel Seedat and Fergus Imrie and Mihaela van der Schaar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tJ88RBqupo}
} | Evaluating the performance of machine learning models on diverse and underrepresented subgroups is essential for ensuring fairness and reliability in real-world applications. However, accurately assessing model performance becomes challenging due to two main issues: (1) a scarcity of test data, especially for small subgroups, and (2) possible distributional shifts in the model's deployment setting, which may not align with the available test data. In this work, we introduce 3S Testing, a deep generative modeling framework to facilitate model evaluation by generating synthetic test sets for small subgroups and simulating distributional shifts. Our experiments demonstrate that 3S-Testing outperforms traditional baselines---including real test data alone---in estimating model performance on minority subgroups and under plausible distributional shifts. In addition, 3S offers intervals around its performance estimates, exhibiting superior coverage of the ground truth compared to existing approaches. Overall, these results raise the question of whether we need a paradigm shift away from limited real test data towards synthetic test data. | Can You Rely on Your Model Evaluation? Improving Model Evaluation with Synthetic Test Data | [
"Boris van Breugel",
"Nabeel Seedat",
"Fergus Imrie",
"Mihaela van der Schaar"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tIzbNQko3c | @inproceedings{
zhang2023pretraining,
title={Pre-Training Protein Encoder via Siamese Sequence-Structure Diffusion Trajectory Prediction},
author={Zuobai Zhang and Minghao Xu and Aurelie Lozano and Vijil Chenthamarakshan and Payel Das and Jian Tang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tIzbNQko3c}
} | Self-supervised pre-training methods on proteins have recently gained attention, with most approaches focusing on either protein sequences or structures, neglecting the exploration of their joint distribution, which is crucial for a comprehensive understanding of protein functions by integrating co-evolutionary information and structural characteristics. In this work, inspired by the success of denoising diffusion models in generative tasks, we propose the DiffPreT approach to pre-train a protein encoder by sequence-structure joint diffusion modeling. DiffPreT guides the encoder to recover the native protein sequences and structures from the perturbed ones along the joint diffusion trajectory, which acquires the joint distribution of sequences and structures. Considering the essential protein conformational variations, we enhance DiffPreT by a method called Siamese Diffusion Trajectory Prediction (SiamDiff) to capture the correlation between different conformers of a protein. SiamDiff attains this goal by maximizing the mutual information between representations of diffusion trajectories of structurally-correlated conformers. We study the effectiveness of DiffPreT and SiamDiff on both atom- and residue-level structure-based protein understanding tasks. Experimental results show that the performance of DiffPreT is consistently competitive on all tasks, and SiamDiff achieves new state-of-the-art performance, considering the mean ranks on all tasks. Code will be released upon acceptance. | Pre-Training Protein Encoder via Siamese Sequence-Structure Diffusion Trajectory Prediction | [
"Zuobai Zhang",
"Minghao Xu",
"Aurelie Lozano",
"Vijil Chenthamarakshan",
"Payel Das",
"Jian Tang"
] | Conference | spotlight | 2301.12068 | [
"https://github.com/deepgraphlearning/siamdiff"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tGuMwFnRZX | @inproceedings{
lu2023latent,
title={Latent Graph Inference with Limited Supervision},
author={Jianglin Lu and Yi Xu and Huan Wang and Yue Bai and Yun Fu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tGuMwFnRZX}
} | Latent graph inference (LGI) aims to jointly learn the underlying graph structure and node representations from data features. However, existing LGI methods commonly suffer from the issue of supervision starvation, where massive edge weights are learned without semantic supervision and do not contribute to the training loss. Consequently, these supervision-starved weights, which determine the predictions of testing samples, cannot be semantically optimal, resulting in poor generalization. In this paper, we observe that this issue is actually caused by the graph sparsification operation, which severely destroys the important connections established between pivotal nodes and labeled ones. To address this, we propose to restore the corrupted affinities and replenish the missed supervision for better LGI. The key challenge then lies in identifying the critical nodes and recovering the corrupted affinities. We begin by defining the pivotal nodes as k-hop starved nodes, which can be identified based on a given adjacency matrix. Considering the high computational burden, we further present a more efficient alternative inspired by CUR matrix decomposition. Subsequently, we eliminate the starved nodes by reconstructing the destroyed connections. Extensive experiments on representative benchmarks demonstrate that reducing the starved nodes consistently improves the performance of state-of-the-art LGI methods, especially under extremely limited supervision (6.12% improvement on Pubmed with a labeling rate of only 0.3%). | Latent Graph Inference with Limited Supervision | [
"Jianglin Lu",
"Yi Xu",
"Huan Wang",
"Yue Bai",
"Yun Fu"
] | Conference | poster | 2310.04314 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tGPx7HdBr4 | @inproceedings{
liu2023distributionfree,
title={Distribution-Free Model-Agnostic Regression Calibration via Nonparametric Methods},
author={Shang Liu and Zhongze Cai and Xiaocheng Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tGPx7HdBr4}
} | In this paper, we consider the uncertainty quantification problem for regression models. Specifically, we consider an individual calibration objective for characterizing the quantiles of the prediction model. While such an objective is well-motivated from downstream tasks such as newsvendor cost, the existing methods have been largely heuristic and lack statistical guarantees in terms of individual calibration. We show via simple examples that the existing methods focusing on population-level calibration guarantees such as average calibration or sharpness can lead to harmful and unexpected results. We propose simple nonparametric calibration methods that are agnostic of the underlying prediction model and enjoy both computational efficiency and statistical consistency. Our approach enables a better understanding of the possibility of individual calibration, and we establish matching upper and lower bounds for the calibration error of our proposed methods. Technically, our analysis combines the nonparametric analysis with a covering number argument for parametric analysis, which advances the existing theoretical analyses in the literature of nonparametric density estimation and quantile bandit problems. Importantly, the nonparametric perspective sheds new theoretical insights into regression calibration in terms of the curse of dimensionality and reconciles the existing results on the impossibility of individual calibration. To our knowledge, we make the first effort to reach both individual calibration and finite-sample guarantees with minimal assumptions in terms of conformal prediction. Numerical experiments show the advantage of such a simple approach under various metrics, and also under covariate shift. We hope our work provides a simple benchmark and a starting point of theoretical ground for future research on regression calibration. | Distribution-Free Model-Agnostic Regression Calibration via Nonparametric Methods | [
"Shang Liu",
"Zhongze Cai",
"Xiaocheng Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tFsxtqGmkn | @inproceedings{
jain2023maximum,
title={Maximum State Entropy Exploration using Predecessor and Successor Representations},
author={Arnav Kumar Jain and Lucas Lehnert and Irina Rish and Glen Berseth},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tFsxtqGmkn}
} | Animals have a developed ability to explore that aids them in important tasks such as locating food, exploring for shelter, and finding misplaced items. These exploration skills necessarily track where they have been so that they can plan for finding items with relative efficiency. Contemporary exploration algorithms often learn a less efficient exploration strategy because they either condition only on the current state or simply rely on making random open-loop exploratory moves. In this work, we propose $\eta\psi$-Learning, a method to learn efficient exploratory policies by conditioning on past episodic experience to make the next exploratory move. Specifically, $\eta\psi$-Learning learns an exploration policy that maximizes the entropy of the state visitation distribution of a single trajectory. Furthermore, we demonstrate how variants of the predecessor representation and successor representations can be combined to predict the state visitation entropy. Our experiments demonstrate the efficacy of $\eta\psi$-Learning to strategically explore the environment and maximize the state coverage with limited samples. | Maximum State Entropy Exploration using Predecessor and Successor Representations | [
"Arnav Kumar Jain",
"Lucas Lehnert",
"Irina Rish",
"Glen Berseth"
] | Conference | poster | 2306.14808 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tFeaLw9AWn | @inproceedings{
choudhury2023singlecall,
title={Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions},
author={Sayantan Choudhury and Eduard Gorbunov and Nicolas Loizou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tFeaLw9AWn}
} | Single-call stochastic extragradient methods, like stochastic past extragradient (SPEG) and stochastic optimistic gradient (SOG), have gained a lot of interest in recent years and are one of the most efficient algorithms for solving large-scale min-max optimization and variational inequalities problems (VIP) appearing in various machine learning tasks. However, despite their undoubted popularity, current convergence analyses of SPEG and SOG require strong assumptions like bounded variance or growth conditions. In addition, several important questions regarding the convergence properties of these methods are still open, including mini-batching, efficient step-size selection, and convergence guarantees under different sampling strategies. In this work, we address these questions and provide convergence guarantees for two large classes of structured non-monotone VIPs: (i) quasi-strongly monotone problems (a generalization of strongly monotone problems) and (ii) weak Minty variational inequalities (a generalization of monotone and Minty VIPs). We introduce the expected residual condition, explain its benefits, and show how it allows us to obtain a strictly weaker bound than previously used growth conditions, expected co-coercivity, or bounded variance assumptions. Finally, our convergence analysis holds under the arbitrary sampling paradigm, which includes importance sampling and various mini-batching strategies as special cases. | Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions | [
"Sayantan Choudhury",
"Eduard Gorbunov",
"Nicolas Loizou"
] | Conference | poster | 2302.14043 | [
"https://github.com/isayantan/single-call-stochastic-extragradient-methods"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tF7W8ai8J3 | @inproceedings{
zhang2023federated,
title={Federated Compositional Deep {AUC} Maximization},
author={Xinwen Zhang and Yihan Zhang and Tianbao Yang and Richard Souvenir and Hongchang Gao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tF7W8ai8J3}
} | Federated learning has attracted increasing attention due to the promise of balancing privacy and large-scale learning; numerous approaches have been proposed. However, most existing approaches focus on problems with balanced data, and prediction performance is far from satisfactory for many real-world applications where the number of samples in different classes is highly imbalanced. To address this challenging problem, we developed a novel federated learning method for imbalanced data by directly optimizing the area under curve (AUC) score. In particular, we formulate the AUC maximization problem as a federated compositional minimax optimization problem, develop a local stochastic compositional gradient descent ascent with momentum algorithm, and provide bounds on the computational and communication complexities of our algorithm. To the best of our knowledge, this is the first work to achieve such favorable theoretical results. Finally, extensive experimental results confirm the efficacy of our method. | Federated Compositional Deep AUC Maximization | [
"Xinwen Zhang",
"Yihan Zhang",
"Tianbao Yang",
"Richard Souvenir",
"Hongchang Gao"
] | Conference | poster | 2304.10101 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tEmFyqjaJh | @inproceedings{
yuan2023ppi,
title={{PP}i: Pretraining Brain Signal Model for Patient-independent Seizure Detection},
author={Zhizhang Yuan and Daoze Zhang and Yang Yang and Junru Chen and Yafeng Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tEmFyqjaJh}
} | Automated seizure detection is of great importance to epilepsy diagnosis and treatment. An emerging method used in seizure detection, stereoelectroencephalography (SEEG), can provide detailed and stereoscopic brainwave information. However, modeling SEEG in clinical scenarios faces challenges such as huge domain shift between different patients and dramatic pattern evolution among different brain areas. In this study, we propose a Pretraining-based model for Patient-independent seizure detection (PPi) to address these challenges. Firstly, we design two novel self-supervised tasks which can extract rich information from abundant SEEG data while preserving the unique characteristics between brain signals recorded from different brain areas. Then two techniques, channel background subtraction and brain region enhancement, are proposed to effectively tackle the domain shift problem. Extensive experiments show that PPi outperforms the SOTA baselines on two public datasets and a real-world clinical dataset collected by ourselves, which demonstrates the effectiveness and practicability of PPi. Finally, visualization analysis illustrates the rationality of the two domain generalization techniques. | PPi: Pretraining Brain Signal Model for Patient-independent Seizure Detection | [
"Zhizhang Yuan",
"Daoze Zhang",
"Yang Yang",
"Junru Chen",
"Yafeng Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tEKBU5XOTw | @inproceedings{
schilling2023safety,
title={Safety Verification of Decision-Tree Policies in Continuous Time},
author={Christian Schilling and Anna Lukina and Emir Demirovi{\'c} and Kim Guldstrand Larsen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tEKBU5XOTw}
} | Decision trees have gained popularity as interpretable surrogate models for learning-based control policies. However, providing safety guarantees for systems controlled by decision trees is an open challenge. We show that the problem is undecidable even for systems with the simplest dynamics, and PSPACE-complete for finite-horizon properties. The latter can be verified for discrete-time systems via bounded model checking. However, for continuous-time systems, such an approach requires discretization, thereby weakening the guarantees for the original system. This paper presents the first algorithm to directly verify decision-tree controlled systems in continuous time. The key aspect of our method is exploiting the decision-tree structure to propagate a set-based approximation through the decision nodes. We demonstrate the effectiveness of our approach by verifying the safety of several decision trees distilled to imitate neural-network policies for nonlinear systems. | Safety Verification of Decision-Tree Policies in Continuous Time | [
"Christian Schilling",
"Anna Lukina",
"Emir Demirović",
"Kim Guldstrand Larsen"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tECyQO1QOp | @inproceedings{
dickerson2023doubly,
title={Doubly Constrained Fair Clustering},
author={John P Dickerson and Seyed A. Esmaeili and Jamie Heather Morgenstern and Claire Jie Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tECyQO1QOp}
} | The remarkable attention which fair clustering has received in the last few years has resulted in a significant number of different notions of fairness. Despite the fact that these notions are well-justified, they are often motivated and studied in a disjoint manner where one fairness desideratum is considered exclusively in isolation from the others. This leaves the understanding of the relations between different fairness notions as an important open problem in fair clustering. In this paper, we take the first step in this direction. Specifically, we consider the two most prominent demographic representation fairness notions in clustering: (1) Group Fairness ($\textbf{GF}$), where the different demographic groups are supposed to have close to population-level representation in each cluster and (2) Diversity in Center Selection ($\textbf{DS}$), where the selected centers are supposed to have close to population-level representation of each group. We show that given a constant approximation algorithm for one constraint ($\textbf{GF}$ or $\textbf{DS}$ only) we can obtain a constant approximation solution that satisfies both constraints simultaneously. Interestingly, we prove that any given solution that satisfies the $\textbf{GF}$ constraint can always be post-processed at a bounded degradation to the clustering cost to additionally satisfy the $\textbf{DS}$ constraint while the same statement is not true given a solution that satisfies $\textbf{DS}$ instead. Furthermore, we show that both $\textbf{GF}$ and $\textbf{DS}$ are incompatible (having an empty feasibility set in the worst case) with a collection of other distance-based fairness notions. Finally, we carry experiments to validate our theoretical findings. | Doubly Constrained Fair Clustering | [
"John P Dickerson",
"Seyed A. Esmaeili",
"Jamie Heather Morgenstern",
"Claire Jie Zhang"
] | Conference | poster | 2305.19475 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tDAu3FPJn9 | @inproceedings{
huang2023a,
title={A Robust and Opponent-Aware League Training Method for StarCraft {II}},
author={Ruozi Huang and Xipeng Wu and Hongsheng Yu and Zhong Fan and Haobo Fu and QIANG FU and Yang Wei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tDAu3FPJn9}
} | It is extremely difficult to train a superhuman Artificial Intelligence (AI) for games of similar size to StarCraft II. AlphaStar is the first AI that beat human professionals in the full game of StarCraft II, using a league training framework that is inspired by a game-theoretic approach. In this paper, we improve AlphaStar's league training in two significant aspects. We train goal-conditioned exploiters, whose abilities of spotting weaknesses in the main agent and the entire league are greatly improved compared to the unconditioned exploiters in AlphaStar. In addition, we endow the agents in the league with the new ability of opponent modeling, which makes the agent more responsive to the opponent's real-time strategy. Based on these improvements, we train a better and superhuman AI with orders of magnitude less resources than AlphaStar (see Table 1 for a full comparison). Considering the iconic role of StarCraft II in game AI research, we believe our method and results on StarCraft II provide valuable design principles on how one would utilize the general league training framework for obtaining a least-exploitable strategy in various, large-scale, real-world games. | A Robust and Opponent-Aware League Training Method for StarCraft II | [
"Ruozi Huang",
"Xipeng Wu",
"Hongsheng Yu",
"Zhong Fan",
"Haobo Fu",
"QIANG FU",
"Yang Wei"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tC0r8duG9z | @inproceedings{
mao2023on,
title={On the Power of {SVD} in the Stochastic Block Model},
author={Xinyu Mao and Jiapeng Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tC0r8duG9z}
} | A popular heuristic method for improving clustering results is to apply dimensionality reduction before running clustering algorithms.
It has been observed that spectral-based dimensionality reduction tools, such as PCA or SVD, improve the performance of clustering algorithms in many applications. This phenomenon indicates that the spectral method not only serves as a dimensionality reduction tool, but also contributes to the clustering procedure in some sense. It is an interesting question to understand the behavior of spectral steps in clustering problems.
As an initial step in this direction, this paper studies the power of vanilla-SVD algorithm in the stochastic block model (SBM). We show that, in the symmetric setting, vanilla-SVD algorithm recovers all clusters correctly. This result answers an open question posed by Van Vu (Combinatorics Probability and Computing, 2018) in the symmetric setting. | On the Power of SVD in the Stochastic Block Model | [
"Xinyu Mao",
"Jiapeng Zhang"
] | Conference | poster | 2309.15322 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tBwRbgsol1 | @inproceedings{
eaton2023replicable,
title={Replicable Reinforcement Learning},
author={ERIC EATON and Marcel Hussing and Michael Kearns and Jessica Sorrell},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tBwRbgsol1}
} | The replicability crisis in the social, behavioral, and data sciences has led to the formulation of algorithm frameworks for replicability --- i.e., a requirement that an algorithm produce identical outputs (with high probability) when run on two different samples from the same underlying distribution. While still in its infancy, provably replicable algorithms have been developed for many fundamental tasks in machine learning and statistics, including statistical query learning, the heavy hitters problem, and distribution testing. In this work we initiate the study of replicable reinforcement learning, providing a provably replicable algorithm for parallel value iteration, and a provably replicable version of R-Max in the episodic setting. These are the first formal replicability results for control problems, which present different challenges for replication than batch learning settings. | Replicable Reinforcement Learning | [
"ERIC EATON",
"Marcel Hussing",
"Michael Kearns",
"Jessica Sorrell"
] | Conference | poster | 2305.15284 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tBib2fWr3r | @inproceedings{
zhang2023understanding,
title={Understanding Deep Gradient Leakage via Inversion Influence Functions},
author={Haobo Zhang and Junyuan Hong and Yuyang Deng and Mehrdad Mahdavi and Jiayu Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tBib2fWr3r}
} | Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
This attack casts significant privacy challenges on distributed learning from clients with sensitive data, where clients are required to share gradients.
Defending against such attacks requires an understanding, currently lacking, of when and how privacy leakage happens, mostly because of the black-box nature of deep networks.
In this paper, we propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients by implicitly solving the DGL problem.
Compared to directly solving DGL, I$^2$F is scalable for analyzing deep networks, requiring only oracle access to gradients and Jacobian-vector products.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, modalities, attack implementations, and perturbation-based defenses.
With this novel tool, we provide insights into effective gradient perturbation directions, the unfairness of privacy protection, and privacy-preferred model initialization.
Our codes are provided in https://github.com/illidanlab/inversion-influence-function. | Understanding Deep Gradient Leakage via Inversion Influence Functions | [
"Haobo Zhang",
"Junyuan Hong",
"Yuyang Deng",
"Mehrdad Mahdavi",
"Jiayu Zhou"
] | Conference | poster | 2309.13016 | [
"https://github.com/illidanlab/inversion-influence-function"
] | https://huggingface.co/papers/2309.13016 | 1 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=tAwjG5bM7H | @inproceedings{
zhuang2023a,
title={A Bounded Ability Estimation for Computerized Adaptive Testing},
author={Yan Zhuang and Qi Liu and GuanHao Zhao and Zhenya Huang and Weizhe Huang and Zachary Pardos and Enhong Chen and Jinze Wu and Xin Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=tAwjG5bM7H}
} | Computerized adaptive testing (CAT), as a tool that can efficiently measure a student's ability, has been widely used in various standardized tests (e.g., GMAT and GRE). The adaptivity of CAT refers to the selection of the most informative questions for each student, reducing test length. Existing CAT methods do not explicitly target ability estimation accuracy since there is no student's true ability as ground truth; therefore, these methods cannot be guaranteed to make the estimate converge to the true ability with such limited responses. In this paper, we analyze the statistical properties of estimation and find a theoretical approximation of the true ability: the ability estimated by full responses to the question bank. Based on this, a Bounded Ability Estimation framework for CAT (BECAT) is proposed in a data-summary manner, which selects a question subset that closely matches the gradient of the full responses. Thus, we develop an expected gradient difference approximation to design a simple greedy selection algorithm, and show the rigorous theoretical and error upper-bound guarantees of its ability estimate. Experiments on both real-world and synthetic datasets show that it can reach the same estimation accuracy using 15\% fewer questions on average, significantly reducing test length. | A Bounded Ability Estimation for Computerized Adaptive Testing | [
"Yan Zhuang",
"Qi Liu",
"GuanHao Zhao",
"Zhenya Huang",
"Weizhe Huang",
"Zachary Pardos",
"Enhong Chen",
"Jinze Wu",
"Xin Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=t9Swbo82dB | @inproceedings{
yang2023uncertainty,
title={Uncertainty Estimation for Safety-critical Scene Segmentation via Fine-grained Reward Maximization},
author={Hongzheng Yang and Cheng Chen and Yueyao Chen and Markus Scheppach and Hon Chi Yip and Qi Dou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t9Swbo82dB}
} | Uncertainty estimation plays an important role for future reliable deployment of deep segmentation models in safety-critical scenarios such as medical applications. However, existing methods for uncertainty estimation have been limited by the lack of explicit guidance for calibrating the prediction risk and model confidence. In this work, we propose a novel fine-grained reward maximization (FGRM) framework, to address uncertainty estimation by directly utilizing an uncertainty metric related reward function with a reinforcement learning based model tuning algorithm. This would benefit the model uncertainty estimation with direct optimization guidance for model calibration. Specifically, our method designs a new uncertainty estimation reward function using the calibration metric, which is maximized to fine-tune an evidential learning pre-trained segmentation model for calibrating prediction risk. Importantly, we innovate an effective fine-grained parameter update scheme, which imposes fine-grained reward-weighting of each network parameter according to the parameter importance quantified by the fisher information matrix. To the best of our knowledge, this is the first work exploring reward optimization for model uncertainty estimation in safety-critical vision tasks. The effectiveness of our method is demonstrated on two large safety-critical surgical scene segmentation datasets under two different uncertainty estimation settings. With real-time one forward pass at inference, our method outperforms state-of-the-art methods by a clear margin on all the calibration metrics of uncertainty estimation, while maintaining a high task accuracy for the segmentation results. Code is available at https://github.com/med-air/FGRM. | Uncertainty Estimation for Safety-critical Scene Segmentation via Fine-grained Reward Maximization | [
"Hongzheng Yang",
"Cheng Chen",
"Yueyao Chen",
"Markus Scheppach",
"Hon Chi Yip",
"Qi Dou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=t877958UGZ | @inproceedings{
luo2023cheap,
title={Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models},
author={Gen Luo and Yiyi Zhou and Tianhe Ren and Shengxin Chen and Xiaoshuai Sun and Rongrong Ji},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t877958UGZ}
} | Recently, growing interest has been aroused in extending the multimodal capability of large language models (LLMs), e.g., vision-language (VL) learning, which is regarded as the next milestone of artificial general intelligence. However, existing solutions are prohibitively expensive, which not only need to optimize excessive parameters, but also require another large-scale pre-training before VL instruction tuning. In this paper, we propose a novel and affordable solution for the effective VL adaption of LLMs, called Mixture-of-Modality Adaptation (MMA). Instead of using large neural networks to connect the image encoder and LLM, MMA adopts lightweight modules, i.e., adapters, to bridge the gap between LLMs and VL tasks, which also enables the joint optimization of the image and language models. Meanwhile, MMA is also equipped with a routing algorithm to help LLMs achieve an automatic shift between single- and multi-modal instructions without compromising their ability of natural language understanding. To validate MMA, we apply it to a recent LLM called LLaMA and term this formed large vision-language instructed model as LaVIN. To validate MMA and LaVIN, we conduct extensive experiments under two setups, namely multimodal science question answering and multimodal dialogue. The experimental results not only demonstrate the competitive performance and the superior training efficiency of LaVIN than existing multimodal LLMs, but also confirm its great potential as a general-purpose chatbot. More importantly, the actual expenditure of LaVIN is extremely cheap, e.g., only 1.4 training hours with 3.8M trainable parameters, greatly confirming the effectiveness of MMA. Our code is anonymously released at: https://anonymous.4open.science/r/LaVIN--1067. | Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models | [
"Gen Luo",
"Yiyi Zhou",
"Tianhe Ren",
"Shengxin Chen",
"Xiaoshuai Sun",
"Rongrong Ji"
] | Conference | poster | 2305.15023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=t7ozN4AXd0 | @inproceedings{
sun2023rewiring,
title={Rewiring Neurons in Non-Stationary Environments},
author={Zhicheng Sun and Yadong MU},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t7ozN4AXd0}
} | The human brain rewires itself for neuroplasticity in the presence of new tasks. We are inspired to harness this key process in continual reinforcement learning, prioritizing adaptation to non-stationary environments. In distinction to existing rewiring approaches that rely on pruning or dynamic routing, which may limit network capacity and plasticity, this work presents a novel rewiring scheme by permuting hidden neurons. Specifically, the neuron permutation is parameterized to be end-to-end learnable and can rearrange all available synapses to explore a large span of weight space, thereby promoting adaptivity. In addition, we introduce two main designs to steer the rewiring process in continual reinforcement learning: first, a multi-mode rewiring strategy is proposed which diversifies the policy and encourages exploration when encountering new environments. Secondly, to ensure stability on history tasks, the network is devised to cache each learned wiring while subtly updating its weights, allowing for retrospective recovery of any previous state appropriate for the task. Meanwhile, an alignment mechanism is curated to achieve better plasticity-stability tradeoff by jointly optimizing cached wirings and weights. Our proposed method is comprehensively evaluated on 18 continual reinforcement learning scenarios ranging from locomotion to manipulation, demonstrating its advantages over state-of-the-art competitors in performance-efficiency tradeoffs. Code is available at https://github.com/feifeiobama/RewireNeuron. | Rewiring Neurons in Non-Stationary Environments | [
"Zhicheng Sun",
"Yadong MU"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=t7lnhhi7De | @inproceedings{
deleu2023joint,
title={Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network},
author={Tristan Deleu and Mizu Nishikawa-Toomey and Jithendaraa Subramanian and Nikolay Malkin and Laurent Charlin and Yoshua Bengio},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t7lnhhi7De}
} | Generative Flow Networks (GFlowNets), a class of generative models over discrete and structured sample spaces, have been previously applied to the problem of inferring the marginal posterior distribution over the directed acyclic graph (DAG) of a Bayesian Network, given a dataset of observations. Based on recent advances extending this framework to non-discrete sample spaces, we propose in this paper to approximate the joint posterior over not only the structure of a Bayesian Network, but also the parameters of its conditional probability distributions. We use a single GFlowNet whose sampling policy follows a two-phase process: the DAG is first generated sequentially one edge at a time, and then the corresponding parameters are picked once the full structure is known. Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models of the Bayesian Network, making our approach applicable even to non-linear models parametrized by neural networks. We show that our method, called JSP-GFN, offers an accurate approximation of the joint posterior, while comparing favorably against existing methods on both simulated and real data. | Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network | [
"Tristan Deleu",
"Mizu Nishikawa-Toomey",
"Jithendaraa Subramanian",
"Nikolay Malkin",
"Laurent Charlin",
"Yoshua Bengio"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=t7ZowrDWVw | @inproceedings{
xia2023achieving,
title={Achieving Cross Modal Generalization with Multimodal Unified Representation},
author={Yan Xia and Hai Huang and Jieming Zhu and Zhou Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t7ZowrDWVw}
} | This paper introduces a novel task called Cross Modal Generalization (CMG), which addresses the challenge of learning a unified discrete representation from paired multimodal data during pre-training. Then in downstream tasks, the model can achieve zero-shot generalization ability in other modalities when only one modal is labeled. Existing approaches in multimodal representation learning focus more on coarse-grained alignment or rely on the assumption that
information from different modalities is completely aligned, which is impractical in real-world scenarios. To overcome this limitation, we propose \textbf{Uni-Code}, which contains two key contributions: the Dual Cross-modal Information Disentangling (DCID) module and the Multi-Modal Exponential Moving Average (MM-EMA). These methods facilitate bidirectional supervision between modalities and align semantically equivalent information in a shared discrete latent space, enabling fine-grained unified representation of multimodal sequences. During pre-training, we investigate various modality combinations, including audio-visual, audio-text, and the tri-modal combination of audio-visual-text. Extensive experiments on various downstream tasks, i.e., cross-modal event classification, localization, cross-modal retrieval, query-based video segmentation, and cross-dataset event localization, demonstrate the effectiveness of our proposed methods. The code is available at https://github.com/haihuangcode/CMG. | Achieving Cross Modal Generalization with Multimodal Unified Representation | [
"Yan Xia",
"Hai Huang",
"Jieming Zhu",
"Zhou Zhao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=t6nA7x3GAC | @inproceedings{
campbell2023transdimensional,
title={Trans-Dimensional Generative Modeling via Jump Diffusion Models},
author={Andrew Campbell and William Harvey and Christian Dietrich Weilbach and Valentin De Bortoli and Tom Rainforth and Arnaud Doucet},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t6nA7x3GAC}
} | We propose a new class of generative model that naturally handles data of varying dimensionality by jointly modeling the state and dimension of each datapoint. The generative process is formulated as a jump diffusion process that makes jumps between different dimensional spaces. We first define a dimension destroying forward noising process, before deriving the dimension creating time-reversed generative process along with a novel evidence lower bound training objective for learning to approximate it.
Simulating our learned approximation to the time-reversed generative process then provides an effective way of sampling data of varying dimensionality by jointly generating state values and dimensions.
We demonstrate our approach on molecular and video datasets of varying dimensionality, reporting better compatibility with test-time diffusion guidance imputation tasks and improved interpolation capabilities versus fixed dimensional models that generate state values and dimensions separately. | Trans-Dimensional Generative Modeling via Jump Diffusion Models | [
"Andrew Campbell",
"William Harvey",
"Christian Dietrich Weilbach",
"Valentin De Bortoli",
"Tom Rainforth",
"Arnaud Doucet"
] | Conference | spotlight | 2305.16261 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=t3vPEjgNtj | @inproceedings{
paniagua2023quadattack,
title={QuadAttac\$K\$: A Quadratic Programming Approach to Learning Ordered Top-\$K\$ Adversarial Attacks},
author={Thomas Paniagua and Ryan Grainger and Tianfu Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t3vPEjgNtj}
} | The adversarial vulnerability of Deep Neural Networks (DNNs) has been well-known and widely concerned, often under the context of learning top-$1$ attacks (e.g., fooling a DNN to classify a cat image as dog). This paper shows that the concern is much more serious by learning significantly more aggressive ordered top-$K$ clear-box targeted attacks proposed in~\citep{zhang2020learning}. We propose a novel and rigorous quadratic programming (QP) method of learning ordered top-$K$ attacks with low computing cost, dubbed as \textbf{QuadAttac$K$}. Our QuadAttac$K$ directly solves the QP to satisfy the attack constraint in the feature embedding space (i.e., the input space to the final linear classifier), which thus exploits the semantics of the feature embedding space (i.e., the principle of class coherence). With the optimized feature embedding vector perturbation, it then computes the adversarial perturbation in the data space via the vanilla one-step back-propagation. In experiments, the proposed QuadAttac$K$ is tested in the ImageNet-1k classification using ResNet-50, DenseNet-121, and Vision Transformers (ViT-B and DEiT-S). It successfully pushes the boundary of successful ordered top-$K$ attacks from $K=10$ up to $K=20$ at a cheap budget ($1\times 60$) and further improves attack success rates for $K=5$ for all tested models, while retaining the performance for $K=1$. | QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks | [
"Thomas Paniagua",
"Ryan Grainger",
"Tianfu Wu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=t3WCiGjHqd | @inproceedings{
bertran2023scalable,
title={Scalable Membership Inference Attacks via Quantile Regression},
author={Martin Andres Bertran and Shuai Tang and Aaron Roth and Michael Kearns and Jamie Heather Morgenstern and Steven Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t3WCiGjHqd}
} | Membership inference attacks are designed to determine, using black box access to trained models, whether a particular example was used in training or not. Membership inference can be formalized as a hypothesis testing problem. The most effective existing attacks estimate the distribution of some test statistic (usually the model's confidence on the true label) on points that were (and were not) used in training by training many \emph{shadow models}---i.e. models of the same architecture as the model being attacked, trained on a random subsample of data. While effective, these attacks are extremely computationally expensive, especially when the model under attack is large. \footnotetext[0]{
Martin and Shuai are the lead authors, and other authors are ordered alphabetically. \{maberlop,shuat\}@amazon.com}
We introduce a new class of attacks based on performing quantile regression on the distribution of confidence scores induced by the model under attack on points that are not used in training. We show that our method is competitive with state-of-the-art shadow model attacks, while requiring substantially less compute because our attack requires training only a single model. Moreover, unlike shadow model attacks, our proposed attack does not require any knowledge of the architecture of the model under attack and is therefore truly ``black-box". We show the efficacy of this approach in an extensive series of experiments on various datasets and model architectures. Our code is available at \href{https://github.com/amazon-science/quantile-mia}{github.com/amazon-science/quantile-mia.} | Scalable Membership Inference Attacks via Quantile Regression | [
"Martin Andres Bertran",
"Shuai Tang",
"Aaron Roth",
"Michael Kearns",
"Jamie Heather Morgenstern",
"Steven Wu"
] | Conference | poster | 2307.03694 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=t2hEZadBBk | @inproceedings{
huang2023tailoring,
title={Tailoring Self-Attention for Graph via Rooted Subtrees},
author={Siyuan Huang and Yunchong Song and Jiayue Zhou and Zhouhan Lin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t2hEZadBBk}
} | Attention mechanisms have made significant strides in graph learning, yet they still exhibit notable limitations: local attention faces challenges in capturing long-range information due to the inherent problems of the message-passing scheme, while global attention cannot reflect the hierarchical neighborhood structure and fails to capture fine-grained local information. In this paper, we propose a novel multi-hop graph attention mechanism, named Subtree Attention (STA), to address the aforementioned issues. STA seamlessly bridges the fully-attentional structure and the rooted subtree, with theoretical proof that STA approximates the global attention under extreme settings. By allowing direct computation of attention weights among multi-hop neighbors, STA mitigates the inherent problems in existing graph attention mechanisms. Further we devise an efficient form for STA by employing kernelized softmax, which yields a linear time complexity. Our resulting GNN architecture, the STAGNN, presents a simple yet performant STA-based graph neural network leveraging a hop-aware attention strategy. Comprehensive evaluations on ten node classification datasets demonstrate that STA-based models outperform existing graph transformers and mainstream GNNs. The code
is available at https://github.com/LUMIA-Group/SubTree-Attention. | Tailoring Self-Attention for Graph via Rooted Subtrees | [
"Siyuan Huang",
"Yunchong Song",
"Jiayue Zhou",
"Zhouhan Lin"
] | Conference | poster | 2310.05296 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=t1jLRFvBqm | @inproceedings{
zadaianchuk2023objectcentric,
title={Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities},
author={Andrii Zadaianchuk and Maximilian Seitzer and Georg Martius},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t1jLRFvBqm}
} | Unsupervised video-based object-centric learning is a promising avenue to learn structured representations from large, unlabeled video collections, but previous approaches have only managed to scale to real-world datasets in restricted domains.
Recently, it was shown that the reconstruction of pre-trained self-supervised features leads to object-centric representations on unconstrained real-world image datasets.
Building on this approach, we propose a novel way to use such pre-trained features in the form of a temporal feature similarity loss.
This loss encodes semantic and temporal correlations between image patches and is a natural way to introduce a motion bias for object discovery.
We demonstrate that this loss leads to state-of-the-art performance on the challenging synthetic MOVi datasets.
When used in combination with the feature reconstruction loss, our model is the first object-centric video model that scales to unconstrained video datasets such as YouTube-VIS.
https://martius-lab.github.io/videosaur/ | Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities | [
"Andrii Zadaianchuk",
"Maximilian Seitzer",
"Georg Martius"
] | Conference | poster | 2306.04829 | [
"https://github.com/martius-lab/videosaur"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=t0fkjO4aZj | @inproceedings{
chu2023a,
title={A unified framework for information-theoretic generalization bounds},
author={Yifeng Chu and Maxim Raginsky},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=t0fkjO4aZj}
} | This paper presents a general methodology for deriving information-theoretic generalization bounds for learning algorithms. The main technical tool is a probabilistic decorrelation lemma based on a change of measure and a relaxation of Young's inequality in $L_{\psi_p}$ Orlicz spaces. Using the decorrelation lemma in combination with other techniques, such as symmetrization, couplings, and chaining in the space of probability measures, we obtain new upper bounds on the generalization error, both in expectation and in high probability, and recover as special cases many of the existing generalization bounds, including the ones based on mutual information, conditional mutual information, stochastic chaining, and PAC-Bayes inequalities. In addition, the Fernique--Talagrand upper bound on the expected supremum of a subgaussian process emerges as a special case. | A unified framework for information-theoretic generalization bounds | [
"Yifeng Chu",
"Maxim Raginsky"
] | Conference | poster | 2305.11042 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=szFqlNRxeS | @inproceedings{
hu2023riemannian,
title={Riemannian Projection-free Online Learning},
author={Zihao Hu and Guanghui Wang and Jacob Abernethy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=szFqlNRxeS}
} | The projection operation is a critical component in a wide range of optimization algorithms, such as online gradient descent (OGD),
for enforcing constraints and achieving optimal regret bounds. However, it suffers from computational complexity limitations in high-dimensional settings or
when dealing with ill-conditioned constraint sets. Projection-free algorithms address this issue by replacing the projection oracle with more efficient optimization
subroutines. But to date, these methods have been developed primarily in the Euclidean setting, and while there has been growing interest in optimization on
Riemannian manifolds, there has been essentially no work in trying to utilize projection-free tools here. An apparent issue is that non-trivial affine functions
are generally non-convex in such domains. In this paper, we present methods for obtaining sub-linear regret guarantees in online geodesically convex optimization
on curved spaces for two scenarios: when we have access to (a) a separation oracle or (b) a linear optimization oracle. For geodesically convex losses, and
when a separation oracle is available, our algorithms achieve $O(T^{\frac{1}{2}})$, $O(T^{\frac{3}{4}})$ and $O(T^{\frac{1}{2}})$ adaptive regret guarantees in the full
information setting, the bandit setting with one-point feedback and the bandit setting with two-point feedback, respectively. When a linear optimization oracle is
available, we obtain regret rates of $O(T^{\frac{3}{4}})$ for geodesically convex losses
and $O(T^{\frac{2}{3}}\log T)$ for strongly geodesically convex losses. | Riemannian Projection-free Online Learning | [
"Zihao Hu",
"Guanghui Wang",
"Jacob Abernethy"
] | Conference | poster | 2305.19349 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sxao2udWXi | @inproceedings{
kothapalli2023a,
title={A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks},
author={Vignesh Kothapalli and Tom Tirer and Joan Bruna},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sxao2udWXi}
} | Graph neural networks (GNNs) have become increasingly popular for classification tasks on graph-structured data. Yet, the interplay between graph topology and feature evolution in GNNs is not well understood. In this paper, we focus on node-wise classification, illustrated with community detection on stochastic block model graphs, and explore the feature evolution through the lens of the "Neural Collapse" (NC) phenomenon. When training instance-wise deep classifiers (e.g. for image classification) beyond the zero training error point, NC demonstrates a reduction in the deepest features' within-class variability and an increased alignment of their class means to certain symmetric structures. We start with an empirical study that shows that a decrease in within-class variability is also prevalent in the node-wise classification setting, however, not to the extent observed in the instance-wise case. Then, we theoretically study this distinction. Specifically, we show that even an "optimistic" mathematical model requires that the graphs obey a strict structural condition in order to possess a minimizer with exact collapse. Furthermore, by studying the gradient dynamics of this model, we provide reasoning for the partial collapse observed empirically. Finally, we present a study on the evolution of within- and between-class feature variability across layers of a well-trained GNN and contrast the behavior with spectral methods. | A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks | [
"Vignesh Kothapalli",
"Tom Tirer",
"Joan Bruna"
] | Conference | poster | 2307.01951 | [
"https://github.com/kvignesh1420/gnn_collapse"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sxZLrBqg50 | @inproceedings{
wang2023is,
title={Is {RLHF} More Difficult than Standard {RL}? A Theoretical Perspective},
author={Yuanhao Wang and Qinghua Liu and Chi Jin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sxZLrBqg50}
} | Reinforcement learning from Human Feedback (RLHF) learns from preference signals, while standard Reinforcement Learning (RL) directly learns from reward signals. Preferences arguably contain less information than rewards, which makes preference-based RL seemingly more difficult. This paper theoretically proves that, for a wide range of preference models, we can solve preference-based RL directly using existing algorithms and techniques for reward-based RL, with small or no extra costs. Specifically, (1) for preferences that are drawn from reward-based probabilistic models, we reduce the problem to robust reward-based RL that can tolerate small errors in rewards; (2) for general arbitrary preferences where the objective is to find the von Neumann winner, we reduce the problem to multiagent reward-based RL which finds Nash equilibria for factored Markov games under a restricted set of policies. The latter case can be further reduced to an adversarial MDP when preferences only depend on the final state. We instantiate all reward-based RL subroutines by concrete provable algorithms, and apply our theory to a large class of models including tabular MDPs and MDPs with generic function approximation. We further provide guarantees when K-wise comparisons are available. | Is RLHF More Difficult than Standard RL? A Theoretical Perspective | [
"Yuanhao Wang",
"Qinghua Liu",
"Chi Jin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sx0xpaO0za | @inproceedings{
coda-forno2023metaincontext,
title={Meta-in-context learning in large language models},
author={Julian Coda-Forno and Marcel Binz and Zeynep Akata and Matthew Botvinick and Jane X Wang and Eric Schulz},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sx0xpaO0za}
} | Large language models have shown tremendous performance in a variety of tasks. In-context learning -- the ability to improve at a task after being provided with a number of demonstrations -- is seen as one of the main contributors to their success. In the present paper, we demonstrate that the in-context learning abilities of large language models can be recursively improved via in-context learning itself. We coin this phenomenon meta-in-context learning. Looking at two idealized domains, a one-dimensional regression task and a two-armed bandit task, we show that meta-in-context learning adaptively reshapes a large language model's priors over expected tasks. Furthermore, we find that meta-in-context learning modifies the in-context learning strategies of such models. Finally, we broaden the scope of our investigation to encompass two diverse benchmarks: one focusing on real-world regression problems and the other encompassing multiple NLP tasks. In both cases, we observe competitive performance comparable to that of traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting large language models to the environment they are applied in purely through meta-in-context learning rather than traditional finetuning. | Meta-in-context learning in large language models | [
"Julian Coda-Forno",
"Marcel Binz",
"Zeynep Akata",
"Matthew Botvinick",
"Jane X Wang",
"Eric Schulz"
] | Conference | poster | 2305.12907 | [
"https://github.com/juliancodaforno/meta-in-context-learning"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=swNtr6vGqg | @inproceedings{
ziemann2023the,
title={The noise level in linear regression with dependent data},
author={Ingvar Ziemann and Stephen Tu and George J. Pappas and Nikolai Matni},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=swNtr6vGqg}
} | We derive upper bounds for random design linear regression with dependent ($\beta$-mixing) data absent any realizability assumptions. In contrast to the strictly realizable martingale noise regime, no sharp \emph{instance-optimal} non-asymptotics are available in the literature. Up to constant factors, our analysis correctly recovers the variance term predicted by the Central Limit Theorem---the noise level of the problem---and thus exhibits graceful degradation as we introduce misspecification. Past a burn-in, our result is sharp in the moderate deviations regime, and in particular does not inflate the leading order term by mixing time factors. | The noise level in linear regression with dependent data | [
"Ingvar Ziemann",
"Stephen Tu",
"George J. Pappas",
"Nikolai Matni"
] | Conference | poster | 2305.11165 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sw2Y0sirtM | @inproceedings{
azabou2023a,
title={A Unified, Scalable Framework for Neural Population Decoding},
author={Mehdi Azabou and Vinam Arora and Venkataramana Ganesh and Ximeng Mao and Santosh B Nachimuthu and Michael Jacob Mendelson and Blake Aaron Richards and Matthew G Perich and Guillaume Lajoie and Eva L Dyer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sw2Y0sirtM}
} | Our ability to use deep learning approaches to decipher neural activity would likely benefit from greater scale, in terms of both the model size and the datasets. However, the integration of many neural recordings into one unified model is challenging, as each recording contains the activity of different neurons from different individual animals. In this paper, we introduce a training framework and architecture designed to model the population dynamics of neural activity across diverse, large-scale neural recordings. Our method first tokenizes individual spikes within the dataset to build an efficient representation of neural events that captures the fine temporal structure of neural activity. We then employ cross-attention and a PerceiverIO backbone to further construct a latent tokenization of neural population activities. Utilizing this architecture and training framework, we construct a large-scale multi-session model trained on large datasets from seven nonhuman primates, spanning over 158 different sessions of recording from over 27,373 neural units and over 100 hours of recordings. In a number of different tasks, we demonstrate that our pretrained model can be rapidly adapted to new, unseen sessions with unspecified neuron correspondence, enabling few-shot performance with minimal labels. This work presents a powerful new approach for building deep learning tools to analyze neural data and stakes out a clear path to training at scale for neural decoding models. | A Unified, Scalable Framework for Neural Population Decoding | [
"Mehdi Azabou",
"Vinam Arora",
"Venkataramana Ganesh",
"Ximeng Mao",
"Santosh B Nachimuthu",
"Michael Jacob Mendelson",
"Blake Aaron Richards",
"Matthew G Perich",
"Guillaume Lajoie",
"Eva L Dyer"
] | Conference | poster | 2310.16046 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=suzMI2P1rT | @inproceedings{
liu2023ceil,
title={{CEIL}: Generalized Contextual Imitation Learning},
author={Jinxin Liu and Li He and Yachen Kang and Zifeng Zhuang and Donglin Wang and Huazhe Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=suzMI2P1rT}
} | In this paper, we present ContExtual Imitation Learning (CEIL), a general and broadly applicable algorithm for imitation learning (IL). Inspired by the formulation of hindsight information matching, we derive CEIL by explicitly learning a hindsight embedding function together with a contextual policy using the hindsight embeddings. To achieve the expert matching objective for IL, we advocate for optimizing a contextual variable such that it biases the contextual policy towards mimicking expert behaviors. Beyond the typical learning from demonstrations (LfD) setting, CEIL is a generalist that can be effectively applied to multiple settings including: 1) learning from observations (LfO), 2) offline IL, 3) cross-domain IL (mismatched experts), and 4) one-shot IL settings. Empirically, we evaluate CEIL on the popular MuJoCo tasks (online) and the D4RL dataset (offline). Compared to prior state-of-the-art baselines, we show that CEIL is more sample-efficient in most online IL tasks and achieves better or competitive performances in offline tasks. | CEIL: Generalized Contextual Imitation Learning | [
"Jinxin Liu",
"Li He",
"Yachen Kang",
"Zifeng Zhuang",
"Donglin Wang",
"Huazhe Xu"
] | Conference | poster | 2306.14534 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=strvrjSi3C | @inproceedings{
yun2023riemannian,
title={Riemannian {SAM}: Sharpness-Aware Minimization on Riemannian Manifolds},
author={Jihun Yun and Eunho Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=strvrjSi3C}
} | Contemporary advances in the field of deep learning have embarked upon an exploration of the underlying geometric properties of data, thus encouraging the investigation of techniques that consider general manifolds, for example, hyperbolic or orthogonal neural networks. However, the optimization algorithms for training such geometric deep learning models still remain highly under-explored. In this paper, we introduce Riemannian SAM by generalizing conventional Euclidean SAM to Riemannian manifolds. We successfully formulate the sharpness-aware minimization on Riemannian manifolds, leading to a novel instantiation, Lorentz SAM. In addition, SAM variants proposed in previous studies such as Fisher SAM can be derived as special examples under our Riemannian SAM framework. We provide the convergence analysis of Riemannian SAM under a less aggressively decaying ascent learning rate than Euclidean SAM. Our analysis serves as a theoretically sound contribution encompassing a diverse range of manifolds, also providing the guarantees for SAM variants such as Fisher SAM, whose convergence analyses are absent. Lastly, we illustrate the superiority of Riemannian SAM in terms of generalization over previous Riemannian optimization algorithms through experiments on knowledge graph completion and machine translation tasks. | Riemannian SAM: Sharpness-Aware Minimization on Riemannian Manifolds | [
"Jihun Yun",
"Eunho Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=stDm3S0CV7 | @inproceedings{
turner2023the,
title={The Simplicity Bias in Multi-Task {RNN}s: Shared Attractors, Reuse of Dynamics, and Geometric Representation},
author={Elia Turner and Omri Barak},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=stDm3S0CV7}
} | How does a single interconnected neural population perform multiple tasks, each with its own dynamical requirements? The relation between task requirements and neural dynamics in Recurrent Neural Networks (RNNs) has been investigated for single tasks. The forces shaping joint dynamics of multiple tasks, however, are largely unexplored. In this work, we first construct a systematic framework to study multiple tasks in RNNs, minimizing interference from input and output correlations with the hidden representation. This allows us to reveal how RNNs tend to share attractors and reuse dynamics, a tendency we define as the "simplicity bias".
We find that RNNs develop attractors sequentially during training, preferentially reusing existing dynamics and opting for simple solutions when possible. This sequenced emergence and preferential reuse encapsulate the simplicity bias. Through concrete examples, we demonstrate that new attractors primarily emerge due to task demands or architectural constraints, illustrating a balance between simplicity bias and external factors.
We examine the geometry of joint representations within a single attractor, by constructing a family of tasks from a set of functions. We show that the steepness of the associated functions controls their alignment within the attractor. This arrangement again highlights the simplicity bias, as points with similar input spacings undergo comparable transformations to reach the shared attractor.
Our findings propose compelling applications. The geometry of shared attractors might allow us to infer the nature of unknown tasks. Furthermore, the simplicity bias implies that without specific incentives, modularity in RNNs may not spontaneously emerge, providing insights into the conditions required for network specialization. | The Simplicity Bias in Multi-Task RNNs: Shared Attractors, Reuse of Dynamics, and Geometric Representation | [
"Elia Turner",
"Omri Barak"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sqqASmpA2R | @inproceedings{
wortsman2023stable,
title={Stable and low-precision training for large-scale vision-language models},
author={Mitchell Wortsman and Tim Dettmers and Luke Zettlemoyer and Ari S. Morcos and Ali Farhadi and Ludwig Schmidt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sqqASmpA2R}
} | We introduce new methods for 1) accelerating and 2) stabilizing training for large language-vision models. 1) For acceleration, we introduce SwitchBack, a linear layer for int8 quantized training which provides a speed-up of 13-25% while matching the performance of bfloat16 training within 0.1 percentage points for the 1B parameter CLIP ViT-Huge---the largest int8 training to date. Our main focus is int8 as GPU support for float8 is rare, though we also analyze float8 training through simulation. While SwitchBack proves effective for float8, we show that standard techniques are also successful if the network is trained and initialized so that large feature magnitudes are discouraged, which we accomplish via layer-scale initialized with zeros. 2) For stability, we analyze loss spikes and find they consistently occur 1-8 iterations after the squared gradients become under-estimated by their AdamW second moment estimator. As a result, we recommend an AdamW-Adafactor hybrid which avoids loss spikes when training a CLIP ViT-Huge model and outperforms gradient clipping at the scales we test. | Stable and low-precision training for large-scale vision-language models | [
"Mitchell Wortsman",
"Tim Dettmers",
"Luke Zettlemoyer",
"Ari S. Morcos",
"Ali Farhadi",
"Ludwig Schmidt"
] | Conference | poster | 2304.13013 | [
"https://github.com/mlfoundations/open_clip"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sqkGJjIRfG | @inproceedings{
cao2023hassod,
title={{HASSOD}: Hierarchical Adaptive Self-Supervised Object Detection},
author={Shengcao Cao and Dhiraj Joshi and Liangyan Gui and Yu-Xiong Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sqkGJjIRfG}
} | The human visual perception system demonstrates exceptional capabilities in learning without explicit supervision and understanding the part-to-whole composition of objects. Drawing inspiration from these two abilities, we propose Hierarchical Adaptive Self-Supervised Object Detection (HASSOD), a novel approach that learns to detect objects and understand their compositions without human supervision. HASSOD employs a hierarchical adaptive clustering strategy to group regions into object masks based on self-supervised visual representations, adaptively determining the number of objects per image. Furthermore, HASSOD identifies the hierarchical levels of objects in terms of composition, by analyzing coverage relations between masks and constructing tree structures. This additional self-supervised learning task leads to improved detection performance and enhanced interpretability. Lastly, we abandon the inefficient multi-round self-training process utilized in prior methods and instead adapt the Mean Teacher framework from semi-supervised learning, which leads to a smoother and more efficient training process. Through extensive experiments on prevalent image datasets, we demonstrate the superiority of HASSOD over existing methods, thereby advancing the state of the art in self-supervised object detection. Notably, we improve Mask AR from 20.2 to 22.5 on LVIS, and from 17.0 to 26.0 on SA-1B. Project page: https://HASSOD-NeurIPS23.github.io. | HASSOD: Hierarchical Adaptive Self-Supervised Object Detection | [
"Shengcao Cao",
"Dhiraj Joshi",
"Liangyan Gui",
"Yu-Xiong Wang"
] | Conference | poster | 2402.03311 | [
"https://github.com/shengcao-cao/hassod"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sqTcCXkG4P | @inproceedings{
ghazi2023sparsitypreserving,
title={Sparsity-Preserving Differentially Private Training of Large Embedding Models},
author={Badih Ghazi and Yangsibo Huang and Pritish Kamath and Ravi Kumar and Pasin Manurangsi and Amer Sinha and Chiyuan Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sqTcCXkG4P}
} | As the use of large embedding models in recommendation systems and language applications increases, concerns over user data privacy have also risen. DP-SGD, a training algorithm that combines differential privacy with stochastic gradient descent, has been the workhorse in protecting user privacy without compromising model accuracy by much. However, applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency. To address this issue, we present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during the private training of large embedding models. Our algorithms achieve substantial reductions ($10^6 \times$) in gradient size, while maintaining comparable levels of accuracy, on benchmark real-world datasets. | Sparsity-Preserving Differentially Private Training of Large Embedding Models | [
"Badih Ghazi",
"Yangsibo Huang",
"Pritish Kamath",
"Ravi Kumar",
"Pasin Manurangsi",
"Amer Sinha",
"Chiyuan Zhang"
] | Conference | poster | 2311.08357 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sq4o3tjWaj | @inproceedings{
hsu2023whats,
title={What{\textquoteright}s Left? Concept Grounding with Logic-Enhanced Foundation Models},
author={Joy Hsu and Jiayuan Mao and Joshua B. Tenenbaum and Jiajun Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sq4o3tjWaj}
} | Recent works such as VisProg and ViperGPT have smartly composed foundation models for visual reasoning—using large language models (LLMs) to produce programs that can be executed by pre-trained vision-language models. However, they operate in limited domains, such as 2D images, not fully exploiting the generalization of language: abstract concepts like “*left*” can also be grounded in 3D, temporal, and action data, as in moving to your *left*. This limited generalization stems from these inference-only methods’ inability to learn or adapt pre-trained models to a new domain. We propose the **L**ogic-**E**nhanced **F**ounda**T**ion Model (**LEFT**), a unified framework that *learns* to ground and reason with concepts across domains with a differentiable, domain-independent, first-order logic-based program executor. LEFT has an LLM interpreter that outputs a program represented in a general, logic-based reasoning language, which is shared across all domains and tasks. LEFT’s executor then executes the program with trainable domain-specific grounding modules. We show that LEFT flexibly learns concepts in four domains: 2D images, 3D scenes, human motions, and robotic manipulation. It exhibits strong reasoning ability in a wide variety of tasks, including those that are complex and not seen during training, and can be easily applied to new domains. | What’s Left? Concept Grounding with Logic-Enhanced Foundation Models | [
"Joy Hsu",
"Jiayuan Mao",
"Joshua B. Tenenbaum",
"Jiajun Wu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sq0m11cUMV | @inproceedings{
kim2023belief,
title={Belief Projection-Based Reinforcement Learning for Environments with Delayed Feedback},
author={Jangwon Kim and Hangyeol Kim and Jiwook Kang and Jongchan Baek and Soohee Han},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sq0m11cUMV}
} | We present a novel actor-critic algorithm for an environment with delayed feedback, which addresses the state-space explosion problem of conventional approaches. Conventional approaches use an augmented state constructed from the last observed state and actions executed since visiting the last observed state. Using the augmented state space, the correct Markov decision process for delayed environments can be constructed; however, this causes the state space to explode as the number of delayed timesteps increases, leading to slow convergence. Our proposed algorithm, called Belief-Projection-Based Q-learning (BPQL), addresses the state-space explosion problem by evaluating the values of the critic for which the input state size is equal to the original state-space size rather than that of the augmented one. We compare BPQL to traditional approaches in continuous control tasks and demonstrate that it significantly outperforms other algorithms in terms of asymptotic performance and sample efficiency. We also show that BPQL solves long-delayed environments, which conventional approaches are unable to do. | Belief Projection-Based Reinforcement Learning for Environments with Delayed Feedback | [
"Jangwon Kim",
"Hangyeol Kim",
"Jiwook Kang",
"Jongchan Baek",
"Soohee Han"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sovxUzPzLN | @inproceedings{
hedlin2023unsupervised,
title={Unsupervised Semantic Correspondence Using Stable Diffusion},
author={Eric Hedlin and Gopal Sharma and Shweta Mahajan and Hossam Isack and Abhishek Kar and Andrea Tagliasacchi and Kwang Moo Yi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sovxUzPzLN}
} | Text-to-image diffusion models are now capable of generating images that are often indistinguishable from real images. To generate such images, these models must understand the semantics of the objects they are asked to generate. In this work we show that, without any training, one can leverage this semantic knowledge within diffusion models to find semantic correspondences – locations in multiple images that have the same semantic meaning. Specifically, given an image, we optimize the prompt embeddings of these models for maximum attention on the regions of interest. These optimized embeddings capture semantic information about the location, which can then be transferred to another image. By doing so we obtain results on par with the strongly supervised state of the art on the PF-Willow dataset and significantly outperform (20.9% relative for the SPair-71k dataset) any existing weakly- or unsupervised method on PF-Willow, CUB-200 and SPair-71k datasets. | Unsupervised Semantic Correspondence Using Stable Diffusion | [
"Eric Hedlin",
"Gopal Sharma",
"Shweta Mahajan",
"Hossam Isack",
"Abhishek Kar",
"Andrea Tagliasacchi",
"Kwang Moo Yi"
] | Conference | poster | 2305.15581 | [
"https://github.com/ubc-vision/LDM_correspondences"
] | https://huggingface.co/papers/2305.15581 | 4 | 2 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=sodl2c3aTM | @inproceedings{
zhang2023dynamically,
title={Dynamically Masked Discriminator for {GAN}s},
author={Wentian Zhang and Haozhe Liu and Bing Li and Jinheng Xie and Yawen Huang and Yuexiang Li and Yefeng Zheng and Bernard Ghanem},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sodl2c3aTM}
} | Training Generative Adversarial Networks (GANs) remains a challenging problem. The discriminator trains the generator by learning the distribution of real/generated data. However, the distribution of generated data changes throughout the training process, which is difficult for the discriminator to learn. In this paper, we propose a novel method for GANs from the viewpoint of online continual learning. We observe that the discriminator model, trained on historically generated data, often slows down its adaptation to the changes in the newly arriving generated data, which accordingly decreases the quality of generated results. By treating the generated data in training as a stream, we propose to detect whether the discriminator slows down the learning of new knowledge in generated data. Therefore, we can explicitly enforce the discriminator to learn new knowledge fast. Particularly, we propose a new discriminator, which automatically detects its retardation and then dynamically masks its features, such that the discriminator can adaptively learn the temporally varying distribution of generated data. Experimental results show our method outperforms the state-of-the-art approaches. | Dynamically Masked Discriminator for GANs | [
"Wentian Zhang",
"Haozhe Liu",
"Bing Li",
"Jinheng Xie",
"Yawen Huang",
"Yuexiang Li",
"Yefeng Zheng",
"Bernard Ghanem"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=snY3FOnlQi | @inproceedings{
liang2023avnerf,
title={{AV}-Ne{RF}: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis},
author={Susan Liang and Chao Huang and Yapeng Tian and Anurag Kumar and Chenliang Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=snY3FOnlQi}
} | Can machines recording an audio-visual scene produce realistic, matching audio-visual experiences at novel positions and novel view directions? We answer it by studying a new task---real-world audio-visual scene synthesis---and a first-of-its-kind NeRF-based approach for multimodal learning. Concretely, given a video recording of an audio-visual scene, the task is to synthesize new videos with spatial audios along arbitrary novel camera trajectories in that scene. We propose an acoustic-aware audio generation module that integrates prior knowledge of audio propagation into NeRF, in which we implicitly associate audio generation with the 3D geometry and material properties of a visual environment. Furthermore, we present a coordinate transformation module that expresses a view direction relative to the sound source, enabling the model to learn sound source-centric acoustic fields. To facilitate the study of this new task, we collect a high-quality Real-World Audio-Visual Scene (RWAVS) dataset. We demonstrate the advantages of our method on this real-world dataset and the simulation-based SoundSpaces dataset. Notably, we refer readers to view our demo videos for convincing comparisons. | AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis | [
"Susan Liang",
"Chao Huang",
"Yapeng Tian",
"Anurag Kumar",
"Chenliang Xu"
] | Conference | poster | 2302.02088 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sla7V80uWA | @inproceedings{
shao2023beyond,
title={Beyond {MLE}: Convex Learning for Text Generation},
author={Chenze Shao and Zhengrui Ma and Min Zhang and Yang Feng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sla7V80uWA}
} | Maximum likelihood estimation (MLE) is a statistical method used to estimate the parameters of a probability distribution that best explain the observed data. In the context of text generation, MLE is often used to train generative language models, which can then be used to generate new text. However, we argue that MLE is not always necessary and optimal, especially for closed-ended text generation tasks like machine translation. In these tasks, the goal of model is to generate the most appropriate response, which does not necessarily require it to estimate the entire data distribution with MLE. To this end, we propose a novel class of training objectives based on convex functions, which enables text generation models to focus on highly probable outputs without having to estimate the entire data distribution. We investigate the theoretical properties of the optimal predicted distribution when applying convex functions to the loss, demonstrating that convex functions can sharpen the optimal distribution, thereby enabling the model to better capture outputs with high probabilities. Experiments on various text generation tasks and models show the effectiveness of our approach. It enables autoregressive models to bridge the gap between greedy and beam search, and facilitates the learning of non-autoregressive models with a maximum improvement of 9+ BLEU points. Moreover, our approach also exhibits significant impact on large language models (LLMs), substantially enhancing their generative capability on various tasks. Source code is available at \url{https://github.com/ictnlp/Convex-Learning}. | Beyond MLE: Convex Learning for Text Generation | [
"Chenze Shao",
"Zhengrui Ma",
"Min Zhang",
"Yang Feng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=shePL2nbwl | @inproceedings{
li2023survival,
title={Survival Instinct in Offline Reinforcement Learning},
author={Anqi Li and Dipendra Misra and Andrey Kolobov and Ching-An Cheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=shePL2nbwl}
} | We present a novel observation about the behavior of offline reinforcement learning (RL) algorithms: on many benchmark datasets, offline RL can produce well-performing and safe policies even when trained with "wrong" reward labels, such as those that are zero everywhere or are negatives of the true rewards. This phenomenon cannot be easily explained by offline RL's return maximization objective. Moreover, it gives offline RL a degree of robustness that is uncharacteristic of its online RL counterparts, which are known to be sensitive to reward design. We demonstrate that this surprising robustness property is attributable to an interplay between the notion of *pessimism* in offline RL algorithms and certain implicit biases in common data collection practices. As we prove in this work, pessimism endows the agent with a *survival instinct*, i.e., an incentive to stay within the data support in the long term, while the limited and biased data coverage further constrains the set of survival policies. Formally, given a reward class -- which may not even contain the true reward -- we identify conditions on the training data distribution that enable offline RL to learn a near-optimal and safe policy from any reward within the class. We argue that the survival instinct should be taken into account when interpreting results from existing offline RL benchmarks and when creating future ones. Our empirical and theoretical results suggest a new paradigm for offline RL, whereby an agent is "nudged" to learn a desirable behavior with imperfect reward but purposely biased data coverage. Please visit our website [https://survival-instinct.github.io](https://survival-instinct.github.io) for accompanied code and videos. | Survival Instinct in Offline Reinforcement Learning | [
"Anqi Li",
"Dipendra Misra",
"Andrey Kolobov",
"Ching-An Cheng"
] | Conference | spotlight | 2306.03286 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=shXnfALjuH | @inproceedings{
song2023fdalign,
title={{FD}-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained Models in Few-Shot Learning},
author={Kun Song and Huimin Ma and Bochao Zou and Huishuai Zhang and Weiran Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=shXnfALjuH}
} | Due to the limited availability of data, existing few-shot learning methods trained from scratch fail to achieve satisfactory performance. In contrast, large-scale pre-trained models such as CLIP demonstrate remarkable few-shot and zero-shot capabilities. To enhance the performance of pre-trained models for downstream tasks, fine-tuning the model on downstream data is frequently necessary. However, fine-tuning the pre-trained model leads to a decrease in its generalizability in the presence of distribution shift, while the limited number of samples in few-shot learning makes the model highly susceptible to overfitting. Consequently, existing methods for fine-tuning few-shot learning primarily focus on fine-tuning the model's classification head or introducing additional structure. In this paper, we introduce a fine-tuning approach termed Feature Discrimination Alignment (FD-Align). Our method aims to bolster the model's generalizability by preserving the consistency of spurious features across the fine-tuning process. Extensive experimental results validate the efficacy of our approach for both ID and OOD tasks. Once fine-tuned, the model can seamlessly integrate with existing methods, leading to performance improvements. Our code can be found in https://github.com/skingorz/FD-Align. | FD-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained Models in Few-Shot Learning | [
"Kun Song",
"Huimin Ma",
"Bochao Zou",
"Huishuai Zhang",
"Weiran Huang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sgCrNMOuXp | @inproceedings{
ravindranath2023data,
title={Data Market Design through Deep Learning},
author={Sai Srivatsa Ravindranath and Yanchen Jiang and David C. Parkes},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sgCrNMOuXp}
} | The _data market design_ problem is a problem in economic theory to find a set of signaling schemes (statistical experiments) to maximize expected revenue to the information seller, where each experiment reveals some of the information known to a seller and has a corresponding price. Each buyer has their own decision to make in a world environment, and their subjective expected value for the information associated with a particular experiment comes from the improvement in this decision and depends on their prior and value for different outcomes. In a setting with multiple buyers, a buyer's expected value for an experiment may also depend on the information sold to others. We introduce the application of deep learning for the design of revenue-optimal data markets, looking to expand the frontiers of what can be understood and achieved. Relative to earlier work on deep learning for auction design, we must learn signaling schemes rather than allocation rules and handle _obedience constraints_ — these arising from modeling the downstream actions of buyers — in addition to incentive constraints on bids. Our experiments demonstrate that this new deep learning framework can almost precisely replicate all known solutions from theory, expand to more complex settings, and be used to establish the optimality of new designs for data markets and make conjectures in regard to the structure of optimal designs. | Data Market Design through Deep Learning | [
"Sai Srivatsa Ravindranath",
"Yanchen Jiang",
"David C. Parkes"
] | Conference | poster | 2310.20096 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sdlh4gVOj8 | @inproceedings{
nguyen-tang2023on,
title={On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling and Beyond},
author={Thanh Nguyen-Tang and Raman Arora},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sdlh4gVOj8}
} | We seek to understand what facilitates sample-efficient learning from historical datasets for sequential decision-making, a problem that is popularly known as offline reinforcement learning (RL). Further, we are interested in algorithms that enjoy sample efficiency while leveraging (value) function approximation. In this paper, we address these fundamental questions by (i) proposing a notion of data diversity that subsumes the previous notions of coverage measures in offline RL and (ii) using this notion to \emph{unify} three distinct classes of offline RL algorithms based on version spaces (VS), regularized optimization (RO), and posterior sampling (PS). We establish that VS-based, RO-based, and PS-based algorithms, under standard assumptions, achieve \emph{comparable} sample efficiency, which recovers the state-of-the-art sub-optimality bounds for finite and linear model classes with the standard assumptions. This result is surprising, given that the prior work suggested an unfavorable sample complexity of the RO-based algorithm compared to the VS-based algorithm, whereas posterior sampling is rarely considered in offline RL due to its explorative nature. Notably, our proposed model-free PS-based algorithm for offline RL is \emph{novel}, with sub-optimality bounds that are \emph{frequentist} (i.e., worst-case) in nature. | On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling and Beyond | [
"Thanh Nguyen-Tang",
"Raman Arora"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=scaKiAtbI3 | @inproceedings{
cui2023retrievalaugmented,
title={Retrieval-Augmented Multiple Instance Learning},
author={Yufei CUI and Ziquan Liu and Yixin CHEN and Yuchen Lu and Xinyue Yu and Xue Liu and Tei-Wei Kuo and Miguel R. D. Rodrigues and Chun Jason Xue and Antoni B. Chan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=scaKiAtbI3}
} | Multiple Instance Learning (MIL) is a crucial weakly supervised learning method applied across various domains, e.g., medical diagnosis based on whole slide images (WSIs). Recent advancements in MIL algorithms have yielded exceptional performance when the training and test data originate from the same domain, such as WSIs obtained from the same hospital. However, this paper reveals a performance deterioration of MIL models when tested on an out-of-domain test set, exemplified by WSIs sourced from a novel hospital. To address this challenge, this paper introduces the Retrieval-AugMented MIL (RAM-MIL) framework, which integrates Optimal Transport (OT) as the distance metric for nearest neighbor retrieval. The development of RAM-MIL is driven by two key insights. First, a theoretical discovery indicates that reducing the input's intrinsic dimension can minimize the approximation error in attention-based MIL. Second, previous studies highlight a link between input intrinsic dimension and the feature merging process with the retrieved data. Empirical evaluations conducted on WSI classification demonstrate that the proposed RAM-MIL framework achieves state-of-the-art performance in both in-domain scenarios, where the training and retrieval data are in the same domain, and more crucially, in out-of-domain scenarios, where the (unlabeled) retrieval data originates from a different domain. Furthermore, the use of the transportation matrix derived from OT renders the retrieval results interpretable at the instance level, in contrast to the vanilla $l_2$ distance, and allows for visualization for human experts. *Code can be found at \url{https://github.com/ralphc1212/ram-mil}*. | Retrieval-Augmented Multiple Instance Learning | [
"Yufei CUI",
"Ziquan Liu",
"Yixin CHEN",
"Yuchen Lu",
"Xinyue Yu",
"Xue Liu",
"Tei-Wei Kuo",
"Miguel R. D. Rodrigues",
"Chun Jason Xue",
"Antoni B. Chan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=scYa9DYUAy | @inproceedings{
chen2023vast,
title={{VAST}: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset},
author={Sihan Chen and Handong Li and Qunbo Wang and Zijia Zhao and Mingzhen Sun and Xinxin Zhu and Jing Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=scYa9DYUAy}
} | Vision and text have been fully explored in contemporary video-text foundational models, while other modalities such as audio and subtitles in videos have not received sufficient attention. In this paper, we resort to establishing connections between multi-modality video tracks, including Vision, Audio, and Subtitle, and Text by exploring an automatically generated large-scale omni-modality video caption dataset called VAST-27M. Specifically, we first collect 27 million open-domain video clips and separately train a vision and an audio captioner to generate vision and audio captions. Then, we employ an off-the-shelf Large Language Model (LLM) to integrate the generated captions, together with subtitles and instructional prompts, into omni-modality captions. Based on the proposed VAST-27M dataset, we train an omni-modality video-text foundational model named VAST, which can perceive and process vision, audio, and subtitle modalities from video, and better support various tasks including vision-text, audio-text, and multi-modal video-text tasks (retrieval, captioning and QA). Extensive experiments have been conducted to demonstrate the effectiveness of our proposed VAST-27M corpus and VAST foundation model. VAST achieves 22 new state-of-the-art results on various cross-modality benchmarks. | VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset | [
"Sihan Chen",
"Handong Li",
"Qunbo Wang",
"Zijia Zhao",
"Mingzhen Sun",
"Xinxin Zhu",
"Jing Liu"
] | Conference | poster | 2305.18500 | [
"https://github.com/txh-mercury/vast"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=scG0cwftEe | @inproceedings{
liang2023unleashing,
title={Unleashing the Full Potential of Product Quantization for Large-Scale Image Retrieval},
author={Yu Liang and Shiliang Zhang and Kenli Li and Xiaoyu Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=scG0cwftEe}
} | Due to its promising performance, deep hashing has become a prevalent method for approximate nearest neighbors search (ANNs). However, most current deep hashing methods are validated on relatively small-scale datasets, leaving potential threats when applied to large-scale real-world scenarios. Specifically, they can be constrained either by the computational cost due to the large number of training categories and samples, or by unsatisfactory accuracy. To tackle those issues, we propose a novel deep hashing framework based on product quantization (PQ). It uses a softmax-based differentiable PQ branch to learn a set of predefined PQ codes of the classes. Our method is easy to implement, does not involve large-scale matrix operations, and learns highly discriminative compact codes. We validate our method on multiple large-scale datasets, including ImageNet100, ImageNet1K, and Glint360K, where the category size scales from 100 to 360K and sample number scales from 10K to 17 million, respectively. Extensive experiments demonstrate the superiority of our method. Code is available at https://github.com/yuleung/FPPQ. | Unleashing the Full Potential of Product Quantization for Large-Scale Image Retrieval | [
"Yu Liang",
"Shiliang Zhang",
"Kenli Li",
"Xiaoyu Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sbusw6LD41 | @inproceedings{
bondarenko2023quantizable,
title={Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing},
author={Yelysei Bondarenko and Markus Nagel and Tijmen Blankevoort},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sbusw6LD41}
} | Transformer models have been widely adopted in various domains over the last years and especially large language models have advanced the field of AI significantly. Due to their size, the capability of these networks has increased tremendously, but this has come at the cost of a significant increase in necessary compute. Quantization is one of the most effective ways for reducing the computational time and memory consumption of neural networks. Many studies have shown, however, that modern transformer models tend to learn strong outliers in their activations, making them difficult to quantize. To retain acceptable performance, the existence of these outliers requires activations to be in higher-bitwidth or the use of different numeric formats, extra fine-tuning, or other workarounds. We show that strong outliers are related to very specific behavior of attention heads that try to learn a "no-op", or just a partial update of the residual. To achieve the exact zeros needed in the attention matrix for a no-update, the input to the softmax is pushed to be larger and larger during training, causing outliers in other parts of the network. Based on these observations, we propose two simple (independent) modifications to the attention mechanism - _clipped softmax_ and _gated attention_. We empirically show that models pre-trained using our methods learn significantly smaller outliers while maintaining and sometimes even improving the floating-point task performance. This enables us to quantize transformers to full INT8 quantization of the activations without any additional effort. We demonstrate the effectiveness of our methods on both language models (BERT, OPT) and vision transformers. | Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing | [
"Yelysei Bondarenko",
"Markus Nagel",
"Tijmen Blankevoort"
] | Conference | poster | 2306.12929 | [
""
] | https://huggingface.co/papers/2306.12929 | 2 | 12 | 0 | 3 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=sZNBYvunEr | @inproceedings{
yoo2023dreamsparse,
title={DreamSparse: Escaping from Plato{\textquoteright}s Cave with 2D Diffusion Model Given Sparse Views},
author={Paul Yoo and Jiaxian Guo and Yutaka Matsuo and Shixiang Shane Gu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sZNBYvunEr}
} | Synthesizing novel view images from a few views is a challenging but practical problem. Existing methods often struggle with producing high-quality results or necessitate per-object optimization in such few-view settings due to the insufficient information provided. In this work, we explore leveraging the strong 2D priors in pre-trained diffusion models for synthesizing novel view images. 2D diffusion models, nevertheless, lack 3D awareness, leading to distorted image synthesis and compromising the identity. To address these problems, we propose $\textit{DreamSparse}$, a framework that enables the frozen pre-trained diffusion model to generate geometry and identity-consistent novel view images. Specifically, DreamSparse incorporates a geometry module designed to capture features about spatial information from sparse views as a 3D prior. Subsequently, a spatial guidance model is introduced to convert rendered feature maps as spatial information for the generative process. This information is then used to guide the pre-trained diffusion model to
encourage the synthesis of geometrically consistent images without further tuning. Leveraging the strong image priors in the pre-trained diffusion models, DreamSparse is capable of synthesizing high-quality novel views for both object and object-centric scene-level images and generalising to open-set images.
Experimental results demonstrate that our framework can effectively synthesize novel view images from sparse views and outperforms baselines in both trained and open-set category images. More results can be found on our project page: https://sites.google.com/view/dreamsparse-webpage. | DreamSparse: Escaping from Plato’s Cave with 2D Diffusion Model Given Sparse Views | [
"Paul Yoo",
"Jiaxian Guo",
"Yutaka Matsuo",
"Shixiang Shane Gu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sXMQPKbLXf | @inproceedings{
zhang2023diffpack,
title={DiffPack: A Torsional Diffusion Model for Autoregressive Protein Side-Chain Packing},
author={Yangtian Zhang and Zuobai Zhang and Bozitao Zhong and Sanchit Misra and Jian Tang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sXMQPKbLXf}
} | Proteins play a critical role in carrying out biological functions, and their 3D structures are essential in determining their functions.
Accurately predicting the conformation of protein side-chains given their backbones is important for applications in protein structure prediction, design and protein-protein interactions. Traditional methods are computationally intensive and have limited accuracy, while existing machine learning methods treat the problem as a regression task and overlook the restrictions imposed by the constant covalent bond lengths and angles. In this work, we present DiffPack, a torsional diffusion model that learns the joint distribution of side-chain torsional angles, the only degrees of freedom in side-chain packing, by diffusing and denoising on the torsional space. To avoid issues arising from simultaneous perturbation of all four torsional angles, we propose autoregressively generating the four torsional angles from $\chi_1$ to $\chi_4$ and training diffusion models for each torsional angle. We evaluate the method on several benchmarks for protein side-chain packing and show that our method achieves improvements of 11.9% and 13.5% in angle accuracy on CASP13 and CASP14, respectively, with a significantly smaller model size ($60\times$ fewer parameters). Additionally, we show the effectiveness of our method in enhancing side-chain predictions in the AlphaFold2 model. Code is available at https://github.com/DeepGraphLearning/DiffPack. | DiffPack: A Torsional Diffusion Model for Autoregressive Protein Side-Chain Packing | [
"Yangtian Zhang",
"Zuobai Zhang",
"Bozitao Zhong",
"Sanchit Misra",
"Jian Tang"
] | Conference | poster | 2306.01794 | [
"https://github.com/deepgraphlearning/diffpack"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=sWNOvNXGLP | @inproceedings{
panousis2023discover,
title={{DISCOVER}: Making Vision Networks Interpretable via Competition and Dissection},
author={Konstantinos P. Panousis and Sotirios Chatzis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=sWNOvNXGLP}
} | Modern deep networks are highly complex and their inferential outcome very hard to interpret. This is a serious obstacle to their transparent deployment in safety-critical or bias-aware applications. This work contributes to *post-hoc* interpretability, and specifically Network Dissection. Our goal is to present a framework that makes it easier to *discover* the individual functionality of each neuron in a network trained on a vision task; discovery is performed in terms of textual description generation. To achieve this objective, we leverage: (i) recent advances in multimodal vision-text models and (ii) network layers founded upon the novel concept of stochastic local competition between linear units. In this setting, only a *small subset* of layer neurons are activated *for a given input*, leading to extremely high activation sparsity (as low as only $\approx 4\%$). Crucially, our proposed method infers (sparse) neuron activation patterns that enables the neurons to activate/specialize to inputs with specific characteristics, diversifying their individual functionality. This capacity of our method supercharges the potential of dissection processes: human understandable descriptions are generated only for the very few active neurons, thus facilitating the direct investigation of the network's decision process. As we experimentally show, our approach: (i) yields Vision Networks that retain or improve classification performance, and (ii) realizes a principled framework for text-based description and examination of the generated neuronal representations. | DISCOVER: Making Vision Networks Interpretable via Competition and Dissection | [
"Konstantinos P. Panousis",
"Sotirios Chatzis"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |