bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=qJRlz3SucN | @inproceedings{
salazar2023vart,
title={Va{RT}: Variational Regression Trees},
author={Sebastian Salazar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=qJRlz3SucN}
} | Decision trees are a well-established tool in machine learning for classification and regression tasks. In this paper, we introduce a novel non-parametric Bayesian model that uses variational inference to approximate a posterior distribution over the space of stochastic decision trees. We evaluate the model's performance on 18 datasets and demonstrate its competitiveness with other state-of-the-art methods in regression tasks. We also explore its application to causal inference problems. We provide a fully vectorized implementation of our algorithm in PyTorch. | VaRT: Variational Regression Trees | [
"Sebastian Salazar"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=qJJmu4qsLO | @inproceedings{
tan2023is,
title={Is Heterogeneity Notorious? Taming Heterogeneity to Handle Test-Time Shift in Federated Learning},
author={Yue Tan and Chen Chen and Weiming Zhuang and Xin Dong and Lingjuan Lyu and Guodong Long},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=qJJmu4qsLO}
} | Federated learning (FL) is an effective machine learning paradigm where multiple clients can train models based on heterogeneous data in a decentralized manner without accessing their private data. However, existing FL systems undergo performance deterioration due to feature-level test-time shifts, which are well investigated in centralized settings but rarely studied in FL. The common non-IID issue in FL usually refers to inter-client heterogeneity during training phase, while the test-time shift refers to the intra-client heterogeneity during test phase. Although the former is always deemed to be notorious for FL, there is still a wealth of useful information delivered by heterogeneous data sources, which may potentially help alleviate the latter issue. To explore the possibility of using inter-client heterogeneity in handling intra-client heterogeneity, we firstly propose a contrastive learning-based FL framework, namely FedICON, to capture invariant knowledge among heterogeneous clients and consistently tune the model to adapt to test data. In FedICON, each client performs sample-wise supervised contrastive learning during the local training phase, which enhances sample-wise invariance encoding ability. Through global aggregation, the invariance extraction ability can be mutually boosted among inter-client heterogeneity. During the test phase, our test-time adaptation procedure leverages unsupervised contrastive learning to guide the model to smoothly generalize to test data under intra-client heterogeneity. Extensive experiments validate the effectiveness of the proposed FedICON in taming heterogeneity to handle test-time shift problems. | Is Heterogeneity Notorious? Taming Heterogeneity to Handle Test-Time Shift in Federated Learning | [
"Yue Tan",
"Chen Chen",
"Weiming Zhuang",
"Xin Dong",
"Lingjuan Lyu",
"Guodong Long"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=qHzEFxtheD | @inproceedings{
dexter2023sketching,
title={Sketching Algorithms for Sparse Dictionary Learning: {PTAS} and Turnstile Streaming},
author={Gregory Dexter and Petros Drineas and David Woodruff and Taisuke Yasuda},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=qHzEFxtheD}
} | Sketching algorithms have recently proven to be a powerful approach both for designing low-space streaming algorithms as well as fast polynomial time approximation schemes (PTAS). In this work, we develop new techniques to extend the applicability of sketching-based approaches to the sparse dictionary learning and the Euclidean $k$-means clustering problems. In particular, we initiate the study of the challenging setting where the dictionary/clustering assignment for each of the $n$ input points must be output, which has surprisingly received little attention in prior work. On the fast algorithms front, we obtain a new approach for designing PTAS's for the $k$-means clustering problem, which generalizes to the first PTAS for the sparse dictionary learning problem. On the streaming algorithms front, we obtain new upper bounds and lower bounds for dictionary learning and $k$-means clustering. In particular, given a design matrix $\mathbf A\in\mathbb R^{n\times d}$ in a turnstile stream, we show an $\tilde O(nr/\epsilon^2 + dk/\epsilon)$ space upper bound for $r$-sparse dictionary learning of size $k$, an $\tilde O(n/\epsilon^2 + dk/\epsilon)$ space upper bound for $k$-means clustering, as well as an $\tilde O(n)$ space upper bound for $k$-means clustering on random order row insertion streams with a natural "bounded sensitivity" assumption. On the lower bounds side, we obtain a general $\tilde\Omega(n/\epsilon + dk/\epsilon)$ lower bound for $k$-means clustering, as well as an $\tilde\Omega(n/\epsilon^2)$ lower bound for algorithms which can estimate the cost of a single fixed set of candidate centers. | Sketching Algorithms for Sparse Dictionary Learning: PTAS and Turnstile Streaming | [
"Gregory Dexter",
"Petros Drineas",
"David Woodruff",
"Taisuke Yasuda"
] | Conference | poster | 2310.19068 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=qHrZszJSXj | @inproceedings{
galli2023dont,
title={Don't be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models},
author={Leonardo Galli and Holger Rauhut and Mark Schmidt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=qHrZszJSXj}
} | Recent works have shown that line search methods can speed up Stochastic Gradient Descent (SGD) and Adam in modern over-parameterized settings. However, existing line searches may take steps that are smaller than necessary since they require a monotone decrease of the (mini-)batch objective function. We explore nonmonotone line search methods to relax this condition and possibly accept larger step sizes. Despite the lack of a monotonic decrease, we prove the same fast rates of convergence as in the monotone case. Our experiments show that nonmonotone methods improve the speed of convergence and generalization properties of SGD/Adam even beyond the previous monotone line searches. We propose a POlyak NOnmonotone Stochastic (PoNoS) method, obtained by combining a nonmonotone line search with a Polyak initial step size. Furthermore, we develop a new resetting technique that in the majority of the iterations reduces the amount of backtracks to zero while still maintaining a large initial step size. To the best of our knowledge, a first runtime comparison shows that the epoch-wise advantage of line-search-based methods gets reflected in the overall computational time. | Don't be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models | [
"Leonardo Galli",
"Holger Rauhut",
"Mark Schmidt"
] | Conference | poster | 2306.12747 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=qHrADgAdYu | @inproceedings{
feng2023towards,
title={Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective},
author={Guhao Feng and Bohang Zhang and Yuntian Gu and Haotian Ye and Di He and Liwei Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=qHrADgAdYu}
} | Recent studies have discovered that Chain-of-Thought prompting (CoT) can dramatically improve the performance of Large Language Models (LLMs), particularly when dealing with complex tasks involving mathematics or reasoning. Despite the enormous empirical success, the underlying mechanisms behind CoT and how it unlocks the potential of LLMs remain elusive. In this paper, we take a first step towards theoretically answering these questions. Specifically, we examine the expressivity of LLMs with CoT in solving fundamental mathematical and decision-making problems. By using circuit complexity theory, we first give impossibility results showing that bounded-depth Transformers are unable to directly produce correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with respect to the input length. In contrast, we then prove by construction that autoregressive Transformers of constant size suffice to solve both tasks by generating CoT derivations using a commonly used math language format. Moreover, we show LLMs with CoT can handle a general class of decision-making problems known as Dynamic Programming, thus justifying their power in tackling complex real-world tasks. Finally, an extensive set of experiments show that, while Transformers always fail to directly predict the answers, they can consistently learn to generate correct solutions step-by-step given sufficient CoT demonstrations. | Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective | [
"Guhao Feng",
"Bohang Zhang",
"Yuntian Gu",
"Haotian Ye",
"Di He",
"Liwei Wang"
] | Conference | oral | 2305.15408 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=qCglMj6A4z | @inproceedings{
koloskova2023gradient,
title={Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy},
author={Anastasia Koloskova and Ryan McKenna and Zachary Charles and J Keith Rush and Hugh Brendan McMahan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=qCglMj6A4z}
} | We study gradient descent under linearly correlated noise. Our work is motivated by recent practical methods for optimization with differential privacy (DP), such as DP-FTRL, which achieve strong performance in settings where privacy amplification techniques are infeasible (such as in federated learning). These methods inject privacy noise through a matrix factorization mechanism, making the noise *linearly correlated* over iterations. We propose a simplified setting that distills key facets of these methods and isolates the impact of linearly correlated noise. We analyze the behavior of gradient descent in this setting, for both convex and non-convex functions. Our analysis is demonstrably tighter than prior work and recovers multiple important special cases exactly (including anticorrelated perturbed gradient descent). We use our results to develop new, effective matrix factorizations for differentially private optimization, and highlight the benefits of these factorizations theoretically and empirically. | Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy | [
"Anastasia Koloskova",
"Ryan McKenna",
"Zachary Charles",
"J Keith Rush",
"Hugh Brendan McMahan"
] | Conference | poster | 2302.01463 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=qBAED3u1XZ | @inproceedings{
yin2023vlattack,
title={{VLATTACK}: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models},
author={Ziyi Yin and Muchao Ye and Tianrong Zhang and Tianyu Du and Jinguo Zhu and Han Liu and Jinghui Chen and Ting Wang and Fenglong Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=qBAED3u1XZ}
} | Vision-Language (VL) pre-trained models have shown their superiority on many multimodal tasks. However, the adversarial robustness of such models has not been fully explored. Existing approaches mainly focus on exploring the adversarial robustness under the white-box setting, which is unrealistic. In this paper, we aim to investigate a new yet practical task to craft image and text perturbations using pre-trained VL models to attack black-box fine-tuned models on different downstream tasks. Towards this end, we propose VLATTACK to generate adversarial samples by fusing perturbations of images and texts from both single-modal and multi-modal levels. At the single-modal level, we propose a new block-wise similarity attack (BSA) strategy to learn image perturbations for disrupting universal representations. Besides, we adopt an existing text attack strategy to generate text perturbations independent of the image-modal attack. At the multi-modal level, we design a novel iterative cross-search attack (ICSA) method to update adversarial image-text pairs periodically, starting with the outputs from the single-modal level. We conduct extensive experiments to attack three widely-used VL pretrained models for six tasks on eight datasets. Experimental results show that the proposed VLATTACK framework achieves the highest attack success rates on all tasks compared with state-of-the-art baselines, which reveals a significant blind spot in the deployment of pre-trained VL models. | VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models | [
"Ziyi Yin",
"Muchao Ye",
"Tianrong Zhang",
"Tianyu Du",
"Jinguo Zhu",
"Han Liu",
"Jinghui Chen",
"Ting Wang",
"Fenglong Ma"
] | Conference | poster | 2310.04655 | [
"https://github.com/ericyinyzy/vlattack"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=qA0uHmaVKk | @inproceedings{
wu2023complexvalued,
title={Complex-valued Neurons Can Learn More but Slower than Real-valued Neurons via Gradient Descent},
author={Jin-Hui Wu and Shao-Qun Zhang and Yuan Jiang and Zhi-Hua Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=qA0uHmaVKk}
} | Complex-valued neural networks potentially possess better representations and performance than real-valued counterparts when dealing with some complicated tasks such as acoustic analysis, radar image classification, etc. Despite empirical successes, it remains unknown theoretically when and to what extent complex-valued neural networks outperform real-valued ones. We take one step in this direction by comparing the learnability of real-valued neurons and complex-valued neurons via gradient descent. We show that a complex-valued neuron can efficiently learn functions expressed by any one real-valued neuron and any one complex-valued neuron with convergence rate $O(t^{-3})$ and $O(t^{-1})$ where $t$ is the iteration index of gradient descent, respectively, whereas a two-layer real-valued neural network with finite width cannot learn a single non-degenerate complex-valued neuron. We prove that a complex-valued neuron learns a real-valued neuron with rate $\Omega (t^{-3})$, exponentially slower than the $O(\mathrm{e}^{- c t})$ rate of learning one real-valued neuron using a real-valued neuron with a constant $c$. We further verify and extend these results via simulation experiments in more general settings. | Complex-valued Neurons Can Learn More but Slower than Real-valued Neurons via Gradient Descent | [
"Jin-Hui Wu",
"Shao-Qun Zhang",
"Yuan Jiang",
"Zhi-Hua Zhou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=q9WMXjUxxT | @inproceedings{
kim2023trust,
title={Trust Region-Based Safe Distributional Reinforcement Learning for Multiple Constraints},
author={Dohyeong Kim and Kyungjae Lee and Songhwai Oh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q9WMXjUxxT}
} | In safety-critical robotic tasks, potential failures must be reduced, and multiple constraints must be met, such as avoiding collisions, limiting energy consumption, and maintaining balance.
Thus, applying safe reinforcement learning (RL) in such robotic tasks requires to handle multiple constraints and use risk-averse constraints rather than risk-neutral constraints.
To this end, we propose a trust region-based safe RL algorithm for multiple constraints called a safe distributional actor-critic (SDAC).
Our main contributions are as follows: 1) introducing a gradient integration method to manage infeasibility issues in multi-constrained problems, ensuring theoretical convergence, and 2) developing a TD($\lambda$) target distribution to estimate risk-averse constraints with low biases.
We evaluate SDAC through extensive experiments involving multi- and single-constrained robotic tasks.
While maintaining high scores, SDAC shows 1.93 times fewer steps to satisfy all constraints in multi-constrained tasks and 1.78 times fewer constraint violations in single-constrained tasks compared to safe RL baselines.
Code is available at: https://github.com/rllab-snu/Safe-Distributional-Actor-Critic. | Trust Region-Based Safe Distributional Reinforcement Learning for Multiple Constraints | [
"Dohyeong Kim",
"Kyungjae Lee",
"Songhwai Oh"
] | Conference | poster | 2301.10923 | [
"https://github.com/rllab-snu/safe-distributional-actor-critic"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=q8mH2d6uw2 | @inproceedings{
wang2023deep,
title={Deep Contract Design via Discontinuous Networks},
author={Tonghan Wang and Paul Duetting and Dmitry Ivanov and Inbal Talgam-Cohen and David C. Parkes},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q8mH2d6uw2}
} | Contract design involves a principal who establishes contractual agreements about payments for outcomes that arise from the actions of an agent. In this paper, we initiate the study of deep learning for the automated design of optimal contracts. We introduce a novel representation: the Discontinuous ReLU (DeLU) network, which models the principal's utility as a discontinuous piecewise affine function of the design of a contract where each piece corresponds to the agent taking a particular action. DeLU networks implicitly learn closed-form expressions for the incentive compatibility constraints of the agent and the utility maximization objective of the principal, and support parallel inference on each piece through linear programming or interior-point methods that solve for optimal contracts. We provide empirical results that demonstrate success in approximating the principal's utility function with a small number of training samples and scaling to find approximately optimal contracts on problems with a large number of actions and outcomes. | Deep Contract Design via Discontinuous Networks | [
"Tonghan Wang",
"Paul Duetting",
"Dmitry Ivanov",
"Inbal Talgam-Cohen",
"David C. Parkes"
] | Conference | poster | 2307.02318 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=q8SukwaEBy | @inproceedings{
peng2023learning,
title={Learning from Active Human Involvement through Proxy Value Propagation},
author={Zhenghao Peng and Wenjie Mo and Chenda Duan and Quanyi Li and Bolei Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q8SukwaEBy}
} | Learning from active human involvement enables the human subject to actively intervene and demonstrate to the AI agent during training. The interaction and corrective feedback from human brings safety and AI alignment to the learning process. In this work, we propose a new reward-free active human involvement method called Proxy Value Propagation for policy optimization. Our key insight is that a proxy value function can be designed to express human intents, wherein state-action pairs in the human demonstration are labeled with high values, while those agents’ actions that are intervened receive low values. Through the TD-learning framework, labeled values of demonstrated state-action pairs are further propagated to other unlabeled data generated from agents’ exploration. The proxy value function thus induces a policy that faithfully emulates human behaviors. Human-in-the-loop experiments show the generality and efficiency of our method. With minimal modification to existing reinforcement learning algorithms, our method can learn to solve continuous and discrete control tasks with various human control devices, including the challenging task of driving in Grand Theft Auto V. Demo video and code are available at: https://metadriverse.github.io/pvp. | Learning from Active Human Involvement through Proxy Value Propagation | [
"Zhenghao Peng",
"Wenjie Mo",
"Chenda Duan",
"Quanyi Li",
"Bolei Zhou"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=q6bVqOgGxP | @inproceedings{
han2023triple,
title={Triple Eagle: Simple, Fast and Practical Budget-Feasible Mechanisms},
author={Kai Han and You Wu and He Huang and Shuang Cui},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q6bVqOgGxP}
} | We revisit the classical problem of designing Budget-Feasible Mechanisms (BFMs) for submodular valuation functions, which has been extensively studied since the seminal paper of Singer [FOCS’10] due to its wide applications in crowdsourcing and social marketing. We propose TripleEagle, a novel algorithmic framework for designing BFMs, based on which we present several simple yet effective BFMs that achieve better approximation ratios than the state-of-the-art work for both monotone and non-monotone submodular valuation functions. Moreover, our BFMs are the first in the literature to achieve linear complexities while ensuring obvious strategyproofness, making them more practical than the previous BFMs. We conduct extensive experiments to evaluate the empirical performance of our BFMs, and the experimental results strongly demonstrate the efficiency and effectiveness of our approach. | Triple Eagle: Simple, Fast and Practical Budget-Feasible Mechanisms | [
"Kai Han",
"You Wu",
"He Huang",
"Shuang Cui"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=q6X038vKgU | @inproceedings{
kollovieh2023predict,
title={Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting},
author={Marcel Kollovieh and Abdul Fatir Ansari and Michael Bohlke-Schneider and Jasper Zschiegner and Hao Wang and Bernie Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q6X038vKgU}
} | Diffusion models have achieved state-of-the-art performance in generative modeling tasks across various domains. Prior works on time series diffusion models have primarily focused on developing conditional models tailored to specific forecasting or imputation tasks. In this work, we explore the potential of task-agnostic, unconditional diffusion models for several time series applications. We propose TSDiff, an unconditionally-trained diffusion model for time series. Our proposed self-guidance mechanism enables conditioning TSDiff for downstream tasks during inference, without requiring auxiliary networks or altering the training procedure. We demonstrate the effectiveness of our method on three different time series tasks: forecasting, refinement, and synthetic data generation. First, we show that TSDiff is competitive with several task-specific conditional forecasting methods (*predict*). Second, we leverage the learned implicit probability density of TSDiff to iteratively refine the predictions of base forecasters with reduced computational overhead over reverse diffusion (*refine*). Notably, the generative performance of the model remains intact — downstream forecasters trained on synthetic samples from TSDiff outperform forecasters that are trained on samples from other state-of-the-art generative time series models, occasionally even outperforming models trained on real data (*synthesize*).
Our code is available at https://github.com/amazon-science/unconditional-time-series-diffusion | Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting | [
"Marcel Kollovieh",
"Abdul Fatir Ansari",
"Michael Bohlke-Schneider",
"Jasper Zschiegner",
"Hao Wang",
"Bernie Wang"
] | Conference | poster | 2307.11494 | [
"https://github.com/amazon-science/unconditional-time-series-diffusion"
] | https://huggingface.co/papers/2307.11494 | 0 | 1 | 0 | 6 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=q5FAZAIooz | @inproceedings{
luo2023difffoley,
title={Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models},
author={Simian Luo and Chuanhao Yan and Chenxu Hu and Hang Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q5FAZAIooz}
} | The Video-to-Audio (V2A) model has recently gained attention for its practical application in generating audio directly from silent videos, particularly in video/film production. However, previous methods in V2A have limited generation quality in terms of temporal synchronization and audio-visual relevance. We present Diff-Foley, a synchronized Video-to-Audio synthesis method with a latent diffusion model (LDM) that generates high-quality audio with improved synchronization and audio-visual relevance. We adopt contrastive audio-visual pretraining (CAVP) to learn more temporally and semantically aligned features, then train an LDM with CAVP-aligned visual features on spectrogram latent space. The CAVP-aligned features enable LDM to capture the subtler audio-visual correlation via a cross-attention module. We further significantly improve sample quality with `double guidance'. Diff-Foley achieves state-of-the-art V2A performance on current large scale V2A dataset. Furthermore, we demonstrate Diff-Foley practical applicability and adaptability via customized downstream finetuning. Project Page: https://diff-foley.github.io/ | Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models | [
"Simian Luo",
"Chuanhao Yan",
"Chenxu Hu",
"Hang Zhao"
] | Conference | poster | 2306.17203 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=q4HlFS7B7Y | @inproceedings{
bhalla2023discriminative,
title={Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability},
author={Usha Bhalla and Suraj Srinivas and Himabindu Lakkaraju},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q4HlFS7B7Y}
} | With the increased deployment of machine learning models in various real-world applications, researchers and practitioners alike have emphasized the need for explanations of model behaviour. To this end, two broad strategies have been outlined in prior literature to explain models. Post hoc explanation methods explain the behaviour of complex black-box models by identifying features critical to model predictions; however, prior work has shown that these explanations may not be faithful, in that they incorrectly attribute high importance to features that are unimportant or non-discriminative for the underlying task. Inherently interpretable models, on the other hand, circumvent these issues by explicitly encoding explanations into model architecture, meaning their explanations are naturally faithful, but they often exhibit poor predictive performance due to their limited expressive power. In this work, we identify a key reason for the lack of faithfulness of feature attributions: the lack of robustness of the underlying black-box models, especially the erasure of unimportant distractor features in the input. To address this issue, we propose Distractor Erasure Tuning (DiET), a method that adapts black-box models to be robust to distractor erasure, thus providing discriminative and faithful attributions. This strategy naturally combines the ease-of-use of post hoc explanations with the faithfulness of inherently interpretable models. We perform extensive experiments on semi-synthetic and real-world datasets, and show that DiET produces models that (1) closely approximate the original black-box models they are intended to explain, and (2) yield explanations that match approximate ground truths available by construction. | Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability | [
"Usha Bhalla",
"Suraj Srinivas",
"Himabindu Lakkaraju"
] | Conference | poster | 2307.15007 | [
"https://github.com/ai4life-group/diet"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=q3fCWoC9l0 | @inproceedings{
jain2023efficient,
title={Efficient Data Subset Selection to Generalize Training Across Models: Transductive and Inductive Networks},
author={Eeshaan Jain and Tushar Nandy and Gaurav Aggarwal and Ashish V. Tendulkar and Rishabh K Iyer and Abir De},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q3fCWoC9l0}
} | Existing subset selection methods for efficient learning predominantly employ discrete combinatorial and model-specific approaches, which lack generalizability--- for each new model, the algorithm has to be executed from the beginning. Therefore, for an unseen architecture, one cannot use the subset chosen for a different model. In this work, we propose $\texttt{SubSelNet}$, a non-adaptive subset selection framework, which tackles these problems. Here, we first introduce an attention-based neural gadget that leverages the graph structure of architectures and acts as a surrogate to trained deep neural networks for quick model prediction. Then, we use these predictions to build subset samplers. This naturally provides us two variants of $\texttt{SubSelNet}$. The first variant is transductive (called Transductive-$\texttt{SubSelNet}$), which computes the subset separately for each model by solving a small optimization problem. Such an optimization is still super fast, thanks to the replacement of explicit model training by the model approximator. The second variant is inductive (called Inductive-$\texttt{SubSelNet}$), which computes the subset using a trained subset selector, without any optimization.
Our experiments show that our model outperforms several methods across several real datasets. | Efficient Data Subset Selection to Generalize Training Across Models: Transductive and Inductive Networks | [
"Eeshaan Jain",
"Tushar Nandy",
"Gaurav Aggarwal",
"Ashish V. Tendulkar",
"Rishabh K Iyer",
"Abir De"
] | Conference | poster | 2409.12255 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=q3fA5tTod3 | @inproceedings{
sarch2023brain,
title={Brain Dissection: f{MRI}-trained Networks Reveal Spatial Selectivity in the Processing of Natural Images},
author={Gabriel Herbert Sarch and Michael J. Tarr and Katerina Fragkiadaki and Leila Wehbe},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q3fA5tTod3}
} | The alignment between deep neural network (DNN) features and cortical responses currently provides the most accurate quantitative explanation for higher visual areas. At the same time, these model features have been critiqued as uninterpretable explanations, trading one black box (the human brain) for another (a neural network). In this paper, we train networks to directly predict, from scratch, brain responses to images from a large-scale dataset of natural scenes (Allen et. al., 2021). We then use "network dissection" (Bau et. al., 2017), an explainable AI technique used for enhancing neural network interpretability by identifying and localizing the most significant features in images for individual units of a trained network, and which has been used to study category selectivity in the human brain (Khosla & Wehbe, 2022). We adapt this approach to create a hypothesis-neutral model that is then used to explore the tuning properties of specific visual regions beyond category selectivity, which we call "brain dissection". We use brain dissection to examine a range of ecologically important, intermediate properties, including depth, surface normals, curvature, and object relations across sub-regions of the parietal, lateral, and ventral visual streams, and scene-selective regions. Our findings reveal distinct preferences in brain regions for interpreting visual scenes, with ventro-lateral areas favoring closer and curvier features, medial and parietal areas opting for more varied and flatter 3D elements, and the parietal region uniquely preferring spatial relations. Scene-selective regions exhibit varied preferences, as the retrosplenial complex prefers distant and outdoor features, while the occipital and parahippocampal place areas favor proximity, verticality, and in the case of the OPA, indoor elements. Such findings show the potential of using explainable AI to uncover spatial feature selectivity across the visual cortex, contributing to a deeper, more fine-grained understanding of the functional characteristics of human visual cortex when viewing natural scenes. | Brain Dissection: fMRI-trained Networks Reveal Spatial Selectivity in the Processing of Natural Images | [
"Gabriel Herbert Sarch",
"Michael J. Tarr",
"Katerina Fragkiadaki",
"Leila Wehbe"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=q1JukwH2yP | @inproceedings{
ma2023learning,
title={Learning to Search Feasible and Infeasible Regions of Routing Problems with Flexible Neural k-Opt},
author={Yining Ma and Zhiguang Cao and Yeow Meng Chee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q1JukwH2yP}
} | In this paper, we present Neural k-Opt (NeuOpt), a novel learning-to-search (L2S) solver for routing problems. It learns to perform flexible k-opt exchanges based on a tailored action factorization method and a customized recurrent dual-stream decoder. As a pioneering work to circumvent the pure feasibility masking scheme and enable the autonomous exploration of both feasible and infeasible regions, we then propose the Guided Infeasible Region Exploration (GIRE) scheme, which supplements the NeuOpt policy network with feasibility-related features and leverages reward shaping to steer reinforcement learning more effectively. Additionally, we equip NeuOpt with Dynamic Data Augmentation (D2A) for more diverse searches during inference. Extensive experiments on the Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP) demonstrate that our NeuOpt not only significantly outstrips existing (masking-based) L2S solvers, but also showcases superiority over the learning-to-construct (L2C) and learning-to-predict (L2P) solvers. Notably, we offer fresh perspectives on how neural solvers can handle VRP constraints. Our code is available: https://github.com/yining043/NeuOpt. | Learning to Search Feasible and Infeasible Regions of Routing Problems with Flexible Neural k-Opt | [
"Yining Ma",
"Zhiguang Cao",
"Yeow Meng Chee"
] | Conference | poster | 2310.18264 | [
"https://github.com/yining043/neuopt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=q131tA7HCT | @inproceedings{
buchholz2023learning,
title={Learning Linear Causal Representations from Interventions under General Nonlinear Mixing},
author={Simon Buchholz and Goutham Rajendran and Elan Rosenfeld and Bryon Aragam and Bernhard Sch{\"o}lkopf and Pradeep Kumar Ravikumar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q131tA7HCT}
} | We study the problem of learning causal representations from unknown, latent interventions in a general setting, where the latent distribution is Gaussian but the mixing function is completely general. We prove strong identifiability results given unknown single-node interventions, i.e., without having access to the intervention targets. This generalizes prior works which have focused on weaker classes, such as linear maps or paired counterfactual data. This is also the first instance of identifiability from non-paired interventions for deep neural network embeddings and general causal structures. Our proof relies on carefully uncovering the high-dimensional geometric structure present in the data distribution after a non-linear density transformation, which we capture by analyzing quadratic forms of precision matrices of the latent distributions. Finally, we propose a contrastive algorithm to identify the latent variables in practice and evaluate its performance on various tasks. | Learning Linear Causal Representations from Interventions under General Nonlinear Mixing | [
"Simon Buchholz",
"Goutham Rajendran",
"Elan Rosenfeld",
"Bryon Aragam",
"Bernhard Schölkopf",
"Pradeep Kumar Ravikumar"
] | Conference | oral | 2306.02235 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=q0sdoFIfNg | @inproceedings{
lee2023spqr,
title={{SPQR}: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning},
author={Dohyeok Lee and Seungyub Han and Taehyun Cho and Jungwoo Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q0sdoFIfNg}
} | Alleviating overestimation bias is a critical challenge for deep reinforcement learning to achieve successful performance on more complex tasks or offline datasets containing out-of-distribution data.
In order to overcome overestimation bias, ensemble methods for Q-learning have been investigated to exploit the diversity of multiple Q-functions.
Since network initialization has been the predominant approach to promote diversity in Q-functions, heuristically designed diversity injection methods have been studied in the literature.
However, previous studies have not attempted to approach guaranteed independence over an ensemble from a theoretical perspective.
By introducing a novel regularization loss for Q-ensemble independence based on random matrix theory, we propose spiked Wishart Q-ensemble independence regularization (SPQR) for reinforcement learning.
Specifically, we modify the intractable hypothesis testing criterion for the Q-ensemble independence into a tractable KL divergence between the spectral distribution of the Q-ensemble and the target Wigner's semicircle distribution.
We implement SPQR in several online and offline ensemble Q-learning algorithms.
In the experiments, SPQR outperforms the baseline algorithms in both online and offline RL benchmarks. | SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning | [
"Dohyeok Lee",
"Seungyub Han",
"Taehyun Cho",
"Jungwoo Lee"
] | Conference | poster | 2401.03137 | [
"https://github.com/dohyeoklee/SPQR"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=q0RfX96un8 | @inproceedings{
datta2023on,
title={On the Consistency of Maximum Likelihood Estimation of Probabilistic Principal Component Analysis},
author={Arghya Datta and Sayak Chakrabarty},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=q0RfX96un8}
} | Probabilistic principal component analysis (PPCA) is currently one of the most used statistical tools to reduce the ambient dimension of the data. From multidimensional scaling to the imputation of missing data, PPCA has a broad spectrum of applications ranging from science and engineering to quantitative finance.\\
Despite this wide applicability in various fields, hardly any theoretical guarantees exist to justify the soundness of the maximal likelihood (ML) solution for this model. In fact, it is well known that the maximum likelihood estimation (MLE) can only recover the true model parameters up to a rotation. The main obstruction is posed by the inherent identifiability nature of the PPCA model resulting from the rotational symmetry of the parameterization. To resolve this ambiguity, we propose a novel approach using quotient topological spaces and in particular, we show that the maximum likelihood solution is consistent in an appropriate quotient Euclidean space. Furthermore, our consistency results encompass a more general class of estimators beyond the MLE. Strong consistency of the ML estimate and consequently strong covariance estimation of the PPCA model have also been established under a compactness assumption. | On the Consistency of Maximum Likelihood Estimation of Probabilistic Principal Component Analysis | [
"Arghya Datta",
"Sayak Chakrabarty"
] | Conference | poster | 2311.05046 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pzc6LnUxYN | @inproceedings{
cheng2023statemask,
title={StateMask: Explaining Deep Reinforcement Learning through State Mask},
author={Zelei Cheng and Xian Wu and Jiahao Yu and Wenhai Sun and Wenbo Guo and Xinyu Xing},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pzc6LnUxYN}
} | Despite the promising performance of deep reinforcement learning (DRL) agents in many challenging scenarios, the black-box nature of these agents greatly limits their applications in critical domains. Prior research has proposed several explanation techniques to understand the deep learning-based policies in RL. Most existing methods explain why an agent takes individual actions rather than pinpointing the critical steps to its final reward. To fill this gap, we propose StateMask, a novel method to identify the states most critical to the agent's final reward. The high-level idea of StateMask is to learn a mask net that blinds a target agent and forces it to take random actions at some steps without compromising the agent's performance. Through careful design, we can theoretically ensure that the masked agent performs similarly to the original agent. We evaluate StateMask in various popular RL environments and show its superiority over existing explainers in explanation fidelity. We also show that StateMask has better utilities, such as launching adversarial attacks and patching policy errors. | StateMask: Explaining Deep Reinforcement Learning through State Mask | [
"Zelei Cheng",
"Xian Wu",
"Jiahao Yu",
"Wenhai Sun",
"Wenbo Guo",
"Xinyu Xing"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pw5hEuEroL | @inproceedings{
wang2023unified,
title={Unified Enhancement of Privacy Bounds for Mixture Mechanisms via \$f\$-Differential Privacy},
author={Chendi Wang and Buxin Su and Jiayuan Ye and Reza Shokri and Weijie J Su},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pw5hEuEroL}
} | Differentially private (DP) machine learning algorithms incur many sources of randomness, such as random initialization, random batch subsampling, and shuffling. However, such randomness is difficult to take into account when proving differential privacy bounds because it induces mixture distributions for the algorithm's output that are difficult to analyze.
This paper focuses on improving privacy bounds for shuffling models and one-iteration differentially private gradient descent (DP-GD) with random initializations using $f$-DP.
We derive a closed-form expression of the trade-off function for shuffling models that outperforms the most up-to-date results based on $(\epsilon,\delta)$-DP.
Moreover, we investigate the effects of random initialization on the privacy of one-iteration DP-GD.
Our numerical computations of the trade-off function indicate that random initialization can enhance the privacy of DP-GD.
Our analysis of $f$-DP guarantees for these mixture mechanisms relies on an inequality for trade-off functions introduced in this paper. This inequality implies the joint convexity of $F$-divergences.
Finally, we study an $f$-DP analog of the advanced joint convexity of the hockey-stick divergence related to $(\epsilon,\delta)$-DP and apply it to analyze the privacy of mixture mechanisms. | Unified Enhancement of Privacy Bounds for Mixture Mechanisms via f-Differential Privacy | [
"Chendi Wang",
"Buxin Su",
"Jiayuan Ye",
"Reza Shokri",
"Weijie J Su"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pvSKVt3EsM | @inproceedings{
cao2023flowattentionbased,
title={Flow-Attention-based Spatio-Temporal Aggregation Network for 3D Mask Detection},
author={Yuxin Cao and Yian Li and Yumeng Zhu and Derui Wang and Minhui Xue},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pvSKVt3EsM}
} | Anti-spoofing detection has become a necessity for face recognition systems due to the security threat posed by spoofing attacks. Despite great success in traditional attacks, most deep-learning-based methods perform poorly in 3D masks, which can highly simulate real faces in appearance and structure, suffering generalizability insufficiency while focusing only on the spatial domain with single frame input. This has been mitigated by the recent introduction of a biomedical technology called rPPG (remote photoplethysmography). However, rPPG-based methods are sensitive to noisy interference and require at least one second (> 25 frames) of observation time, which induces high computational overhead. To address these challenges, we propose a novel 3D mask detection framework, called FASTEN (Flow-Attention-based Spatio-Temporal aggrEgation Network). We tailor the network for focusing more on fine-grained details in large movements, which can eliminate redundant spatio-temporal feature interference and quickly capture splicing traces of 3D masks in fewer frames. Our proposed network contains three key modules: 1) a facial optical flow network to obtain non-RGB inter-frame flow information; 2) flow attention to assign different significance to each frame; 3) spatio-temporal aggregation to aggregate high-level spatial features and temporal transition features. Through extensive experiments, FASTEN only requires five frames of input and outperforms eight competitors for both intra-dataset and cross-dataset evaluations in terms of multiple detection metrics. Moreover, FASTEN has been deployed in real-world mobile devices for practical 3D mask detection. | Flow-Attention-based Spatio-Temporal Aggregation Network for 3D Mask Detection | [
"Yuxin Cao",
"Yian Li",
"Yumeng Zhu",
"Derui Wang",
"Minhui Xue"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pvPujuvjQd | @inproceedings{
daniely2023most,
title={Most Neural Networks Are Almost Learnable},
author={Amit Daniely and Nathan Srebro and Gal Vardi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pvPujuvjQd}
} | We present a PTAS for learning random constant-depth networks. We show that for any fixed $\epsilon>0$ and depth $i$, there is a poly-time algorithm that for any distribution on $\sqrt{d} \cdot \mathbb{S}^{d-1}$ learns random Xavier networks of depth $i$, up to an additive error of $\epsilon$. The algorithm runs in time and sample complexity of $(\bar{d})^{\mathrm{poly}(\epsilon^{-1})}$, where $\bar d$ is the size of the network. For some cases of sigmoid and ReLU-like activations the bound can be improved to $(\bar{d})^{\mathrm{polylog}(\epsilon^{-1})}$, resulting in a quasi-poly-time algorithm for learning constant depth random networks. | Most Neural Networks Are Almost Learnable | [
"Amit Daniely",
"Nathan Srebro",
"Gal Vardi"
] | Conference | poster | 2305.16508 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=puupdGOWUp | @inproceedings{
ju2023graphpatcher,
title={GraphPatcher: Mitigating Degree Bias for Graph Neural Networks via Test-time Augmentation},
author={Mingxuan Ju and Tong Zhao and Wenhao Yu and Neil Shah and Yanfang Ye},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=puupdGOWUp}
} | Recent studies have shown that graph neural networks (GNNs) exhibit strong biases towards the node degree: they usually perform satisfactorily on high-degree nodes with rich neighbor information but struggle with low-degree nodes. Existing works tackle this problem by deriving either designated GNN architectures or training strategies specifically for low-degree nodes. Though effective, these approaches unintentionally create an artificial out-of-distribution scenario, where models mainly or even only observe low-degree nodes during the training, leading to a downgraded performance for high-degree nodes that GNNs originally perform well at. In light of this, we propose a test-time augmentation framework, namely GraphPatcher, to enhance test-time generalization of any GNNs on low-degree nodes. Specifically, GraphPatcher iteratively generates virtual nodes to patch artificially created low-degree nodes via corruptions, aiming at progressively reconstructing target GNN's predictions over a sequence of increasingly corrupted nodes. Through this scheme, GraphPatcher not only learns how to enhance low-degree nodes (when the neighborhoods are heavily corrupted) but also preserves the original superior performance of GNNs on high-degree nodes (when lightly corrupted). Additionally, GraphPatcher is model-agnostic and can also mitigate the degree bias for either self-supervised or supervised GNNs. Comprehensive experiments are conducted over seven benchmark datasets and GraphPatcher consistently enhances common GNNs' overall performance by up to 3.6% and low-degree performance by up to 6.5%, significantly outperforming state-of-the-art baselines. The source code is publicly available at https://github.com/jumxglhf/GraphPatcher. | GraphPatcher: Mitigating Degree Bias for Graph Neural Networks via Test-time Augmentation | [
"Mingxuan Ju",
"Tong Zhao",
"Wenhao Yu",
"Neil Shah",
"Yanfang Ye"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=psXVkKO9No | @inproceedings{
catt2023selfpredictive,
title={Self-Predictive Universal {AI}},
author={Elliot Catt and Jordi Grau-Moya and Marcus Hutter and Matthew Aitchison and Tim Genewein and Gregoire Deletang and Li Kevin Wenliang and Joel Veness},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=psXVkKO9No}
} | Reinforcement Learning (RL) algorithms typically utilize learning and/or planning techniques to derive effective policies. The integration of both approaches has proven to be highly successful in addressing complex sequential decision-making challenges, as evidenced by algorithms such as AlphaZero and MuZero, which consolidate the planning process into a parametric search-policy. AIXI, the most potent theoretical universal agent, leverages planning through comprehensive search as its primary means to find an optimal policy. Here we define an alternative universal agent, which we call Self-AIXI, that on the contrary to AIXI, maximally exploits learning to obtain good policies. It does so by self-predicting its own stream of action data, which is generated, similarly to other TD(0) agents, by taking an action maximization step over the current on-policy (universal mixture-policy) Q-value estimates. We prove that Self-AIXI converges to AIXI, and inherits a series of properties like maximal Legg-Hutter intelligence and the self-optimizing property. | Self-Predictive Universal AI | [
"Elliot Catt",
"Jordi Grau-Moya",
"Marcus Hutter",
"Matthew Aitchison",
"Tim Genewein",
"Gregoire Deletang",
"Li Kevin Wenliang",
"Joel Veness"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=prftZp6mDH | @inproceedings{
jha2023label,
title={Label Poisoning is All You Need},
author={Rishi Dev Jha and Jonathan Hayase and Sewoong Oh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=prftZp6mDH}
} | In a backdoor attack, an adversary injects corrupted data into a model's training dataset in order to gain control over its predictions on images with a specific attacker-defined trigger. A typical corrupted training example requires altering both the image, by applying the trigger, and the label. Models trained on clean images, therefore, were considered safe from backdoor attacks. However, in some common machine learning scenarios, the training labels are provided by potentially malicious third-parties. This includes crowd-sourced annotation and knowledge distillation. We, hence, investigate a fundamental question: can we launch a successful backdoor attack by only corrupting labels? We introduce a novel approach to design label-only backdoor attacks, which we call FLIP, and demonstrate its strengths on three datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet) and four architectures (ResNet-32, ResNet-18, VGG-19, and Vision Transformer). With only 2% of CIFAR-10 labels corrupted, FLIP achieves a near-perfect attack success rate of 99.4% while suffering only a 1.8% drop in the clean test accuracy. Our approach builds upon the recent advances in trajectory matching, originally introduced for dataset distillation. | Label Poisoning is All You Need | [
"Rishi Dev Jha",
"Jonathan Hayase",
"Sewoong Oh"
] | Conference | poster | 2310.18933 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=prIwYTU9PV | @inproceedings{
cai2023distributional,
title={Distributional Pareto-Optimal Multi-Objective Reinforcement Learning},
author={Xin-Qiang Cai and Pushi Zhang and Li Zhao and Jiang Bian and Masashi Sugiyama and Ashley Juan Llorens},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=prIwYTU9PV}
} | Multi-objective reinforcement learning (MORL) has been proposed to learn control policies over multiple competing objectives with each possible preference over returns. However, current MORL algorithms fail to account for distributional preferences over the multi-variate returns, which are particularly important in real-world scenarios such as autonomous driving. To address this issue, we extend the concept of Pareto-optimality in MORL into distributional Pareto-optimality, which captures the optimality of return distributions, rather than the expectations. Our proposed method, called Distributional Pareto-Optimal Multi-Objective Reinforcement Learning~(DPMORL), is capable of learning distributional Pareto-optimal policies that balance multiple objectives while considering the return uncertainty. We evaluated our method on several benchmark problems and demonstrated its effectiveness in discovering distributional Pareto-optimal policies and satisfying diverse distributional preferences compared to existing MORL methods. | Distributional Pareto-Optimal Multi-Objective Reinforcement Learning | [
"Xin-Qiang Cai",
"Pushi Zhang",
"Li Zhao",
"Jiang Bian",
"Masashi Sugiyama",
"Ashley Juan Llorens"
] | Conference | poster | [
"https://github.com/zpschang/dpmorl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ppJuFSOAnM | @inproceedings{
wang2023prolificdreamer,
title={ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation},
author={Zhengyi Wang and Cheng Lu and Yikai Wang and Fan Bao and Chongxuan Li and Hang Su and Jun Zhu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ppJuFSOAnM}
} | Score distillation sampling (SDS) has shown great promise in text-to-3D generation by distilling pretrained large-scale text-to-image diffusion models, but suffers from over-saturation, over-smoothing, and low-diversity problems. In this work, we propose to model the 3D parameter as a random variable instead of a constant as in SDS and present *variational score distillation* (VSD), a principled particle-based variational framework to explain and address the aforementioned issues in text-to-3D generation. We show that SDS is a special case of VSD and leads to poor samples with both small and large CFG weights. In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., 7.5). We further present various improvements in the design space for text-to-3D such as distillation time schedule and density initialization, which are orthogonal to the distillation algorithm yet not well explored. Our overall approach, dubbed *ProlificDreamer*, can generate high rendering resolution (i.e., 512$\times$512) and high-fidelity NeRF with rich structure and complex effects (e.g., smoke and drops). Further, initialized from NeRF, meshes fine-tuned by VSD are meticulously detailed and photo-realistic. | ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation | [
"Zhengyi Wang",
"Cheng Lu",
"Yikai Wang",
"Fan Bao",
"Chongxuan Li",
"Hang Su",
"Jun Zhu"
] | Conference | spotlight | 2305.16213 | [
"https://github.com/threestudio-project/threestudio"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pjSzKhSrfs | @inproceedings{
neklyudov2023wasserstein,
title={Wasserstein Quantum Monte Carlo: A Novel Approach for Solving the Quantum Many-Body Schr\"odinger Equation},
author={Kirill Neklyudov and Jannes Nys and Luca Thiede and Juan Felipe Carrasquilla Alvarez and qiang liu and Max Welling and Alireza Makhzani},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pjSzKhSrfs}
} | Solving the quantum many-body Schrödinger equation is a fundamental and challenging problem in the fields of quantum physics, quantum chemistry, and material sciences. One of the common computational approaches to this problem is Quantum Variational Monte Carlo (QVMC), in which ground-state solutions are obtained by minimizing the energy of the system within a restricted family of parameterized wave functions. Deep learning methods partially address the limitations of traditional QVMC by representing a rich family of wave functions in terms of neural networks. However, the optimization objective in QVMC remains notoriously hard to minimize and requires second-order optimization methods such as natural gradient. In this paper, we first reformulate energy functional minimization in the space of Born distributions corresponding to particle-permutation (anti-)symmetric wave functions, rather than the space of wave functions. We then interpret QVMC as the Fisher--Rao gradient flow in this distributional space, followed by a projection step onto the variational manifold. This perspective provides us with a principled framework to derive new QMC algorithms, by endowing the distributional space with better metrics, and following the projected gradient flow induced by those metrics. More specifically, we propose "Wasserstein Quantum Monte Carlo" (WQMC), which uses the gradient flow induced by the Wasserstein metric, rather than the Fisher--Rao metric, and corresponds to *transporting* the probability mass, rather than *teleporting* it. We demonstrate empirically that the dynamics of WQMC results in faster convergence to the ground state of molecular systems. | Wasserstein Quantum Monte Carlo: A Novel Approach for Solving the Quantum Many-Body Schrödinger Equation | [
"Kirill Neklyudov",
"Jannes Nys",
"Luca Thiede",
"Juan Felipe Carrasquilla Alvarez",
"qiang liu",
"Max Welling",
"Alireza Makhzani"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pirH9ycaNg | @inproceedings{
vakili2023kernelized,
title={Kernelized Reinforcement Learning with Order Optimal Regret Bounds},
author={Sattar Vakili and Julia Olkhovskaya},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pirH9ycaNg}
} | Modern reinforcement learning (RL) has shown empirical success in various real world settings with complex models and large state-action spaces. The existing analytical results, however, typically focus on settings with a small number of state-actions or simple models such as linearly modeled state-action value functions. To derive RL policies that efficiently handle large state-action spaces with more general value functions, some recent works have considered nonlinear function approximation using kernel ridge regression. We propose $\pi$-KRVI, an optimistic modification of least-squares value iteration, when the action-value function is represented by an RKHS. We prove the first order-optimal regret guarantees under a general setting. Our results show a significant polynomial in the number of episodes improvement over the state of the art. In particular, with highly non-smooth kernels (such as Neural Tangent kernel or some Matérn kernels) the existing results lead to trivial (superlinear in the number of episodes) regret bounds. We show a sublinear regret bound that is order optimal in the cases where a lower bound on regret is known (which includes the kernels mentioned above). | Kernelized Reinforcement Learning with Order Optimal Regret Bounds | [
"Sattar Vakili",
"Julia Olkhovskaya"
] | Conference | poster | 2306.07745 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=phnN1eu5AX | @inproceedings{
kim2023learning,
title={Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance},
author={Jinwoo Kim and Dat Tien Nguyen and Ayhan Suleymanzade and Hyeokjun An and Seunghoon Hong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=phnN1eu5AX}
} | We present a novel framework to overcome the limitations of equivariant architectures in learning functions with group symmetries. In contrast to equivariant architectures, we use an arbitrary base model such as an MLP or a transformer and symmetrize it to be equivariant to the given group by employing a small equivariant network that parameterizes the probabilistic distribution underlying the symmetrization. The distribution is trained end-to-end with the base model, which can maximize performance while reducing the sample complexity of symmetrization. We show that this approach ensures not only equivariance to the given group but also universal approximation capability in expectation. We implement our method on various base models, including patch-based transformers that can be initialized from pretrained vision transformers, and test them for a wide range of symmetry groups including permutation and Euclidean groups and their combinations. Empirical tests show competitive results against tailored equivariant architectures, suggesting the potential for learning equivariant functions for diverse groups using a non-equivariant universal base architecture. We further show evidence of enhanced learning in symmetric modalities, like graphs, when pretrained from non-symmetric modalities, like vision. Code is available at https://github.com/jw9730/lps. | Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance | [
"Jinwoo Kim",
"Dat Tien Nguyen",
"Ayhan Suleymanzade",
"Hyeokjun An",
"Seunghoon Hong"
] | Conference | spotlight | 2306.02866 | [
"https://github.com/jw9730/lps"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=phnGilhPH8 | @inproceedings{
yang2023fedfed,
title={FedFed: Feature Distillation against Data Heterogeneity in Federated Learning},
author={Zhiqin Yang and Yonggang Zhang and Yu Zheng and Xinmei Tian and Hao Peng and Tongliang Liu and Bo Han},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=phnGilhPH8}
} | Federated learning (FL) typically faces data heterogeneity, i.e., distribution shifts among clients.
Sharing clients' information has shown great potential in mitigating data heterogeneity, yet it incurs a dilemma between preserving privacy and promoting model performance. To alleviate the dilemma, we raise a fundamental question: Is it possible to share partial features in the data to tackle data heterogeneity?
In this work, we give an affirmative answer to this question by proposing a novel approach called **Fed**erated **Fe**ature **d**istillation (FedFed).
Specifically, FedFed partitions data into performance-sensitive features (i.e., greatly contributing to model performance) and performance-robust features (i.e., limitedly contributing to model performance).
The performance-sensitive features are globally shared to mitigate data heterogeneity, while the performance-robust features are kept locally.
FedFed enables clients to train models over local and shared data. Comprehensive experiments demonstrate the efficacy of FedFed in promoting model performance. | FedFed: Feature Distillation against Data Heterogeneity in Federated Learning | [
"Zhiqin Yang",
"Yonggang Zhang",
"Yu Zheng",
"Xinmei Tian",
"Hao Peng",
"Tongliang Liu",
"Bo Han"
] | Conference | poster | 2310.05077 | [
"https://github.com/visitworld123/fedfed"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pefAAzu8an | @inproceedings{
beck2023recurrent,
title={Recurrent Hypernetworks are Surprisingly Strong in Meta-{RL}},
author={Jacob Beck and Risto Vuorio and Zheng Xiong and Shimon Whiteson},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pefAAzu8an}
} | Deep reinforcement learning (RL) is notoriously impractical to deploy due to sample inefficiency. Meta-RL directly addresses this sample inefficiency by learning to perform few-shot learning when a distribution of related tasks is available for meta-training. While many specialized meta-RL methods have been proposed, recent work suggests that end-to-end learning in conjunction with an off-the-shelf sequential model, such as a recurrent network, is a surprisingly strong baseline. However, such claims have been controversial due to limited supporting evidence, particularly in the face of prior work establishing precisely the opposite. In this paper, we conduct an empirical investigation. While we likewise find that a recurrent network can achieve strong performance, we demonstrate that the use of hypernetworks is crucial to maximizing their potential. Surprisingly, when combined with hypernetworks, the recurrent baselines that are far simpler than existing specialized methods actually achieve the strongest performance of all methods evaluated. We provide code at https://github.com/jacooba/hyper. | Recurrent Hypernetworks are Surprisingly Strong in Meta-RL | [
"Jacob Beck",
"Risto Vuorio",
"Zheng Xiong",
"Shimon Whiteson"
] | Conference | poster | 2309.14970 | [
"https://github.com/jacooba/hyper"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pcuC65JWAa | @inproceedings{
grand-cl{\'e}ment2023reducing,
title={Reducing Blackwell and Average Optimality to Discounted {MDP}s via the Blackwell Discount Factor},
author={Julien Grand-Cl{\'e}ment and Marek Petrik},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pcuC65JWAa}
} | We introduce the Blackwell discount factor for Markov Decision Processes (MDPs). Classical objectives for MDPs include discounted, average, and Blackwell optimality. Many existing approaches to computing average-optimal policies solve for discount-optimal policies with a discount factor close to $1$, but they only work under strong or hard-to-verify assumptions on the MDP structure such as unichain or ergodicity. We are the first to highlight the shortcomings of the classical definition of Blackwell optimality, which does not lead to simple algorithms for computing Blackwell-optimal policies and overlooks the pathological behaviors of optimal policies as regards the discount factors. To resolve this issue, in this paper, we show that when the discount factor is larger than the Blackwell discount factor $\gamma_{\sf bw}$, all discount-optimal policies become Blackwell- and average-optimal, and we derive a general upper bound on $\gamma_{\sf bw}$. Our upper bound on $\gamma_{\sf bw}$, parametrized by the bit-size of the rewards and transition probabilities of the MDP instance, provides the first reduction from average and Blackwell optimality to discounted optimality, without any assumptions, along with new polynomial-time algorithms. Our work brings new ideas from polynomials and algebraic numbers to the analysis of MDPs. Our results also apply to robust MDPs, enabling the first algorithms to compute robust Blackwell-optimal policies. | Reducing Blackwell and Average Optimality to Discounted MDPs via the Blackwell Discount Factor | [
"Julien Grand-Clément",
"Marek Petrik"
] | Conference | poster | 2302.00036 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pcpjtYNJCH | @inproceedings{
magen2023initializationdependent,
title={Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks},
author={Roey Magen and Ohad Shamir},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pcpjtYNJCH}
} | We provide several new results on the sample complexity of vector-valued linear predictors (parameterized by a matrix), and more generally neural networks. Focusing on size-independent bounds, where only the Frobenius norm distance of the parameters from some fixed reference matrix $W_0$ is controlled, we show that the sample complexity behavior can be surprisingly different than what we may expect considering the well-studied setting of scalar-valued linear predictors. This also leads to new sample complexity bounds for feed-forward neural networks, tackling some open questions in the literature, and establishing a new convex linear prediction problem that is provably learnable without uniform convergence. | Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks | [
"Roey Magen",
"Ohad Shamir"
] | Conference | poster | 2305.16475 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pcKwgdVAlq | @inproceedings{
cai2023binarized,
title={Binarized Spectral Compressive Imaging},
author={Yuanhao Cai and Yuxin Zheng and Jing Lin and Xin Yuan and Yulun Zhang and Haoqian Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pcKwgdVAlq}
} | Existing deep learning models for hyperspectral image (HSI) reconstruction achieve good performance but require powerful hardware with enormous memory and computational resources. Consequently, these methods can hardly be deployed on resource-limited mobile devices. In this paper, we propose a novel method, Binarized Spectral-Redistribution Network (BiSRNet), for efficient and practical HSI restoration from compressed measurement in snapshot compressive imaging (SCI) systems. Firstly, we redesign a compact and easy-to-deploy base model to be binarized. Then we present the basic unit, Binarized Spectral-Redistribution Convolution (BiSR-Conv). BiSR-Conv can adaptively redistribute the HSI representations before binarizing activation and uses a scalable hyperbolic tangent function to more closely approximate the Sign function in backpropagation. Based on our BiSR-Conv, we customize four binarized convolutional modules to address the dimension mismatch and propagate full-precision information throughout the whole network. Finally, our BiSRNet is derived by using the proposed techniques to binarize the base model. Comprehensive quantitative and qualitative experiments demonstrate that our proposed BiSRNet outperforms state-of-the-art binarization algorithms. Code and models are publicly available at https://github.com/caiyuanhao1998/BiSCI | Binarized Spectral Compressive Imaging | [
"Yuanhao Cai",
"Yuxin Zheng",
"Jing Lin",
"Xin Yuan",
"Yulun Zhang",
"Haoqian Wang"
] | Conference | poster | 2305.10299 | [
"https://github.com/caiyuanhao1998/bisci"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pb1OwZNgr2 | @inproceedings{
huang2023learning,
title={Learning Generalizable Agents via Saliency-guided Features Decorrelation},
author={Sili Huang and Yanchao Sun and Jifeng Hu and Siyuan Guo and Hechang Chen and Yi Chang and Lichao Sun and Bo Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pb1OwZNgr2}
} | In visual-based Reinforcement Learning (RL), agents often struggle to generalize well to environmental variations in the state space that were not observed during training. The variations can arise in both task-irrelevant features, such as background noise, and task-relevant features, such as robot configurations, that are related to the optimal decisions. To achieve generalization in both situations, agents are required to accurately understand the impact of changed features on the decisions, i.e., establishing the true associations between changed features and decisions in the policy model. However, due to the inherent correlations among features in the state space, the associations between features and decisions become entangled, making it difficult for the policy to distinguish them. To this end, we propose Saliency-Guided Features Decorrelation (SGFD) to eliminate these correlations through sample reweighting. Concretely, SGFD consists of two core techniques: Random Fourier Functions (RFF) and the saliency map. RFF is utilized to estimate the complex non-linear correlations in high-dimensional images, while the saliency map is designed to identify the changed features. Under the guidance of the saliency map, SGFD employs sample reweighting to minimize the estimated correlations related to changed features, thereby achieving decorrelation in visual RL tasks. Our experimental results demonstrate that SGFD can generalize well on a wide range of test environments and significantly outperforms state-of-the-art methods in handling both task-irrelevant variations and task-relevant variations. | Learning Generalizable Agents via Saliency-guided Features Decorrelation | [
"Sili Huang",
"Yanchao Sun",
"Jifeng Hu",
"Siyuan Guo",
"Hechang Chen",
"Yi Chang",
"Lichao Sun",
"Bo Yang"
] | Conference | spotlight | 2310.05086 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=paa2OU5jN8 | @inproceedings{
huang2023freebloom,
title={Free-Bloom: Zero-Shot Text-to-Video Generator with {LLM} Director and {LDM} Animator},
author={Hanzhuo Huang and Yufan Feng and Cheng Shi and Lan Xu and Jingyi Yu and Sibei Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=paa2OU5jN8}
} | Text-to-video is a rapidly growing research area that aims to generate a semantically coherent, identity-consistent, and temporally coherent sequence of frames that accurately aligns with the input text prompt. This study focuses on zero-shot text-to-video generation with an emphasis on data and cost efficiency. To generate a semantically coherent video, exhibiting a rich portrayal of temporal semantics such as the whole process of flower blooming rather than a set of ``moving images'', we propose a novel Free-Bloom pipeline that harnesses large language models (LLMs) as the director to generate a semantically coherent prompt sequence, and pre-trained latent diffusion models (LDMs) as the animator to generate the high fidelity frames. Furthermore, to ensure temporal and identity coherence while maintaining semantic coherence, we propose a series of annotative modifications to adapt LDMs in the reverse process, including joint noise sampling, step-aware attention shift, and dual-path interpolation. Without any video data or training requirements, Free-Bloom generates vivid and high-quality videos, awe-inspiring in generating complex scenes with semantically meaningful frame sequences. In addition, Free-Bloom is naturally compatible with LDMs-based extensions. | Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator | [
"Hanzhuo Huang",
"Yufan Feng",
"Cheng Shi",
"Lan Xu",
"Jingyi Yu",
"Sibei Yang"
] | Conference | poster | 2309.14494 | [
"https://github.com/soolab/free-bloom"
] | https://huggingface.co/papers/2309.14494 | 0 | 1 | 0 | 6 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=paTESG8iSE | @inproceedings{
gerber2023kernelbased,
title={Kernel-Based Tests for Likelihood-Free Hypothesis Testing},
author={Patrik Robert Gerber and Tianze Jiang and Yury Polyanskiy and Rui Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=paTESG8iSE}
} | Given $n$ observations from two balanced classes, consider the task of labeling an additional $m$ inputs that are known to all belong to \emph{one} of the two classes.
Special cases of this problem are well-known: with complete
knowledge of class distributions ($n=\infty$) the
problem is solved optimally by the likelihood-ratio test; when
$m=1$ it corresponds to binary classification; and when $m\approx n$ it is equivalent to two-sample testing. The intermediate settings occur in the field of likelihood-free inference, where labeled samples are obtained by running forward simulations and the unlabeled sample is collected experimentally. In recent work it was discovered that there is a fundamental trade-off
between $m$ and $n$: increasing the data sample $m$ reduces the amount $n$ of training/simulation
data needed. In this work we (a) introduce a generalization where unlabeled samples
come from a mixture of the two classes -- a case often encountered in practice; (b) study the minimax sample complexity for non-parametric classes of densities under \textit{maximum mean
discrepancy} (MMD) separation; and (c) investigate the empirical performance of kernels parameterized by neural networks on two tasks: detection
of the Higgs boson and detection of planted DDPM generated images amidst
CIFAR-10 images. For both problems we confirm the existence of the theoretically predicted asymmetric $m$ vs $n$ trade-off. | Kernel-Based Tests for Likelihood-Free Hypothesis Testing | [
"Patrik Robert Gerber",
"Tianze Jiang",
"Yury Polyanskiy",
"Rui Sun"
] | Conference | poster | 2308.09043 | [
"https://github.com/sr-11/lfi"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pZ2Ww45GkL | @inproceedings{
chen2023enhancing,
title={Enhancing Robot Program Synthesis Through Environmental Context},
author={Tianyi Chen and Qidi Wang and Zhen Dong and Liwei Shen and Xin Peng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pZ2Ww45GkL}
} | Program synthesis aims to automatically generate an executable program that conforms to the given specification. Recent advancements have demonstrated that deep neural methodologies and large-scale pretrained language models are highly proficient in capturing program semantics.
For robot programming, prior works have facilitated program synthesis by incorporating global environments. However, the assumption that a comprehensive understanding of the entire environment can be acquired is often excessively hard to satisfy.
In this work, we present a framework that learns to synthesize a program by rectifying potentially erroneous code segments, with the aid of partially observed environments. To tackle the issue of inadequate attention to partial observations, we propose to first learn an environment embedding space that can implicitly evaluate the impacts of each program token based on the precondition. Furthermore, by employing a graph structure, the model can aggregate both environmental and syntactic information flow and furnish smooth program rectification guidance.
Extensive experimental evaluations and ablation studies on the partially observed VizDoom domain confirm that our method offers superior generalization capability across various tasks and greater robustness when encountering noise. | Enhancing Robot Program Synthesis Through Environmental Context | [
"Tianyi Chen",
"Qidi Wang",
"Zhen Dong",
"Liwei Shen",
"Xin Peng"
] | Conference | poster | 2312.08250 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pXtVyj4R33 | @inproceedings{
hu2023the,
title={The Best of Both Worlds in Network Population Games: Reaching Consensus and Convergence to Equilibrium},
author={Shuyue Hu and Harold Soh and Georgios Piliouras},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pXtVyj4R33}
} | Reaching consensus and convergence to equilibrium are two major challenges of multi-agent systems. Although each has attracted significant attention, relatively few studies address both challenges at the same time. This paper examines the connection between the notions of consensus and equilibrium in a multi-agent system where multiple interacting sub-populations coexist. We argue that consensus can be seen as an intricate component of intra-population stability, whereas equilibrium can be seen as encoding inter-population stability. We show that smooth fictitious play, a well-known learning model in game theory, can achieve both consensus and convergence to equilibrium in diverse multi-agent settings. Moreover, we show that the consensus formation process plays a crucial role in the seminal thorny problem of equilibrium selection in multi-agent learning. | The Best of Both Worlds in Network Population Games: Reaching Consensus and Convergence to Equilibrium | [
"Shuyue Hu",
"Harold Soh",
"Georgios Piliouras"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pWZ97hUQtQ | @inproceedings{
hoedt2023principled,
title={Principled Weight Initialisation for Input-Convex Neural Networks},
author={Pieter-Jan Hoedt and G{\"u}nter Klambauer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pWZ97hUQtQ}
} | Input-Convex Neural Networks (ICNNs) are networks that guarantee convexity in their input-output mapping.
These networks have been successfully applied for energy-based modelling, optimal transport problems and learning invariances.
The convexity of ICNNs is achieved by using non-decreasing convex activation functions and non-negative weights.
Because of these peculiarities, previous initialisation strategies, which implicitly assume centred weights, are not effective for ICNNs.
By studying signal propagation through layers with non-negative weights, we are able to derive a principled weight initialisation for ICNNs.
Concretely, we generalise signal propagation theory by removing the assumption that weights are sampled from a centred distribution.
In a set of experiments, we demonstrate that our principled initialisation effectively accelerates learning in ICNNs and leads to better generalisation.
Moreover, we find that, in contrast to common belief, ICNNs can be trained without skip-connections when initialised correctly.
Finally, we apply ICNNs to a real-world drug discovery task and show that they allow for more effective molecular latent space exploration. | Principled Weight Initialisation for Input-Convex Neural Networks | [
"Pieter-Jan Hoedt",
"Günter Klambauer"
] | Conference | poster | 2312.12474 | [
"https://github.com/ml-jku/convex-init"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pVlC0reMKq | @inproceedings{
bursztein2023retvec,
title={{RETV}ec: Resilient and Efficient Text Vectorizer},
author={Elie Bursztein and Marina Zhang and Owen Skipper Vallis and Xinyu Jia and Alexey Kurakin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pVlC0reMKq}
} | This paper describes RETVec, an efficient, resilient, and multilingual text vectorizer designed for neural-based text processing. RETVec combines a novel character encoding with an optional small embedding model to embed words into a 256-dimensional vector space. The RETVec embedding model is pre-trained using pair-wise metric learning to be robust against typos and character-level adversarial attacks. In this paper, we evaluate and compare RETVec to state-of-the-art vectorizers and word embeddings on popular model architectures and datasets. These comparisons demonstrate that RETVec leads to competitive, multilingual models that are significantly more resilient to typos and adversarial text attacks. RETVec is available under the Apache 2 license at https://github.com/google-research/retvec. | RETVec: Resilient and Efficient Text Vectorizer | [
"Elie Bursztein",
"Marina Zhang",
"Owen Skipper Vallis",
"Xinyu Jia",
"Alexey Kurakin"
] | Conference | poster | 2302.09207 | [
"https://github.com/google-research/retvec"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pTCZWSDltG | @inproceedings{
lao2023corresnerf,
title={CorresNe{RF}: Image Correspondence Priors for Neural Radiance Fields},
author={Yixing Lao and Xiaogang Xu and zhipeng cai and Xihui Liu and Hengshuang Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pTCZWSDltG}
} | Neural Radiance Fields (NeRFs) have achieved impressive results in novel view synthesis and surface reconstruction tasks. However, their performance suffers under challenging scenarios with sparse input views. We present CorresNeRF, a novel method that leverages image correspondence priors computed by off-the-shelf methods to supervise NeRF training. We design adaptive processes for augmentation and filtering to generate dense and high-quality correspondences. The correspondences are then used to regularize NeRF training via the correspondence pixel reprojection and depth loss terms. We evaluate our methods on novel view synthesis and surface reconstruction tasks with density-based and SDF-based NeRF models on different datasets. Our method outperforms previous methods in both photometric and geometric metrics. We show that this simple yet effective technique of using correspondence priors can be applied as a plug-and-play module across different NeRF variants. The project page is at https://yxlao.github.io/corres-nerf/. | CorresNeRF: Image Correspondence Priors for Neural Radiance Fields | [
"Yixing Lao",
"Xiaogang Xu",
"zhipeng cai",
"Xihui Liu",
"Hengshuang Zhao"
] | Conference | poster | 2312.06642 | [
"https://github.com/yxlao/corres-nerf"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pT8DIhsJCw | @inproceedings{
wang2023parameterefficient,
title={Parameter-efficient Tuning of Large-scale Multimodal Foundation Model},
author={Haixin Wang and Xinlong Yang and Jianlong Chang and Dian Jin and Jinan Sun and Shikun Zhang and Xiao Luo and Qi Tian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pT8DIhsJCw}
} | Driven by the progress of large-scale pre-training, parameter-efficient transfer learning has gained immense popularity across different subfields of Artificial Intelligence. The core is to adapt the model to downstream tasks with only a small set of parameters. Recently, researchers have leveraged such proven techniques in multimodal tasks and achieved promising results. However, two critical issues remain unresolved: how to further reduce the complexity with lightweight design and how to boost alignment between modalities with extremely few parameters. In this paper, we propose A gracefUl pRompt framewOrk for cRoss-modal trAnsfer (AURORA) to overcome these challenges. Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal parameter-efficient tuning, which explores the low intrinsic dimension with only 0.04% of the pre-trained model's parameters. Then, for better modality alignment, we propose the Informative Context Enhancement and Gated Query Transformation modules for settings with extremely few parameters. A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach. Our code is available at: https://github.com/WillDreamer/Aurora. | Parameter-efficient Tuning of Large-scale Multimodal Foundation Model | [
"Haixin Wang",
"Xinlong Yang",
"Jianlong Chang",
"Dian Jin",
"Jinan Sun",
"Shikun Zhang",
"Xiao Luo",
"Qi Tian"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pQvAL40Cdj | @inproceedings{
cao2023detecting,
title={Detecting Any Human-Object Interaction Relationship: Universal {HOI} Detector with Spatial Prompt Learning on Foundation Models},
author={Yichao Cao and Qingfei Tang and Xiu Su and Song Chen and Shan You and Xiaobo Lu and Chang Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pQvAL40Cdj}
} | Human-object interaction (HOI) detection aims to comprehend the intricate relationships between humans and objects, predicting <human, action, object> triplets, and serving as the foundation for numerous computer vision tasks. The complexity and diversity of human-object interactions in the real world, however, pose significant challenges for both annotation and recognition, particularly in recognizing interactions within an open world context. This study explores universal interaction recognition in an open-world setting through the use of Vision-Language (VL) foundation models and large language models (LLMs). The proposed method is dubbed UniHOI. We conduct a deep analysis of the three hierarchical features inherent in visual HOI detectors and propose a method for high-level relation extraction aimed at VL foundation models, which we call HO prompt-based learning. Our design includes an HO Prompt-guided Decoder (HOPD), which facilitates the association of high-level relation representations in the foundation model with various HO pairs within the image. Furthermore, we utilize an LLM (i.e., GPT) for interaction interpretation, generating a richer linguistic understanding for complex HOIs. For open-category interaction recognition, our method supports either of two input types: interaction phrase or interpretive sentence. Our efficient architecture design and learning methods effectively unleash the potential of the VL foundation models and LLMs, allowing UniHOI to surpass all existing methods by a substantial margin, under both supervised and zero-shot settings. The code and pre-trained weights will be made publicly available. | Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundation Models | [
"Yichao Cao",
"Qingfei Tang",
"Xiu Su",
"Song Chen",
"Shan You",
"Xiaobo Lu",
"Chang Xu"
] | Conference | poster | 2311.03799 | [
"https://github.com/caoyichao/unihoi"
] | https://huggingface.co/papers/2311.03799 | 0 | 0 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=pQF9kbM8Ea | @inproceedings{
huang2023leveraging,
title={Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection},
author={Linyan Huang and Zhiqi Li and Chonghao Sima and Wenhai Wang and Jingdong Wang and Yu Qiao and Hongyang Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pQF9kbM8Ea}
} | Current research is primarily dedicated to advancing the accuracy of camera-only 3D object detectors (apprentice) through the knowledge transferred from LiDAR- or multi-modal-based counterparts (expert). However, the presence of the domain gap between LiDAR and camera features, coupled with the inherent incompatibility in temporal fusion, significantly hinders the effectiveness of distillation-based enhancements for apprentices. Motivated by the success of uni-modal distillation, an apprentice-friendly expert model would predominantly rely on camera features, while still achieving comparable performance to multi-modal models. To this end, we introduce VCD, a framework to improve the camera-only apprentice model, including an apprentice-friendly multi-modal expert and temporal-fusion-friendly distillation supervision. The multi-modal expert VCD-E adopts an identical structure as that of the camera-only apprentice in order to alleviate the feature disparity, and leverages LiDAR input as a depth prior to reconstruct the 3D scene, achieving the performance on par with other heterogeneous multi-modal experts. Additionally, a fine-grained trajectory-based distillation module is introduced with the purpose of individually rectifying the motion misalignment for each object in the scene. With those improvements, our camera-only apprentice VCD-A sets new state-of-the-art on nuScenes with a score of 63.1% NDS. The code will be released at https://github.com/OpenDriveLab/Birds-eye-view-Perception. | Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection | [
"Linyan Huang",
"Zhiqi Li",
"Chonghao Sima",
"Wenhai Wang",
"Jingdong Wang",
"Yu Qiao",
"Hongyang Li"
] | Conference | poster | 2310.15670 | [
"https://github.com/opendrivelab/birds-eye-view-perception"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pO7d6iFdnc | @inproceedings{
huang2023essen,
title={{ESSEN}: Improving Evolution State Estimation for Temporal Networks using Von Neumann Entropy},
author={Qiyao Huang and Yingyue Zhang and Zhihong Zhang and Edwin Hancock},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pO7d6iFdnc}
} | Temporal networks are widely used as abstract graph representations for real-world dynamic systems. Indeed, recognizing the network evolution states is crucial in understanding and analyzing temporal networks. For instance, social networks exhibit the clustering and formation of tightly-knit groups or communities over time, in line with the triadic closure theory. However, the existing methods often struggle to account for the time-varying nature of these network structures, hindering their performance when applied to networks with complex evolution states. To mitigate this problem, we propose a novel framework called ESSEN, an Evolution StateS awarE Network, to measure temporal network evolution using von Neumann entropy and thermodynamic temperature. The developed framework utilizes a von Neumann entropy aware attention mechanism and network evolution state contrastive learning in the graph encoding. In addition, it employs a unique decoder, the so-called Mixture of Thermodynamic Experts (MoTE), for decoding. ESSEN extracts local and global network evolution information using thermodynamic features and adaptively recognizes the network evolution states. Moreover, the proposed method is evaluated on link prediction tasks under both transductive and inductive settings, with the corresponding results demonstrating its effectiveness compared to various state-of-the-art baselines. | ESSEN: Improving Evolution State Estimation for Temporal Networks using Von Neumann Entropy | [
"Qiyao Huang",
"Yingyue Zhang",
"Zhihong Zhang",
"Edwin Hancock"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pNtG6NAmx0 | @inproceedings{
dong2023statistical,
title={Statistical Knowledge Assessment for Large Language Models},
author={Qingxiu Dong and Jingjing Xu and Lingpeng Kong and Zhifang Sui and Lei Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pNtG6NAmx0}
} | Given varying prompts regarding a factoid question, can a large language model (LLM) reliably generate factually correct answers? Existing LLMs may generate distinct responses for different prompts. In this paper, we study the problem of quantifying knowledge contained in an LLM regarding a given set of facts. We propose KaRR, a statistical approach to assess factual knowledge for LLMs. The main idea is to estimate the ratio of LLM generating text corresponding to the answer entity given diverse prompts of the subject and the querying relation, versus it generating by random chances. Our assessment suite contains a comprehensive set of 994,123 entities and 600 relations, with 1,395,905 text aliases. We use our method to evaluate 20 LLMs of various sizes, including LLaMA, Alpaca, OPT, etc. Experiments show that our results have a strong correlation (0.43 Kendall's $\tau$) with the results of human assessment on LLMs. Our results reveal that the knowledge in LLMs with the same backbone architecture adheres to the scaling law, while tuning on instruction-following data sometimes compromises the model's capability to generate factually correct text reliably. | Statistical Knowledge Assessment for Large Language Models | [
"Qingxiu Dong",
"Jingjing Xu",
"Lingpeng Kong",
"Zhifang Sui",
"Lei Li"
] | Conference | poster | 2305.10519 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pLwYhNNnoR | @inproceedings{
huang2023prodigy,
title={{PRODIGY}: Enabling In-context Learning Over Graphs},
author={Qian Huang and Hongyu Ren and Peng Chen and Gregor Kr{\v{z}}manc and Daniel Zeng and Percy Liang and Jure Leskovec},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pLwYhNNnoR}
} | In-context learning is the ability of a pretrained model to adapt to novel and diverse downstream tasks by conditioning on prompt examples, without optimizing any parameters. While large language models have demonstrated this ability, how in-context learning could be performed over graphs is unexplored. In this paper, we develop \textbf{Pr}etraining \textbf{O}ver \textbf{D}iverse \textbf{I}n-Context \textbf{G}raph S\textbf{y}stems (PRODIGY), the first pretraining framework that enables in-context learning over graphs. The key idea of our framework is to formulate in-context learning over graphs with a novel \emph{prompt graph} representation, which connects prompt examples and queries. We then propose a graph neural network architecture over the prompt graph and a corresponding family of in-context pretraining objectives. With PRODIGY, the pretrained model can directly perform novel downstream classification tasks on unseen graphs via in-context learning. We provide empirical evidence of the effectiveness of our framework by showcasing its strong in-context learning performance on tasks involving citation networks and knowledge graphs. Our approach outperforms the in-context learning accuracy of contrastive pretraining baselines with hard-coded adaptation by 18\% on average across all setups. Moreover, it also outperforms standard finetuning with limited data by 33\% on average with in-context learning. | PRODIGY: Enabling In-context Learning Over Graphs | [
"Qian Huang",
"Hongyu Ren",
"Peng Chen",
"Gregor Kržmanc",
"Daniel Zeng",
"Percy Liang",
"Jure Leskovec"
] | Conference | spotlight | 2305.12600 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pLsPFxqn7J | @inproceedings{
bonnier2023kernelized,
title={Kernelized Cumulants: Beyond Kernel Mean Embeddings},
author={Patric Bonnier and Harald Oberhauser and Zolt{\'a}n Szab{\'o}},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pLsPFxqn7J}
} | In $\mathbb{R}^d$, it is well-known that cumulants provide an alternative to moments that can achieve the same goals with numerous benefits such as lower variance estimators. In this paper we extend cumulants to reproducing kernel Hilbert spaces (RKHS) using tools from tensor algebras and show that they are computationally tractable by a kernel trick. These kernelized cumulants provide a new set of all-purpose statistics; the classical maximum mean discrepancy and Hilbert-Schmidt independence criterion arise as the degree one objects in our general construction. We argue both theoretically and empirically (on synthetic, environmental, and traffic data analysis) that going beyond degree one has several advantages and can be achieved with the same computational complexity and minimal overhead in our experiments. | Kernelized Cumulants: Beyond Kernel Mean Embeddings | [
"Patric Bonnier",
"Harald Oberhauser",
"Zoltán Szabó"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pLcSrn8NpJ | @inproceedings{
aznag2023an,
title={An active learning framework for multi-group mean estimation},
author={Abdellah Aznag and Rachel Cummings and Adam N. Elmachtoub},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pLcSrn8NpJ}
} | We consider a fundamental problem where there are multiple groups whose data distributions are unknown, and an analyst would like to learn the mean of each group. We consider an active learning framework to sequentially collect $T$ samples with bandit feedback, each period observing a sample from a chosen group. After observing a sample, the analyst may update their estimate of the mean and variance of that group and choose the next group accordingly. The objective is to dynamically collect samples to minimize the $p$-norm of the vector of variances of our mean estimators after $T$ rounds. We propose an algorithm, Variance-UCB, that selects groups according to an upper bound on the variance estimate adjusted to the chosen $p$-norm. We show that the regret of Variance-UCB is $O(T^{-2})$ for finite $p$, and prove that no algorithm can do better. When $p$ is infinite, we recover the $O(T^{-1.5})$ obtained in \cite{activelearning, carpentier2011upper} and provide a new lower bound showing that no algorithm can do better. | An active learning framework for multi-group mean estimation | [
"Abdellah Aznag",
"Rachel Cummings",
"Adam N. Elmachtoub"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pLOWV1UGF6 | @inproceedings{
hu2023nonsmooth,
title={Non-Smooth Weakly-Convex Finite-sum Coupled Compositional Optimization},
author={Quanqi Hu and Dixian Zhu and Tianbao Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pLOWV1UGF6}
} | This paper investigates new families of compositional optimization problems, called non-smooth weakly-convex finite-sum coupled compositional optimization (NSWC FCCO). There has been a growing interest in FCCO due to its wide-ranging applications in machine learning and AI, as well as its ability to address the shortcomings of stochastic algorithms based on empirical risk minimization. However, current research on FCCO presumes that both the inner and outer functions are smooth, limiting their potential to tackle a more diverse set of problems. Our research expands on this area by examining non-smooth weakly-convex FCCO, where the outer function is weakly convex and non-decreasing, and the inner function is weakly-convex. We analyze a single-loop algorithm and establish its complexity for finding an $\epsilon$-stationary point of the Moreau envelop of the objective function. Additionally, we also extend the algorithm for solving novel non-smooth weakly-convex tri-level finite-sum coupled compositional optimization problems, which feature a nested arrangement of three functions. Lastly, we explore the applications of our algorithms in deep learning for two-way partial AUC maximization and multi-instance two-way partial AUC maximization, using empirical studies to showcase the effectiveness of the proposed algorithms. | Non-Smooth Weakly-Convex Finite-sum Coupled Compositional Optimization | [
"Quanqi Hu",
"Dixian Zhu",
"Tianbao Yang"
] | Conference | poster | 2310.03234 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pKnhUWqZTJ | @inproceedings{
char2023pidinspired,
title={{PID}-Inspired Inductive Biases for Deep Reinforcement Learning in Partially Observable Control Tasks},
author={Ian Char and Jeff Schneider},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pKnhUWqZTJ}
} | Deep reinforcement learning (RL) has shown immense potential for learning to control systems through data alone. However, one challenge deep RL faces is that the full state of the system is often not observable. When this is the case, the policy needs to leverage the history of observations to infer the current state. At the same time, differences between the training and testing environments make it critical for the policy not to overfit to the sequence of observations it sees at training time. As such, there is an important balancing act between having the history encoder be flexible enough to extract relevant information, yet be robust to changes in the environment. To strike this balance, we look to the PID controller for inspiration. We assert that the PID controller's success shows that only summing and differencing are needed to accumulate information over time for many control tasks. Following this principle, we propose two architectures for encoding history: one that directly uses PID features and another that extends these core ideas and can be used in arbitrary control tasks. When compared with prior approaches, our encoders produce policies that are often more robust and achieve better performance on a variety of tracking tasks. Going beyond tracking tasks, our policies achieve 1.7x better performance on average over previous state-of-the-art methods on a suite of locomotion control tasks. | PID-Inspired Inductive Biases for Deep Reinforcement Learning in Partially Observable Control Tasks | [
"Ian Char",
"Jeff Schneider"
] | Conference | poster | 2307.05891 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pJbEXBBN88 | @inproceedings{
melamed2023adversarial,
title={Adversarial Examples Exist in Two-Layer Re{LU} Networks for Low Dimensional Linear Subspaces},
author={Odelia Melamed and Gilad Yehudai and Gal Vardi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pJbEXBBN88}
} | Despite a great deal of research, it is still not well-understood why trained neural networks are highly vulnerable to adversarial examples.
In this work we focus on two-layer neural networks trained using data which lie on a low dimensional linear subspace.
We show that standard gradient methods lead to non-robust neural networks, namely, networks which have large gradients in directions orthogonal to the data subspace, and are susceptible to small adversarial $L_2$-perturbations in these directions.
Moreover, we show that decreasing the initialization scale of the training algorithm, or adding $L_2$ regularization, can make the trained network more robust to adversarial perturbations orthogonal to the data. | Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces | [
"Odelia Melamed",
"Gilad Yehudai",
"Gal Vardi"
] | Conference | poster | 2303.00783 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pJQu0zpKCS | @inproceedings{
wagenmaker2023optimal,
title={Optimal Exploration for Model-Based {RL} in Nonlinear Systems},
author={Andrew Wagenmaker and Guanya Shi and Kevin Jamieson},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pJQu0zpKCS}
} | Learning to control unknown nonlinear dynamical systems is a fundamental problem in reinforcement learning and control theory. A commonly applied approach is to first explore the environment (exploration), learn an accurate model of it (system identification), and then compute an optimal controller with the minimum cost on this estimated system (policy optimization). While existing work has shown that it is possible to learn a uniformly good model of the system (Mania et al., 2020), in practice, if we aim to learn a good controller with a low cost on the actual system, certain system parameters may be significantly more critical than others, and we therefore ought to focus our exploration on learning such parameters.
In this work, we consider the setting of nonlinear dynamical systems and seek to formally quantify, in such settings, (a) which parameters are most relevant to learning a good controller, and (b) how we can best explore so as to minimize uncertainty in such parameters. Inspired by recent work in linear systems (Wagenmaker et al., 2021), we show that minimizing the controller loss in nonlinear systems translates to estimating the system parameters in a particular, task-dependent metric. Motivated by this, we develop an algorithm able to efficiently explore the system to reduce uncertainty in this metric, and prove a lower bound showing that our approach learns a controller at a near-instance-optimal rate. Our algorithm relies on a general reduction from policy optimization to optimal experiment design in arbitrary systems, and may be of independent interest. We conclude with experiments demonstrating the effectiveness of our method in realistic nonlinear robotic systems. | Optimal Exploration for Model-Based RL in Nonlinear Systems | [
"Andrew Wagenmaker",
"Guanya Shi",
"Kevin Jamieson"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=pIXTMrBe7f | @inproceedings{
zhang2023what,
title={What Makes Good Examples for Visual In-Context Learning?},
author={Yuanhan Zhang and Kaiyang Zhou and Ziwei Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pIXTMrBe7f}
} | Large vision models with billions of parameters and trained on broad data have great potential in numerous downstream applications. However, these models are typically difficult to adapt due to their large parameter size and the occasional lack of access to their weights---entities able to develop large vision models often provide APIs only. In this paper, we study how to better utilize large vision models through the lens of in-context learning, a concept that has been well-known in natural language processing but has only been studied very recently in computer vision. In-context learning refers to the ability to perform inference on tasks never seen during training by simply conditioning on in-context examples (i.e., input-output pairs) without updating any internal model parameters. To demystify in-context learning in computer vision, we conduct extensive research and identify a critical problem: downstream performance is highly sensitive to the choice of visual in-context examples. To address this problem, we propose a prompt retrieval framework specifically for large vision models, allowing the selection of in-context examples to be fully automated. Concretely, we provide two implementations: (i) an unsupervised prompt retrieval method based on nearest example search using an off-the-shelf model, and (ii) a supervised prompt retrieval method, which trains a neural network to choose examples that directly maximize in-context learning performance. Neither method requires access to the internal weights of large vision models. Our results demonstrate that our methods can bring non-trivial improvements to visual in-context learning in comparison to the commonly-used random selection. Code and models will be released. | What Makes Good Examples for Visual In-Context Learning? | [
"Yuanhan Zhang",
"Kaiyang Zhou",
"Ziwei Liu"
] | Conference | poster | 2301.13670 | [
"https://github.com/zhangyuanhan-ai/visual_prompt_retrieval"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
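The unsupervised prompt retrieval described in the record above selects in-context examples by nearest-neighbor search in the feature space of an off-the-shelf model. The snippet below is only a hedged, minimal sketch of that idea, not the released implementation: the random vectors stand in for features that would normally come from a frozen vision encoder, and the names `retrieve_prompts`, `query_feat`, and `candidate_feats` are illustrative.

```python
import numpy as np

def retrieve_prompts(query_feat, candidate_feats, top_k=3):
    """Return indices of the top_k candidates most similar to the query
    under cosine similarity (unsupervised prompt retrieval sketch)."""
    q = query_feat / np.linalg.norm(query_feat)
    c = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    sims = c @ q                       # cosine similarity to every candidate
    return np.argsort(-sims)[:top_k]   # most similar candidates first

# Random stand-ins; in practice these would be features from a frozen encoder.
rng = np.random.default_rng(0)
query = rng.normal(size=512)
pool = rng.normal(size=(1000, 512))
print(retrieve_prompts(query, pool))
```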
null | https://openreview.net/forum?id=pH4Fv7C3yC | @inproceedings{
akbari2023causal,
title={Causal Effect Identification in Uncertain Causal Networks},
author={Sina Akbari and Fateme Jamshidi and Ehsan Mokhtarian and Matthew James Vowels and Jalal Etesami and Negar Kiyavash},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pH4Fv7C3yC}
} | Causal identification is at the core of the causal inference literature, where complete algorithms have been proposed to identify causal queries of interest. The validity of these algorithms hinges on the restrictive assumption of having access to a correctly specified causal structure. In this work, we study the setting where a probabilistic model of the causal structure is available. Specifically, the edges in a causal graph exist with uncertainties which may, for example, represent degree of belief from domain experts. Alternatively, the uncertainty about an edge may reflect the confidence of a particular statistical test. The question that naturally arises in this setting is: Given such a probabilistic graph and a specific causal effect of interest, what is the subgraph which has the highest plausibility and for which the causal effect is identifiable? We show that answering this question reduces to solving an NP-hard combinatorial optimization problem which we call the edge ID problem. We propose efficient algorithms to approximate this problem and evaluate them against both real-world networks and randomly generated graphs. | Causal Effect Identification in Uncertain Causal Networks | [
"Sina Akbari",
"Fateme Jamshidi",
"Ehsan Mokhtarian",
"Matthew James Vowels",
"Jalal Etesami",
"Negar Kiyavash"
] | Conference | poster | 2208.04627 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pE3yaP0Eqg | @inproceedings{
huang2023flatmatch,
title={FlatMatch: Bridging Labeled Data and Unlabeled Data with Cross-Sharpness for Semi-Supervised Learning},
author={Zhuo Huang and Li Shen and Jun Yu and Bo Han and Tongliang Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pE3yaP0Eqg}
} | Semi-Supervised Learning (SSL) has been an effective way to leverage abundant unlabeled data with extremely scarce labeled data. However, most SSL methods are based on instance-wise consistency between different data transformations. Therefore, the label guidance on labeled data is hard to propagate to unlabeled data. Consequently, the learning process on labeled data is much faster than on unlabeled data, which is likely to fall into a local minimum that does not favor unlabeled data, leading to sub-optimal generalization performance. In this paper, we propose FlatMatch, which minimizes a cross-sharpness measure to ensure consistent learning performance between the two datasets. Specifically, we increase the empirical risk on labeled data to obtain a worst-case model, which is a failure case that needs to be enhanced. Then, by leveraging the richness of unlabeled data, we penalize the prediction difference (i.e., cross-sharpness) between the worst-case model and the original model so that the learning direction is beneficial to generalization on unlabeled data. Therefore, we can calibrate the learning process without being limited by insufficient label information. As a result, the mismatched learning performance can be mitigated, further enabling the effective exploitation of unlabeled data and improving SSL performance. Through comprehensive validation, we show FlatMatch achieves state-of-the-art results in many SSL settings. | FlatMatch: Bridging Labeled Data and Unlabeled Data with Cross-Sharpness for Semi-Supervised Learning | [
"Zhuo Huang",
"Li Shen",
"Jun Yu",
"Bo Han",
"Tongliang Liu"
] | Conference | poster | 2310.16412 | [
"https://github.com/tmllab/2023_neurips_flatmatch"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=pBa70rGHlr | @inproceedings{
maiorca2023latent,
title={Latent Space Translation via Semantic Alignment},
author={Valentino Maiorca and Luca Moschella and Antonio Norelli and Marco Fumero and Francesco Locatello and Emanuele Rodol{\`a}},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=pBa70rGHlr}
} | While different neural models often exhibit latent spaces that are alike when exposed to semantically related data, this intrinsic similarity is not always immediately discernible. Towards a better understanding of this phenomenon, our work shows how representations learned from these neural modules can be translated between different pre-trained networks via simpler transformations than previously thought. An advantage of this approach is the ability to estimate these transformations using standard, well-understood algebraic procedures that have closed-form solutions. Our method directly estimates a transformation between two given latent spaces, thereby enabling effective stitching of encoders and decoders without additional training. We extensively validate the adaptability of this translation procedure in different experimental settings: across various trainings, domains, architectures (e.g., ResNet, CNN, ViT), and in multiple downstream tasks (classification, reconstruction). Notably, we show how it is possible to zero-shot stitch text encoders and vision decoders, or vice-versa, yielding surprisingly good classification performance in this multimodal setting. | Latent Space Translation via Semantic Alignment | [
"Valentino Maiorca",
"Luca Moschella",
"Antonio Norelli",
"Marco Fumero",
"Francesco Locatello",
"Emanuele Rodolà"
] | Conference | poster | 2311.00664 | [
"https://github.com/flegyas/latent-translation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=p9k5MS0JAL | @inproceedings{
jeong2023demystifying,
title={Demystifying the Optimal Performance of Multi-Class Classification},
author={Minoh Jeong and Martina Cardone and Alex Dytso},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=p9k5MS0JAL}
} | Classification is a fundamental task in science and engineering on which machine learning methods have shown outstanding performances. However, it is challenging to determine whether such methods have achieved the Bayes error rate, that is, the lowest error rate attained by any classifier. This is mainly due to the fact that the Bayes error rate is not known in general and hence, effectively estimating it is paramount. Inspired by the work by Ishida et al. (2023), we propose an estimator for the Bayes error rate of supervised multi-class classification problems. We analyze several theoretical aspects of such estimator, including its consistency, unbiasedness, convergence rate, variance, and robustness. We also propose a denoising method that reduces the noise that potentially corrupts the data labels, and we improve the robustness of the proposed estimator to outliers by incorporating the median-of-means estimator. Our analysis demonstrates the consistency, asymptotic unbiasedness, convergence rate, and robustness of the proposed estimators. Finally, we validate the effectiveness of our theoretical results via experiments both on synthetic data under various noise settings and on real data. | Demystifying the Optimal Performance of Multi-Class Classification | [
"Minoh Jeong",
"Martina Cardone",
"Alex Dytso"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=p8lowHbuv8 | @inproceedings{
yan2023from,
title={From Trainable Negative Depth to Edge Heterophily in Graphs},
author={Yuchen Yan and Yuzhong Chen and Huiyuan Chen and Minghua Xu and Mahashweta Das and Hao Yang and Hanghang Tong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=p8lowHbuv8}
} | Finding the proper depth $d$ of a graph convolutional network (GCN) that provides strong representation ability has drawn significant attention, yet nonetheless largely remains an open problem for the graph learning community. Although noteworthy progress has been made, the depth or the number of layers of a corresponding GCN is realized by a series of graph convolution operations, which naturally makes $d$ a positive integer ($d \in \mathbb{N}+$). An interesting question is whether breaking the constraint of $\mathbb{N}+$ by making $d$ a real number ($d \in \mathbb{R}$) can bring new insights into graph learning mechanisms. In this work, by redefining GCN's depth $d$ as a trainable parameter continuously adjustable within $(-\infty,+\infty)$, we open a new door of controlling its signal processing capability to model graph homophily/heterophily (nodes with similar/dissimilar labels/attributes tend to be inter-connected). A simple and powerful GCN model TEDGCN, is proposed to retain the simplicity of GCN and meanwhile automatically search for the optimal $d$ without the prior knowledge regarding whether the input graph is homophilic or heterophilic. Negative-valued $d$ intrinsically enables high-pass frequency filtering functionality via augmented topology for graph heterophily. Extensive experiments demonstrate the superiority of TEDGCN on node classification tasks for a variety of homophilic and heterophilic graphs. | From Trainable Negative Depth to Edge Heterophily in Graphs | [
"Yuchen Yan",
"Yuzhong Chen",
"Huiyuan Chen",
"Minghua Xu",
"Mahashweta Das",
"Hao Yang",
"Hanghang Tong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=p8gTWkFIvx | @inproceedings{
lai2023modalityindependent,
title={Modality-Independent Teachers Meet Weakly-Supervised Audio-Visual Event Parser},
author={Yung-Hsuan Lai and Yen-Chun Chen and Yu-Chiang Frank Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=p8gTWkFIvx}
} | Audio-visual learning has been a major pillar of multi-modal machine learning, where the community mostly focused on its $\textit{modality-aligned}$ setting, $\textit{i.e.}$, the audio and visual modality are $\textit{both}$ assumed to signal the prediction target.
With the Look, Listen, and Parse dataset (LLP), we investigate the under-explored $\textit{unaligned}$ setting, where the goal is to recognize audio and visual events in a video with only weak labels observed.
Such weak video-level labels only tell what events happen without knowing the modality they are perceived (audio, visual, or both).
To enhance learning in this challenging setting, we incorporate large-scale contrastively pre-trained models as the modality teachers. A simple, effective, and generic method, termed $\textbf{V}$isual-$\textbf{A}$udio $\textbf{L}$abel Elab$\textbf{or}$ation (VALOR), is introduced to harvest modality labels for the training events.
Empirical studies show that the harvested labels significantly improve an attentional baseline by $\textbf{8.0}$ in average F-score (Type@AV).
Surprisingly, we found that modality-independent teachers outperform their modality-fused counterparts since they are noise-proof from the other potentially unaligned modality.
Moreover, our best model achieves the new state-of-the-art on all metrics of LLP by a substantial margin ($\textbf{+5.4}$ F-score for Type@AV). VALOR is further generalized to Audio-Visual Event Localization and achieves the new state-of-the-art as well. | Modality-Independent Teachers Meet Weakly-Supervised Audio-Visual Event Parser | [
"Yung-Hsuan Lai",
"Yen-Chun Chen",
"Yu-Chiang Frank Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=p53QDxSIc5 | @inproceedings{
pourreza2023dinsql,
title={{DIN}-{SQL}: Decomposed In-Context Learning of Text-to-{SQL} with Self-Correction},
author={Mohammadreza Pourreza and Davood Rafiei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=p53QDxSIc5}
} | There is currently a significant gap between the performance of fine-tuned models and prompting approaches using Large Language Models (LLMs) on the challenging task of text-to-SQL, as evaluated on datasets such as Spider. To improve the performance of LLMs in the reasoning process, we study how decomposing the task into smaller sub-tasks can be effective. In particular, we show that breaking down the generation problem into sub-problems and feeding the solutions of those sub-problems into LLMs can be an effective approach for significantly improving their performance. Our experiments with three LLMs show that this approach consistently improves their simple few-shot performance by roughly 10%, pushing the accuracy of LLMs towards SOTA or surpassing it. On the holdout test set of Spider, the SOTA, in terms of execution accuracy, was 79.9 and the new SOTA at the time of this writing using our approach is 85.3. Our approach with in-context learning beats many heavily fine-tuned models by at least 5%. Additionally, when evaluated on the BIRD benchmark, our approach achieved an execution accuracy of 55.9%, setting a new SOTA on its holdout test set. | DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction | [
"Mohammadreza Pourreza",
"Davood Rafiei"
] | Conference | poster | 2304.11015 | [
"https://github.com/mohammadrezapourreza/few-shot-nl2sql-with-prompting"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
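The decomposition idea in the DIN-SQL record above (schema linking, query classification, SQL generation, then self-correction) can be sketched as a simple prompt pipeline. This is an assumption-laden illustration, not the paper's code: `ask_llm` is a hypothetical stand-in for whatever completion API is used, and all prompt wording is invented for illustration.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call (replace with a real client)."""
    return "SELECT ..."  # placeholder response

def text_to_sql(question: str, schema: str) -> str:
    # 1) Schema linking: identify tables/columns relevant to the question.
    links = ask_llm(f"Schema:\n{schema}\nQuestion: {question}\n"
                    "List the relevant tables and columns.")
    # 2) Classify the query so an easy/nested/complex generation prompt can be chosen.
    difficulty = ask_llm(f"Question: {question}\nLinks: {links}\n"
                         "Is this query easy, nested, or complex?")
    # 3) Generate SQL conditioned on the sub-problem outputs.
    sql = ask_llm(f"Schema:\n{schema}\nLinks: {links}\nDifficulty: {difficulty}\n"
                  f"Write SQL for: {question}")
    # 4) Self-correction pass over the model's own draft.
    return ask_llm(f"Schema:\n{schema}\nQuestion: {question}\nDraft SQL: {sql}\n"
                   "Fix any mistakes and return only the corrected SQL.")

print(text_to_sql("How many papers were accepted?", "papers(id, title, status)"))
```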
null | https://openreview.net/forum?id=p4SjKPchJy | @inproceedings{
hsieh2023riemannian,
title={Riemannian stochastic optimization methods avoid strict saddle points},
author={Ya-Ping Hsieh and Mohammad Reza Karimi Jaghargh and Andreas Krause and Panayotis Mertikopoulos},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=p4SjKPchJy}
} | Many modern machine learning applications - from online principal component analysis to covariance matrix identification and dictionary learning - can be formulated as minimization problems on Riemannian manifolds, typically solved with a Riemannian stochastic gradient method (or some variant thereof). However, in many cases of interest, the resulting minimization problem is _not_ geodesically convex, so the convergence of the chosen solver to a desirable solution - i.e., a local minimizer - is by no means guaranteed. In this paper, we study precisely this question, that is, whether stochastic Riemannian optimization algorithms are guaranteed to avoid saddle points with probability $1$. For generality, we study a family of retraction-based methods which, in addition to having a potentially much lower per-iteration cost relative to Riemannian gradient descent, include other widely used algorithms, such as natural policy gradient methods and mirror descent in ordinary convex spaces. In this general setting, we show that, under mild assumptions for the ambient manifold and the oracle providing gradient information, the policies under study avoid strict saddle points / submanifolds with probability $1$, from any initial condition. This result provides an important sanity check for the use of gradient methods on manifolds as it shows that, almost always, the end state of a stochastic Riemannian algorithm can only be a local minimizer. | Riemannian stochastic optimization methods avoid strict saddle points | [
"Ya-Ping Hsieh",
"Mohammad Reza Karimi Jaghargh",
"Andreas Krause",
"Panayotis Mertikopoulos"
] | Conference | poster | 2311.02374 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=p4PckNQR8k | @inproceedings{
hanna2023how,
title={How does {GPT}-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model},
author={Michael Hanna and Ollie Liu and Alexandre Variengien},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=p4PckNQR8k}
} | Pre-trained language models can be surprisingly adept at tasks they were not explicitly trained on, but how they implement these capabilities is poorly understood. In this paper, we investigate the basic mathematical abilities often acquired by pre-trained language models. Concretely, we use mechanistic interpretability techniques to explain the (limited) mathematical abilities of GPT-2 small. As a case study, we examine its ability to take in sentences such as "The war lasted from the year 1732 to the year 17", and predict valid two-digit end years (years > 32). We first identify a circuit, a small subset of GPT-2 small's computational graph that computes this task's output. Then, we explain the role of each circuit component, showing that GPT-2 small's final multi-layer perceptrons boost the probability of end years greater than the start year. Finally, we find related tasks that activate our circuit. Our results suggest that GPT-2 small computes greater-than using a complex but general mechanism that activates across diverse contexts. | How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model | [
"Michael Hanna",
"Ollie Liu",
"Alexandre Variengien"
] | Conference | poster | 2305.00586 | [
"https://github.com/hannamw/gpt2-greater-than"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=p40XRfBX96 | @inproceedings{
sun2023principledriven,
title={Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision},
author={Zhiqing Sun and Yikang Shen and Qinhong Zhou and Hongxin Zhang and Zhenfang Chen and David Daniel Cox and Yiming Yang and Chuang Gan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=p40XRfBX96}
} | Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback (RLHF) to align the output of large language models (LLMs) with human intentions, ensuring they are helpful, ethical, and reliable. However, this dependence can significantly constrain the true potential of AI-assistant agents due to the high cost of obtaining human supervision and the related issues of quality, reliability, diversity, self-consistency, and undesirable biases. To address these challenges, we propose a novel approach called SELF-ALIGN, which combines principle-driven reasoning and the generative power of LLMs for the self-alignment of AI agents with minimal human supervision. Our approach encompasses four stages: first, we use an LLM to generate synthetic prompts, and a topic-guided method to augment the prompt diversity; second, we use a small set of human-written principles for AI models to follow, and guide the LLM through in-context learning from demonstrations (of principles application) to produce helpful, ethical, and reliable responses to users' queries; third, we fine-tune the original LLM with the high-quality self-aligned responses so that the resulting model can generate desirable responses for each query directly without the principle set and the demonstrations anymore; and finally, we offer a refinement step to address the issues of overly-brief or indirect responses. Applying SELF-ALIGN to the LLaMA-65b base language model, we develop an AI assistant named Dromedary. With fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning), Dromedary significantly surpasses the performance of several state-of-the-art AI systems, including Text-Davinci-003 and Alpaca, on benchmark datasets with various settings. | Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision | [
"Zhiqing Sun",
"Yikang Shen",
"Qinhong Zhou",
"Hongxin Zhang",
"Zhenfang Chen",
"David Daniel Cox",
"Yiming Yang",
"Chuang Gan"
] | Conference | spotlight | 2305.03047 | [
"https://github.com/IBM/Dromedary"
] | https://huggingface.co/papers/2305.03047 | 2 | 1 | 5 | 8 | 1 | [] | [
"zhiqings/dromedary-65b-verbose-clone-v0"
] | [
"osanseviero/test_arxiv2"
] |
null | https://openreview.net/forum?id=p1gzxzJ4Y5 | @inproceedings{
brahmanage2023flowpg,
title={Flow{PG}: Action-constrained Policy Gradient with Normalizing Flows},
author={Janaka Chathuranga Brahmanage and Jiajing Ling and Akshat Kumar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=p1gzxzJ4Y5}
} | Action-constrained reinforcement learning (ACRL) is a popular approach for solving safety-critical and resource-allocation related decision making problems. A major challenge in ACRL is to ensure that the agent takes a valid action satisfying the constraints at each RL step. The commonly used approach of adding a projection layer on top of the policy network requires solving an optimization program, which can result in longer training time, slow convergence, and the zero-gradient problem. To address this, first we use a normalizing flow model to learn an invertible, differentiable mapping between the feasible action space and the support of a simple distribution on a latent variable, such as a Gaussian. Second, learning the flow model requires sampling from the feasible action space, which is also challenging. We develop multiple methods, based on Hamiltonian Monte-Carlo and probabilistic sentential decision diagrams, for such action sampling for convex and non-convex constraints. Third, we integrate the learned normalizing flow with the DDPG algorithm. By design, a well-trained normalizing flow will transform the policy output into a valid action without requiring an optimization solver. Empirically, our approach results in significantly fewer constraint violations (up to an order of magnitude for several instances) and is multiple times faster on a variety of continuous control tasks. | FlowPG: Action-constrained Policy Gradient with Normalizing Flows | [
"Janaka Chathuranga Brahmanage",
"Jiajing Ling",
"Akshat Kumar"
] | Conference | poster | 2402.05149 | [
"https://github.com/rlr-smu/flow-pg"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oyV9FslE3j | @inproceedings{
zhou2023temperature,
title={Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training},
author={Yefan Zhou and Tianyu Pang and Keqin Liu and charles h martin and Michael W. Mahoney and Yaoqing Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oyV9FslE3j}
} | Regularization in modern machine learning is crucial, and it can take various forms in algorithmic design: training set, model family, error function, regularization terms, and optimizations.
In particular, the learning rate, which can be interpreted as a temperature-like parameter within the statistical mechanics of learning, plays a crucial role in neural network training.
Indeed, many widely adopted training strategies basically just define the decay of the learning rate over time.
This process can be interpreted as decreasing a temperature, using either a global learning rate (for the entire model) or a learning rate that varies for each parameter.
This paper proposes TempBalance, a straightforward yet effective layer-wise learning rate method. TempBalance is based on Heavy-Tailed Self-Regularization (HT-SR) Theory, an approach which characterizes the implicit self-regularization of different layers in trained models.
We demonstrate the efficacy of using HT-SR-motivated metrics to guide the scheduling and balancing of temperature across all network layers during model training, resulting in improved performance during testing.
We implement TempBalance on CIFAR10, CIFAR100, SVHN, and TinyImageNet datasets using ResNets, VGGs and WideResNets with various depths and widths.
Our results show that TempBalance significantly outperforms ordinary SGD and carefully-tuned spectral norm regularization.
We also show that TempBalance outperforms a number of state-of-the-art optimizers and learning rate schedulers. | Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training | [
"Yefan Zhou",
"Tianyu Pang",
"Keqin Liu",
"charles h martin",
"Michael W. Mahoney",
"Yaoqing Yang"
] | Conference | spotlight | 2312.00359 | [
"https://github.com/yefanzhou/tempbalance"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
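TempBalance, per the record above, sets layer-wise learning rates guided by HT-SR metrics of each layer's weight spectrum. The sketch below only illustrates the general shape of such a scheme and is not the authors' algorithm: it uses a crude Hill estimate of the spectral tail exponent and an assumed scaling rule; `base_lr` and the clamping constants are illustrative assumptions.

```python
import torch

def hill_alpha(weight: torch.Tensor, k_frac: float = 0.5) -> float:
    """Crude Hill estimate of the power-law exponent of the singular-value tail."""
    s = torch.linalg.svdvals(weight.detach().flatten(1).float())
    s, _ = torch.sort(s, descending=True)
    k = max(2, int(k_frac * s.numel()))
    tail = s[:k]
    return 1.0 + k / (torch.log(tail / tail[-1]).sum().item() + 1e-12)

def layerwise_param_groups(model: torch.nn.Module, base_lr: float = 0.1):
    """Give each >=2-D parameter its own learning rate derived from hill_alpha."""
    groups = []
    for p in model.parameters():
        if p.dim() >= 2:
            alpha = hill_alpha(p)
            # Assumed rule: scale the lr with the tail exponent (clamped for stability).
            lr = base_lr * float(min(2.0, max(0.5, alpha / 4.0)))
        else:
            lr = base_lr
        groups.append({"params": [p], "lr": lr})
    return groups

model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
optimizer = torch.optim.SGD(layerwise_param_groups(model), lr=0.1, momentum=0.9)
```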
null | https://openreview.net/forum?id=oyFyOPZUCs | @inproceedings{
ranasinghe2023languagebased,
title={Language-based Action Concept Spaces Improve Video Self-Supervised Learning},
author={Kanchana Ranasinghe and Michael S Ryoo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oyFyOPZUCs}
} | Recent contrastive language-image pre-training has led to learning highly transferable and robust image representations. However, adapting these models to the video domain with minimal supervision remains an open problem. We explore a simple step in that direction, using language-tied self-supervised learning to adapt an image CLIP model to the video domain. A backbone modified for temporal modeling is trained under self-distillation settings with training objectives operating in an action concept space. Feature vectors of various action concepts extracted from a language encoder using relevant textual prompts construct this space. A large language model aware of actions and their attributes generates the relevant textual prompts.
We introduce two training objectives, concept distillation and concept alignment, that retain the generality of the original representations while enforcing relations between actions and their attributes. Our approach improves zero-shot and linear probing performance on three action recognition benchmarks. | Language-based Action Concept Spaces Improve Video Self-Supervised Learning | [
"Kanchana Ranasinghe",
"Michael S Ryoo"
] | Conference | poster | 2307.10922 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ox7aynitoW | @inproceedings{
menet2023mimonets,
title={{MIMON}ets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition},
author={Nicolas Menet and Michael Hersche and Geethan Karunaratne and Luca Benini and Abu Sebastian and Abbas Rahimi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ox7aynitoW}
} | With the advent of deep learning, progressively larger neural networks have been designed to solve complex tasks. We take advantage of these capacity-rich models to lower the cost of inference by exploiting computation in superposition. To reduce the computational burden per input, we propose Multiple-Input-Multiple-Output Neural Networks (MIMONets) capable of handling many inputs at once. MIMONets augment various deep neural network architectures with variable binding mechanisms to represent an arbitrary number of inputs in a compositional data structure via fixed-width distributed representations. Accordingly, MIMONets adapt nonlinear neural transformations to process the data structure holistically, leading to a speedup nearly proportional to the number of superposed input items in the data structure. After processing in superposition, an unbinding mechanism recovers each transformed input of interest. MIMONets also provide a dynamic trade-off between accuracy and throughput by an instantaneous on-demand switching between a set of accuracy-throughput operating points, yet within a single set of fixed parameters. We apply the concept of MIMONets to both CNN and Transformer architectures resulting in MIMOConv and MIMOFormer, respectively. Empirical evaluations show that MIMOConv achieves $\approx 2$–$4\times$ speedup at an accuracy delta within [+0.68, -3.18]% compared to WideResNet CNNs on CIFAR10 and CIFAR100.
Similarly, MIMOFormer can handle $2$–$4$ inputs at once while maintaining a high average accuracy within a [-1.07, -3.43]% delta on the long range arena benchmark.
Finally, we provide mathematical bounds on the interference between superposition channels in MIMOFormer. Our code is available at https://github.com/IBM/multiple-input-multiple-output-nets. | MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition | [
"Nicolas Menet",
"Michael Hersche",
"Geethan Karunaratne",
"Luca Benini",
"Abu Sebastian",
"Abbas Rahimi"
] | Conference | poster | 2312.02829 | [
"https://github.com/ibm/multiple-input-multiple-output-nets"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ouLe91yibj | @inproceedings{
zhang2023on,
title={On the Properties of Kullback-Leibler Divergence Between Multivariate Gaussian Distributions},
author={Yufeng Zhang and Jialu Pan and Kenli Li and Wanwei Liu and Zhenbang Chen and Xinwang Liu and J Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ouLe91yibj}
} | Kullback-Leibler (KL) divergence is one of the most important measures to calculate the difference between probability distributions. In this paper, we theoretically study several properties of KL divergence between multivariate Gaussian distributions. Firstly, for any two $n$-dimensional Gaussian distributions $\mathcal{N}_1$ and $\mathcal{N}_2$, we prove that when $KL(\mathcal{N}_2||\mathcal{N}_1)\leq \varepsilon\ (\varepsilon>0)$ the supremum of $KL(\mathcal{N}_1||\mathcal{N}_2)$ is $(1/2)\left((-W_{0}(-e^{-(1+2\varepsilon)}))^{-1}+\log(-W_{0}(-e^{-(1+2\varepsilon)})) -1 \right)$, where $W_0$ is the principal branch of Lambert $W$ function. For small $\varepsilon$, the supremum is $\varepsilon + 2\varepsilon^{1.5} + O(\varepsilon^2)$. This quantifies the approximate symmetry of small KL divergence between Gaussian distributions. We further derive the infimum of $KL(\mathcal{N}_1||\mathcal{N}_2)$ when $KL(\mathcal{N}_2||\mathcal{N}_1)\geq M\ (M>0)$. We give the conditions when the supremum and infimum can be attained. Secondly, for any three $n$-dimensional Gaussian distributions $\mathcal{N}_1$, $\mathcal{N}_2$, and $\mathcal{N}_3$, we theoretically show that an upper bound of $KL(\mathcal{N}_1||\mathcal{N}_3)$ is $3\varepsilon_1+3\varepsilon_2+2\sqrt{\varepsilon_1\varepsilon_2}+o(\varepsilon_1)+o(\varepsilon_2)$ when $KL(\mathcal{N}_1||\mathcal{N}_2)\leq \varepsilon_1$ and $KL(\mathcal{N}_2||\mathcal{N}_3)\leq \varepsilon_2$ ($\varepsilon_1,\varepsilon_2\ge 0$). This reveals that KL divergence between Gaussian distributions follows a relaxed triangle inequality. Note that, all these bounds in the theorems presented in this work are independent of the dimension $n$. Finally, we discuss several applications of our theories in deep learning, reinforcement learning, and sample complexity research. | On the Properties of Kullback-Leibler Divergence Between Multivariate Gaussian Distributions | [
"Yufeng Zhang",
"Jialu Pan",
"Kenli Li",
"Wanwei Liu",
"Zhenbang Chen",
"Xinwang Liu",
"J Wang"
] | Conference | poster | 2102.05485 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
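The closed-form supremum quoted in the KL-divergence record above depends only on the principal branch of the Lambert W function, so it can be evaluated directly. The snippet below is a small numerical illustration (not code from the paper) that prints the closed form next to the small-$\varepsilon$ approximation stated in the abstract, for comparison.

```python
import numpy as np
from scipy.special import lambertw

def kl_sup(eps: float) -> float:
    """Closed-form supremum of KL(N1||N2) given KL(N2||N1) <= eps (from the abstract)."""
    w = np.real(lambertw(-np.exp(-(1.0 + 2.0 * eps))))  # principal branch W_0
    return 0.5 * (1.0 / (-w) + np.log(-w) - 1.0)

for eps in (1e-4, 1e-3, 1e-2):
    approx = eps + 2.0 * eps ** 1.5   # small-eps approximation quoted above
    print(f"eps={eps:g}  closed form={kl_sup(eps):.6g}  approximation={approx:.6g}")
```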
null | https://openreview.net/forum?id=oss2jXD1Zs | @inproceedings{
huang2023linear,
title={Linear Time Algorithms for k-means with Multi-Swap Local Search},
author={Junyu Huang and Qilong Feng and Ziyun Huang and Jinhui Xu and Jianxin Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oss2jXD1Zs}
} | Local search methods have been widely used to solve clustering problems. In practice, local search algorithms for clustering problems mainly adopt the single-swap strategy, which enables them to handle large-scale datasets and achieve linear running time in the data size. However, compared with multi-swap local search algorithms, there is a considerable gap in the approximation ratios of the single-swap local search algorithms. Although the current multi-swap local search algorithms provide small constant approximation ratios, the proposed algorithms tend to have large polynomial running time, which cannot be used to handle large-scale datasets. In this paper, we propose a multi-swap local search algorithm for the $k$-means problem with linear running time in the data size. Given a swap size $t$, our proposed algorithm can achieve a $(50(1+\frac{1}{t})+\epsilon)$-approximation, which improves on the current best result of 509 (ICML 2019) with linear running time in the data size. Our proposed method, compared with previous multi-swap local search algorithms, is the first one to achieve linear running time in the data size. To obtain a more practical algorithm for the problem with better clustering quality and running time, we propose a sampling-based method which accelerates the process of clustering cost update during swaps. Besides, a recombination mechanism is proposed to find potentially better solutions. Empirical experiments show that our proposed algorithms achieve better performance compared with a branch-and-bound solver (NeurIPS 2022) and other existing state-of-the-art local search algorithms on both small and large datasets. | Linear Time Algorithms for k-means with Multi-Swap Local Search | [
"Junyu Huang",
"Qilong Feng",
"Ziyun Huang",
"Jinhui Xu",
"Jianxin Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
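As a rough illustration of the multi-swap local-search idea discussed in the k-means record above, the sketch below randomly swaps t current centers with t sampled data points and keeps any swap that lowers the cost. It is a naive baseline under stated assumptions, not the paper's linear-time algorithm; the sampling-based cost updates and the recombination mechanism are omitted.

```python
import numpy as np

def kmeans_cost(X, centers):
    """Sum over points of the squared distance to the nearest center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).sum()

def multi_swap_local_search(X, k, t=2, n_rounds=300, seed=0):
    """Randomized multi-swap local search: propose swapping t centers with t
    sampled data points and keep the swap whenever it lowers the cost."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    cost = kmeans_cost(X, centers)
    for _ in range(n_rounds):
        out_idx = rng.choice(k, size=t, replace=False)          # centers to drop
        in_pts = X[rng.choice(len(X), size=t, replace=False)]   # points to add
        cand = centers.copy()
        cand[out_idx] = in_pts
        cand_cost = kmeans_cost(X, cand)
        if cand_cost < cost:
            centers, cost = cand, cand_cost
    return centers, cost

X = np.random.default_rng(1).normal(size=(500, 2))
centers, cost = multi_swap_local_search(X, k=5)
print(f"final cost: {cost:.2f}")
```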
null | https://openreview.net/forum?id=os2BdbiGwX | @inproceedings{
pham2023model,
title={Model and Feature Diversity for Bayesian Neural Networks in Mutual Learning},
author={Cuong Pham and Cuong C. Nguyen and Trung Le and Dinh Phung and Gustavo Carneiro and Thanh-Toan Do},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=os2BdbiGwX}
} | Bayesian Neural Networks (BNNs) offer probability distributions for model parameters, enabling uncertainty quantification in predictions. However, they often underperform compared to deterministic neural networks. Utilizing mutual learning can effectively enhance the performance of peer BNNs. In this paper, we propose a novel approach to improve BNNs performance through deep mutual learning. The proposed approaches aim to increase diversity in both network parameter distributions and feature distributions, promoting peer networks to acquire distinct features that capture different characteristics of the input, which enhances the effectiveness of mutual learning. Experimental results demonstrate significant improvements in the classification accuracy, negative log-likelihood, and expected calibration error when compared to traditional mutual learning for BNNs. | Model and Feature Diversity for Bayesian Neural Networks in Mutual Learning | [
"Cuong Pham",
"Cuong C. Nguyen",
"Trung Le",
"Dinh Phung",
"Gustavo Carneiro",
"Thanh-Toan Do"
] | Conference | poster | 2407.02721 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=orh4e0AO9R | @inproceedings{
liu2023bypassing,
title={Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits},
author={Haolin Liu and Chen-Yu Wei and Julian Zimmert},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=orh4e0AO9R}
} | We consider the adversarial linear contextual bandit problem,
where the loss vectors are selected fully adversarially and the per-round action set (i.e. the context) is drawn from a fixed distribution. Existing methods for this problem either require access to a simulator to generate free i.i.d. contexts, achieve a sub-optimal regret no better than $\tilde{\mathcal{O}}(T^{\frac{5}{6}})$, or are computationally inefficient.
We greatly improve these results by achieving a regret of $\tilde{\mathcal{O}}(\sqrt{T})$ without a simulator, while maintaining computational efficiency when the action set in each round is small.
In the special case of sleeping bandits with adversarial loss and stochastic arm availability, our result answers affirmatively the open question by [SGV20] on whether there exists a polynomial-time algorithm with $poly(d)\sqrt{T}$ regret. Our approach naturally handles the case where the loss is linear up to an additive misspecification error, and our regret shows near-optimal dependence on the magnitude of the error. | Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits | [
"Haolin Liu",
"Chen-Yu Wei",
"Julian Zimmert"
] | Conference | poster | 2309.00814 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oqDSDKLd3S | @inproceedings{
wang2023sampleconditioned,
title={Sample-Conditioned Hypothesis Stability Sharpens Information-Theoretic Generalization Bounds},
author={Ziqiao Wang and Yongyi Mao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oqDSDKLd3S}
} | We present new information-theoretic generalization guarantees through a novel construction of the "neighboring-hypothesis" matrix and a new family of stability notions termed sample-conditioned hypothesis (SCH) stability. Our approach yields sharper bounds that improve upon previous information-theoretic bounds in various learning scenarios. Notably, these bounds address the limitations of existing information-theoretic bounds in the context of stochastic convex optimization (SCO) problems, as explored in the recent work by Haghifam et al. (2023). | Sample-Conditioned Hypothesis Stability Sharpens Information-Theoretic Generalization Bounds | [
"Ziqiao Wang",
"Yongyi Mao"
] | Conference | poster | 2310.20102 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ooXpTZYwXa | @inproceedings{
fang2023explore,
title={Explore In-Context Learning for 3D Point Cloud Understanding},
author={Zhongbin Fang and Xiangtai Li and Xia Li and Joachim M. Buhmann and Chen Change Loy and Mengyuan Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ooXpTZYwXa}
} | With the rise of large-scale models trained on broad data, in-context learning has become a new learning paradigm that has demonstrated significant potential in natural language processing and computer vision tasks. Meanwhile, in-context learning is still largely unexplored in the 3D point cloud domain. Although masked modeling has been successfully applied for in-context learning in 2D vision, directly extending it to 3D point clouds remains a formidable challenge. In the case of point clouds, the tokens themselves are the point cloud positions (coordinates) that are masked during inference. Moreover, position embedding in previous works may inadvertently introduce information leakage. To address these challenges, we introduce a novel framework, named Point-In-Context, designed especially for in-context learning in 3D point clouds, where both inputs and outputs are modeled as coordinates for each task. Additionally, we propose the Joint Sampling module, carefully designed to work in tandem with the general point sampling operator, effectively resolving the aforementioned technical issues. We conduct extensive experiments to validate the versatility and adaptability of our proposed methods in handling a wide range of tasks. Furthermore, with a more effective prompt selection strategy, our framework surpasses the results of individually trained models. | Explore In-Context Learning for 3D Point Cloud Understanding | [
"Zhongbin Fang",
"Xiangtai Li",
"Xia Li",
"Joachim M. Buhmann",
"Chen Change Loy",
"Mengyuan Liu"
] | Conference | spotlight | 2306.08659 | [
"https://github.com/fanglaosi/point-in-context"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oi45JlpSOT | @inproceedings{
wang2023multifidelity,
title={Multi-Fidelity Multi-Armed Bandits Revisited},
author={Xuchuang Wang and Qingyun Wu and Wei Chen and John C.S. Lui},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oi45JlpSOT}
} | We study the multi-fidelity multi-armed bandit ($\texttt{MF-MAB}$), an extension of the canonical multi-armed bandit (MAB) problem.
$\texttt{MF-MAB}$ allows each arm to be pulled with different costs (fidelities) and observation accuracy.
We study both the best arm identification with fixed confidence ($\texttt{BAI}$) and the regret minimization objectives.
For $\texttt{BAI}$, we present (a) a cost complexity lower bound, (b) an algorithmic framework with two alternative fidelity selection procedures,
and (c) both procedures' cost complexity upper bounds.
From both cost complexity bounds of $\texttt{MF-MAB}$,
one can recover the standard sample complexity bounds of the classic (single-fidelity) MAB.
For regret minimization of $\texttt{MF-MAB}$, we propose a new regret definition, prove its problem-independent regret lower bound $\Omega(K^{1/3}\Lambda^{2/3})$ and problem-dependent lower bound $\Omega(K\log \Lambda)$, where $K$ is the number of arms and $\Lambda$ is the decision budget in terms of cost, and devise an elimination-based algorithm whose worst-cost regret upper bound matches its corresponding lower bound up to some logarithmic terms, and whose problem-dependent bound matches its corresponding lower bound in terms of $\Lambda$. | Multi-Fidelity Multi-Armed Bandits Revisited | [
"Xuchuang Wang",
"Qingyun Wu",
"Wei Chen",
"John C.S. Lui"
] | Conference | poster | 2306.07761 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ohKbQp0jIY | @inproceedings{
yu2023successorpredecessor,
title={Successor-Predecessor Intrinsic Exploration},
author={Changmin Yu and Neil Burgess and Maneesh Sahani and Samuel Gershman},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ohKbQp0jIY}
} | Exploration is essential in reinforcement learning, particularly in environments where external rewards are sparse. Here we focus on exploration with intrinsic rewards, where the agent transiently augments the external rewards with self-generated intrinsic rewards. Although the study of intrinsic rewards has a long history, existing methods focus on composing the intrinsic reward based on measures of future prospects of states, ignoring the information contained in the retrospective structure of transition sequences. Here we argue that the agent can utilise retrospective information to generate explorative behaviour with structure-awareness, facilitating efficient exploration based on global instead of local information. We propose Successor-Predecessor Intrinsic Exploration (SPIE), an exploration algorithm based on a novel intrinsic reward combining prospective and retrospective information. We show that SPIE yields more efficient and ethologically plausible exploratory behaviour in environments with sparse rewards and bottleneck states than competing methods. We also implement SPIE in deep reinforcement learning agents, and show that the resulting agent achieves stronger empirical performance than existing methods on sparse-reward Atari games. | Successor-Predecessor Intrinsic Exploration | [
"Changmin Yu",
"Neil Burgess",
"Maneesh Sahani",
"Samuel Gershman"
] | Conference | poster | 2305.15277 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ogPBujRhiN | @inproceedings{
chen2023disentangling,
title={Disentangling Cognitive Diagnosis with Limited Exercise Labels},
author={Xiangzhi Chen and Le Wu and Fei Liu and Lei Chen and Kun Zhang and Richang Hong and Meng Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ogPBujRhiN}
} | Cognitive diagnosis is an important task in intelligence education, which aims at measuring students’ proficiency in specific knowledge concepts. Given a fully labeled exercise-concept matrix, most existing models focused on mining students' response records for cognitive diagnosis. Despite their success, due to the huge cost of labeling exercises, a more practical scenario is that limited exercises are labeled with concepts. Performing cognitive diagnosis with limited exercise labels is under-explored and remains pretty much open.
In this paper, we propose Disentanglement based Cognitive Diagnosis (DCD) to address the challenges of limited exercise labels. Specifically, we utilize students' response records to model student proficiency, exercise difficulty and exercise label distribution.
Then, we introduce two novel modules - group-based disentanglement and limited-labeled alignment modules - to disentangle the factors relevant to concepts and align them with real limited labels.
Particularly, we introduce the tree-like structure of concepts with negligible cost for group-based disentangling, as concepts of different levels exhibit different independence relationships.
Extensive experiments on widely used benchmarks demonstrate the superiority of our proposed model. | Disentangling Cognitive Diagnosis with Limited Exercise Labels | [
"Xiangzhi Chen",
"Le Wu",
"Fei Liu",
"Lei Chen",
"Kun Zhang",
"Richang Hong",
"Meng Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=og9V7NgOrQ | @inproceedings{
yerxa2023learning,
title={Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations},
author={Thomas Edward Yerxa and Yilun Kuang and Eero P Simoncelli and SueYeon Chung},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=og9V7NgOrQ}
} | The efficient coding hypothesis proposes that the response properties of sensory systems are adapted to the statistics of their inputs such that they capture maximal information about the environment, subject to biological constraints. While elegant, information theoretic properties are notoriously difficult to measure in practical settings or to employ as objective functions in optimization. This difficulty has necessitated that computational models designed to test the hypothesis employ several different information metrics ranging from approximations and lower bounds to proxy measures like reconstruction error. Recent theoretical advances have characterized a novel and ecologically relevant efficiency metric, the "manifold capacity," which is the number of object categories that may be represented in a linearly separable fashion. However, calculating manifold capacity is a computationally intensive iterative procedure that until now has precluded its use as an objective. Here we outline the simplifying assumptions that allow manifold capacity to be optimized directly, yielding Maximum Manifold Capacity Representations (MMCR). The resulting method is closely related to and inspired by advances in the field of self-supervised learning (SSL), and we demonstrate that MMCRs are competitive with state-of-the-art results on standard SSL benchmarks. Empirical analyses reveal differences between MMCRs and representations learned by other SSL frameworks, and suggest a mechanism by which manifold compression gives rise to class separability. Finally, we evaluate a set of SSL methods on a suite of neural predictivity benchmarks, and find MMCRs are highly competitive as models of the ventral stream. | Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations | [
"Thomas Edward Yerxa",
"Yilun Kuang",
"Eero P Simoncelli",
"SueYeon Chung"
] | Conference | poster | 2303.03307 | [
"https://github.com/ThomasYerxa/mmcr"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ofa1U5BJVJ | @inproceedings{
zhang2023online,
title={Online (Multinomial) Logistic Bandit: Improved Regret and Constant Computation Cost},
author={Yu-Jie Zhang and Masashi Sugiyama},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ofa1U5BJVJ}
} | This paper investigates the logistic bandit problem, a variant of the generalized linear bandit model that utilizes a logistic model to depict the feedback from an action. While most existing research focuses on the binary logistic bandit problem, the multinomial case, which considers more than two possible feedback values, offers increased practical relevance and adaptability for use in complex decision-making problems such as reinforcement learning. In this paper, we provide an algorithm that enjoys both statistical and computational efficiency for the logistic bandit problem. In the binary case, our method improves the state-of-the-art binary logistic bandit method by reducing the per-round computation cost from $\mathcal{O}(\log T)$ to $\mathcal{O}(1)$ with respect to the time horizon $T$, while still preserving the minimax optimal guarantee up to logarithmic factors. In the multinomial case, with $K+1$ potential feedback values, our algorithm achieves an $\tilde{\mathcal{O}}(K\sqrt{T})$ regret bound with $\mathcal{O}(1)$ computational cost per round. The result not only improves the $\tilde{\mathcal{O}}(K\sqrt{\kappa T})$ bound for the best-known tractable algorithm—where the large constant $\kappa$ increases exponentially with the diameter of the parameter domain—but also reduces the $\mathcal{O}(T)$ computational complexity demanded by the previous method. | Online (Multinomial) Logistic Bandit: Improved Regret and Constant Computation Cost | [
"Yu-Jie Zhang",
"Masashi Sugiyama"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=oef30oScVB | @inproceedings{
mao2023demystifying,
title={Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All?},
author={Haitao Mao and Zhikai Chen and Wei Jin and Haoyu Han and Yao Ma and Tong Zhao and Neil Shah and Jiliang Tang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oef30oScVB}
} | Recent studies on Graph Neural Networks(GNNs) provide both empirical and theoretical evidence supporting their effectiveness in capturing structural patterns on both homophilic and certain heterophilic graphs. Notably, most real-world homophilic and heterophilic graphs are comprised of a mixture of nodes in both homophilic and heterophilic structural patterns, exhibiting a structural disparity. However, the analysis of GNN performance with respect to nodes exhibiting different structural patterns, e.g., homophilic nodes in heterophilic graphs, remains rather limited. In the present study, we provide evidence that Graph Neural Networks(GNNs) on node classification typically perform admirably on homophilic nodes within homophilic graphs and heterophilic nodes within heterophilic graphs while struggling on the opposite node set, exhibiting a performance disparity. We theoretically and empirically identify effects of GNNs on testing nodes exhibiting distinct structural patterns. We then propose a rigorous, non-i.i.d PAC-Bayesian generalization bound for GNNs, revealing reasons for the performance disparity, namely the aggregated feature distance and homophily ratio difference between training and testing nodes. Furthermore, we demonstrate the practical implications of our new findings via (1) elucidating the effectiveness of deeper GNNs; and (2) revealing an over-looked distribution shift factor on graph out-of-distribution problem and proposing a new scenario accordingly. | Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All? | [
"Haitao Mao",
"Zhikai Chen",
"Wei Jin",
"Haoyu Han",
"Yao Ma",
"Tong Zhao",
"Neil Shah",
"Jiliang Tang"
] | Conference | poster | 2306.01323 | [
"https://github.com/haitaomao/demystify-structural-disparity"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ody3RBUuJS | @inproceedings{
yao2023fedgcn,
title={Fed{GCN}: Convergence-Communication Tradeoffs in Federated Training of Graph Convolutional Networks},
author={Yuhang Yao and Weizhao Jin and Srivatsan Ravi and Carlee Joe-Wong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ody3RBUuJS}
} | Methods for training models on graphs distributed across multiple clients have recently grown in popularity, due to the size of these graphs as well as regulations on keeping data where it is generated. However, the cross-client edges naturally exist among clients. Thus, distributed methods for training a model on a single graph incur either significant communication overhead between clients or a loss of available information to the training. We introduce the Federated Graph Convolutional Network (FedGCN) algorithm, which uses federated learning to train GCN models for semi-supervised node classification with fast convergence and little communication. Compared to prior methods that require extra communication among clients at each training round, FedGCN clients only communicate with the central server in one pre-training step, greatly reducing communication costs and allowing the use of homomorphic encryption to further enhance privacy. We theoretically analyze the tradeoff between FedGCN's convergence rate and communication cost under different data distributions. Experimental results show that our FedGCN algorithm achieves better model accuracy with 51.7\% faster convergence on average and at least 100$\times$ less communication compared to prior work. | FedGCN: Convergence-Communication Tradeoffs in Federated Training of Graph Convolutional Networks | [
"Yuhang Yao",
"Weizhao Jin",
"Srivatsan Ravi",
"Carlee Joe-Wong"
] | Conference | poster | 2201.12433 | [
"https://github.com/yh-yao/FedGCN"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=obCNIzeSrg | @inproceedings{
diakonikolas2023sq,
title={{SQ} Lower Bounds for Learning Mixtures of Linear Classifiers},
author={Ilias Diakonikolas and Daniel Kane and Yuxin Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=obCNIzeSrg}
} | We study the problem of learning mixtures of linear classifiers under Gaussian covariates.
Given sample access to a mixture of $r$ distributions on $\mathbb{R}^n$ of the form $(\mathbf{x},y_{\ell})$, $\ell \in [r]$,
where $\mathbf{x}\sim\mathcal{N}(0,\mathbf{I}_n)$ and
$y_\ell=\mathrm{sign}(\langle\mathbf{v}_{\ell},\mathbf{x}\rangle)$
for an unknown unit vector $\mathbf{v}_{\ell}$,
the goal is to learn the underlying distribution in total variation distance. Our main result is a Statistical Query (SQ) lower bound suggesting that known algorithms for this problem are essentially best possible,
even for the special case of uniform mixtures.
In particular, we show that the complexity of any SQ algorithm for the problem is $n^{\mathrm{poly}(1/\Delta) \log(r)}$,
where $\Delta$ is a lower bound on the pairwise $\ell_2$-separation between the $\mathbf{v}_{\ell}$'s.
The key technical ingredient underlying our result is a new construction of spherical designs on the unit sphere that may be of independent interest. | SQ Lower Bounds for Learning Mixtures of Linear Classifiers | [
"Ilias Diakonikolas",
"Daniel Kane",
"Yuxin Sun"
] | Conference | poster | 2310.11876 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oaJEB5Qcia | @inproceedings{
sun2023fgprompt,
title={{FGP}rompt: Fine-grained Goal Prompting for Image-goal Navigation},
author={Xinyu Sun and Peihao Chen and Jugang Fan and Jian Chen and Thomas H. Li and Mingkui Tan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oaJEB5Qcia}
} | Learning to navigate to an image-specified goal is an important but challenging task for autonomous systems like household robots. The agent is required to understand and reason about the location of the navigation goal from a picture taken at the goal position. Existing methods try to solve this problem by learning a navigation policy, which captures semantic features of the goal image and observation image independently and finally fuses them to predict a sequence of navigation actions. However, these methods suffer from two major limitations. 1) They may miss detailed information in the goal image, and thus fail to reason about the goal location. 2) More critically, it is hard to focus on the goal-relevant regions in the observation image, because they attempt to understand the observation without goal conditioning. In this paper, we aim to overcome these limitations by designing a Fine-grained Goal Prompting (FGPrompt) method for image-goal navigation. In particular, we leverage fine-grained and high-resolution feature maps in the goal image as prompts to perform conditioned embedding, which preserves detailed information in the goal image and guides the observation encoder to pay attention to goal-relevant regions. Compared with existing methods on the image-goal navigation benchmark, our method brings significant performance improvement on 3 benchmark datasets (\textit{i.e.,} Gibson, MP3D, and HM3D). Especially on Gibson, we surpass the state-of-the-art success rate by 8\% with only 1/50 of the model size. | FGPrompt: Fine-grained Goal Prompting for Image-goal Navigation | [
"Xinyu Sun",
"Peihao Chen",
"Jugang Fan",
"Jian Chen",
"Thomas H. Li",
"Mingkui Tan"
] | Conference | poster | 2310.07473 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oaGdsgB18L | @inproceedings{
lin2023tflex,
title={{TFLEX}: Temporal Feature-Logic Embedding Framework for Complex Reasoning over Temporal Knowledge Graph},
author={Xueyuan Lin and Haihong E and Chengjin Xu and Gengxian Zhou and Haoran Luo and Tianyi Hu and Fenglong Su and Ningyuan Li and Mingzhi Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oaGdsgB18L}
} | Multi-hop logical reasoning over knowledge graphs plays a fundamental role in many artificial intelligence tasks. Recent complex query embedding methods for reasoning focus on static knowledge graphs (KGs), while temporal knowledge graphs (TKGs) have not been fully explored. Reasoning over TKGs poses two challenges: 1. The query should answer entities or timestamps; 2. The operators should consider both set logic on the entity set and temporal logic on the timestamp set.
To bridge this gap, we introduce the multi-hop logical reasoning problem on TKGs and then propose the first temporal complex query embedding, the Temporal Feature-Logic Embedding framework (TFLEX), to answer temporal complex queries. Specifically, we utilize fuzzy logic to compute the logic part of the Temporal Feature-Logic embedding, thus naturally modeling all first-order logic operations on the entity set. In addition, we further extend fuzzy logic to the timestamp set to cope with three extra temporal operators (**After**, **Before** and **Between**).
Experiments on numerous query patterns demonstrate the effectiveness of our method. | TFLEX: Temporal Feature-Logic Embedding Framework for Complex Reasoning over Temporal Knowledge Graph | [
"Xueyuan Lin",
"Haihong E",
"Chengjin Xu",
"Gengxian Zhou",
"Haoran Luo",
"Tianyi Hu",
"Fenglong Su",
"Ningyuan Li",
"Mingzhi Sun"
] | Conference | poster | 2205.14307 | [
"https://github.com/linxueyuanstdio/tflex"
] | https://huggingface.co/papers/2205.14307 | 1 | 0 | 0 | 9 | 1 | [] | [
"linxy/ICEWS14",
"linxy/ICEWS05_15",
"linxy/GDELT"
] | [] |
null | https://openreview.net/forum?id=oaCDiKoJ2w | @inproceedings{
wang2023followups,
title={Follow-ups Also Matter: Improving Contextual Bandits via Post-serving Contexts},
author={Chaoqi Wang and Ziyu Ye and Zhe Feng and Ashwinkumar Badanidiyuru and Haifeng Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oaCDiKoJ2w}
} | The standard contextual bandit problem assumes that all the relevant contexts are observed before the algorithm chooses an arm. This modeling paradigm, while useful, often falls short when dealing with problems in which additional valuable contexts can be observed after arm selection. For example, content recommendation platforms like YouTube, Instagram, and TikTok receive many additional features about a user's reward after the user clicks on a content item (e.g., how long the user stayed, what the user's watch speed was, etc.). To improve online learning efficiency in these applications, we study a novel contextual bandit problem with post-serving contexts and design a new algorithm, poLinUCB, that achieves tight regret under standard assumptions. Core to our technical proof is a robustified and generalized version of the well-known Elliptical Potential Lemma (EPL), which can accommodate noise in the data. Such robustification is necessary for tackling our problem, though we believe it could also be of general interest.
Extensive empirical tests on both synthetic and real-world datasets demonstrate the significant benefit of utilizing post-serving contexts as well as the superior performance of our algorithm over the state-of-the-art approaches. | Follow-ups Also Matter: Improving Contextual Bandits via Post-serving Contexts | [
"Chaoqi Wang",
"Ziyu Ye",
"Zhe Feng",
"Ashwinkumar Badanidiyuru",
"Haifeng Xu"
] | Conference | spotlight | 2309.13896 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oU4QHdcIWW | @inproceedings{
thuerck2023learning,
title={Learning Cuts via Enumeration Oracles},
author={Daniel Thuerck and Boro Sofranac and Marc Pfetsch and Sebastian Pokutta},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oU4QHdcIWW}
} | Cutting-planes are one of the most important building blocks for solving large-scale integer programming (IP) problems to (near) optimality. The majority of cutting plane approaches rely on explicit rules to derive valid inequalities that can separate the target point from the feasible set. Local cuts, on the other hand, seek to directly derive the facets of the underlying polyhedron and use them as cutting planes. However, current approaches rely on solving Linear Programming (LP) problems in order to derive such a hyperplane. In this paper, we present a novel generic approach for learning the facets of the underlying polyhedron by accessing it implicitly via an enumeration oracle in a reduced dimension. This is achieved by embedding the oracle in a variant of the Frank-Wolfe algorithm which is capable of generating strong cutting planes, effectively turning the enumeration oracle into a separation oracle. We demonstrate the effectiveness of our approach with a case study targeting the multidimensional knapsack problem (MKP). | Learning Cuts via Enumeration Oracles | [
"Daniel Thuerck",
"Boro Sofranac",
"Marc Pfetsch",
"Sebastian Pokutta"
] | Conference | poster | 2305.12197 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oScaeIibRx | @inproceedings{
lee2023softmax,
title={Softmax Output Approximation for Activation Memory-Efficient Training of Attention-based Networks},
author={Changhyeon Lee and Seulki Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oScaeIibRx}
} | In this paper, we propose to approximate the softmax output, which is the key product of the attention mechanism, to reduce its activation memory usage when training attention-based networks (aka Transformers). During the forward pass of the network, the proposed softmax output approximation method stores only a small fraction of the entire softmax output required for back-propagation and evicts the rest of the softmax output from memory. Then, during the backward pass, the evicted softmax activation output is approximated to compose the gradient to perform back-propagation for model training. Considering that most attention-based models heavily rely on the softmax-based attention module, which usually accounts for one of the biggest portions of the network, approximating the softmax activation output can be a simple yet effective way to decrease the training memory requirement of many attention-based networks. Experiments with various attention-based models and relevant tasks, i.e., machine translation, text classification, and sentiment analysis, show that it curtails the activation memory usage of the softmax-based attention module by up to 84% (6.2× less memory) in model training while achieving comparable or better performance, e.g., up to 5.4% higher classification accuracy. | Softmax Output Approximation for Activation Memory-Efficient Training of Attention-based Networks | [
"Changhyeon Lee",
"Seulki Lee"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=oSYjkJKHZx | @inproceedings{
munkhoeva2023neural,
title={Neural Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning},
author={Marina Munkhoeva and Ivan Oseledets},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oSYjkJKHZx}
} | Self-supervised methods have received tremendous attention thanks to their seemingly heuristic approach to learning representations that respect the semantics of the data without any apparent supervision in the form of labels. A growing body of literature is already being published in an attempt to build a coherent and theoretically grounded understanding of the workings of a zoo of losses used in modern self-supervised representation learning methods.
In this paper, we attempt to provide an understanding from the perspective of a Laplace operator and connect the inductive bias stemming from the augmentation process to a low-rank matrix completion problem.
To this end, we leverage the results from low-rank matrix completion to provide theoretical analysis on the convergence of modern SSL methods and a key property that affects their downstream performance. | Neural Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning | [
"Marina Munkhoeva",
"Ivan Oseledets"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=oRn953uhFq | @inproceedings{
klug2023analyzing,
title={Analyzing the Sample Complexity of Self-Supervised Image Reconstruction Methods},
author={Tobit Klug and Dogukan Atik and Reinhard Heckel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oRn953uhFq}
} | Supervised training of deep neural networks on pairs of clean image and noisy measurement achieves state-of-the-art performance for many image reconstruction tasks, but such training pairs are difficult to collect. Self-supervised methods enable training based on noisy measurements only, without clean images. In this work, we investigate the cost of self-supervised training in terms of sample complexity for a class of self-supervised methods that enable the computation of unbiased estimates of gradients of the supervised loss, including noise2noise methods. We analytically show that a model trained with such self-supervised training is as good as the same model trained in a supervised fashion, but self-supervised training requires more examples than supervised training. We then study self-supervised denoising and accelerated MRI empirically and characterize the cost of self-supervised training in terms of the number of additional samples required, and find that the performance gap between self-supervised and supervised training vanishes as a function of the training examples, at a problem-dependent rate, as predicted by our theory. | Analyzing the Sample Complexity of Self-Supervised Image Reconstruction Methods | [
"Tobit Klug",
"Dogukan Atik",
"Reinhard Heckel"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=oOXZ5JEjPb | @inproceedings{
gong2023activity,
title={Activity Grammars for Temporal Action Segmentation},
author={Dayoung Gong and Joonseok Lee and Deunsol Jung and Suha Kwak and Minsu Cho},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oOXZ5JEjPb}
} | Sequence prediction on temporal data requires the ability to understand compositional structures of multi-level semantics beyond individual and contextual properties of parts. The task of temporal action segmentation remains challenging for this reason, as it aims at translating an untrimmed activity video into a sequence of action segments.
This paper addresses the problem by introducing an effective activity grammar to guide neural predictions for temporal action segmentation.
We propose a novel grammar induction algorithm, dubbed KARI, that extracts a powerful context-free grammar from action sequence data. We also develop an efficient generalized parser, dubbed BEP, that transforms frame-level probability distributions into a reliable sequence of actions according to the induced grammar with recursive rules.
Our approach can be combined with any neural network for temporal action segmentation to enhance the sequence prediction and discover its compositional structure.
Experimental results demonstrate that our method significantly improves temporal action segmentation in terms of both performance and interpretability on two standard benchmarks, Breakfast and 50 Salads. | Activity Grammars for Temporal Action Segmentation | [
"Dayoung Gong",
"Joonseok Lee",
"Deunsol Jung",
"Suha Kwak",
"Minsu Cho"
] | Conference | poster | 2312.04266 | [
"https://github.com/gongda0e/kari"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oO1IreC6Sd | @inproceedings{
zhong2023neural,
title={Neural Fields with Hard Constraints of Arbitrary Differential Order},
author={Fangcheng Zhong and Kyle Thomas Fogarty and Param Hanji and Tianhao Walter Wu and Alejandro Sztrajman and Andrew Everett Spielberg and Andrea Tagliasacchi and Petra Bosilj and Cengiz Oztireli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oO1IreC6Sd}
} | While deep learning techniques have become extremely popular for solving a broad range of optimization problems, methods to enforce hard constraints during optimization, particularly on deep neural networks, remain underdeveloped. Inspired by the rich literature on meshless interpolation and its extension to spectral collocation methods in scientific computing, we develop a series of approaches for enforcing hard constraints on neural fields, which we refer to as Constrained Neural Fields (CNF). The constraints can be specified as a linear operator applied to the neural field and its derivatives. We also design specific model representations and training strategies for problems where standard models may encounter difficulties, such as conditioning of the system, memory consumption, and capacity of the network when being constrained. Our approaches are demonstrated in a wide range of real-world applications. Additionally, we develop a framework that enables highly efficient model and constraint specification, which can be readily applied to any downstream task where hard constraints need to be explicitly satisfied during optimization. | Neural Fields with Hard Constraints of Arbitrary Differential Order | [
"Fangcheng Zhong",
"Kyle Thomas Fogarty",
"Param Hanji",
"Tianhao Walter Wu",
"Alejandro Sztrajman",
"Andrew Everett Spielberg",
"Andrea Tagliasacchi",
"Petra Bosilj",
"Cengiz Oztireli"
] | Conference | poster | 2306.08943 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oNuam8eFz2 | @inproceedings{
cheng2023particlebased,
title={Particle-based Variational Inference with Generalized Wasserstein Gradient Flow},
author={Ziheng Cheng and Shiyue Zhang and Longlin Yu and Cheng Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oNuam8eFz2}
} | Particle-based variational inference methods (ParVIs) such as Stein variational gradient descent (SVGD) update the particles based on the kernelized Wasserstein gradient flow for the Kullback-Leibler (KL) divergence. However, the design of kernels is often non-trivial and can be restrictive for the flexibility of the method. Recent works show that functional gradient flow approximations with quadratic form regularization terms can improve performance. In this paper, we propose a ParVI framework, called generalized Wasserstein gradient descent (GWG), based on a generalized Wasserstein gradient flow of the KL divergence, which can be viewed as a functional gradient method with a broader class of regularizers induced by convex functions. We show that GWG exhibits strong convergence guarantees. We also provide an adaptive version that automatically chooses Wasserstein metric to accelerate convergence. In experiments, we demonstrate the effectiveness and efficiency of the proposed framework on both simulated and real data problems. | Particle-based Variational Inference with Generalized Wasserstein Gradient Flow | [
"Ziheng Cheng",
"Shiyue Zhang",
"Longlin Yu",
"Cheng Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=oMm1dfo3tK | @inproceedings{
noble2023unbiased,
title={Unbiased constrained sampling with Self-Concordant Barrier Hamiltonian Monte Carlo},
author={Maxence Noble and Valentin De Bortoli and Alain Durmus},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oMm1dfo3tK}
} | In this paper, we propose Barrier Hamiltonian Monte Carlo (BHMC), a version of the
HMC algorithm which aims at sampling from a Gibbs distribution $\pi$ on a manifold
$\mathsf{M}$, endowed with a Hessian metric $\mathfrak{g}$ derived from a self-concordant
barrier. Our method relies on Hamiltonian
dynamics which involves $\mathfrak{g}$. Therefore, it incorporates the constraints defining
$\mathsf{M}$ and is able to exploit its underlying geometry. However,
the corresponding Hamiltonian dynamics is defined via non-separable Ordinary Differential Equations (ODEs), in contrast to the Euclidean case. This implies an unavoidable bias in existing generalizations of HMC to Riemannian manifolds. To address this problem, we propose a new filter step, called the ``involution checking step''. This step is implemented in two versions of BHMC, coined continuous BHMC (c-BHMC) and numerical BHMC (n-BHMC) respectively.
Our main results establish that these two new algorithms generate reversible Markov
chains with respect to $\pi$ and do not suffer from any bias in comparison to previous implementations. Our conclusions are supported by numerical experiments where
we consider target distributions defined on polytopes. | Unbiased constrained sampling with Self-Concordant Barrier Hamiltonian Monte Carlo | [
"Maxence Noble",
"Valentin De Bortoli",
"Alain Durmus"
] | Conference | poster | 2210.11925 | [
"https://github.com/maxencenoble/barrier-hamiltonian-monte-carlo"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oML3v2cFg2 | @inproceedings{
zeng2023when,
title={When Demonstrations meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning},
author={Siliang Zeng and Chenliang Li and Alfredo Garcia and Mingyi Hong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oML3v2cFg2}
} | Offline inverse reinforcement learning (Offline IRL) aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent. Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving. However, the structure of an expert's preferences implicit in observed actions is closely linked to the expert's model of the environment dynamics (i.e. the ``world''). Thus, inaccurate models of the world obtained from finite data with limited coverage could compound inaccuracy in estimated rewards. To address this issue, we propose a bi-level optimization formulation of the estimation task wherein the upper level is likelihood maximization based upon a conservative model of the expert's policy (lower level). The policy model is conservative in that it maximizes reward subject to a penalty that is increasing in the uncertainty of the estimated model of the world. We propose a new algorithmic framework to solve the bi-level optimization problem formulation and provide statistical and computational guarantees of performance for the associated optimal reward estimator. Finally, we demonstrate that the proposed algorithm outperforms the state-of-the-art offline IRL and imitation learning benchmarks by a large margin over the continuous control tasks in MuJoCo and different datasets in the D4RL benchmark. | When Demonstrations meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning | [
"Siliang Zeng",
"Chenliang Li",
"Alfredo Garcia",
"Mingyi Hong"
] | Conference | oral | 2302.07457 | [
"https://github.com/cloud0723/offline-mlirl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=oKqaWlEfjY | @inproceedings{
indyk2023worstcase,
title={Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations},
author={Piotr Indyk and Haike Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=oKqaWlEfjY}
} | Graph-based approaches to nearest neighbor search are popular and powerful tools for handling large datasets in practice, but they have limited theoretical guarantees.
We study the worst-case performance of recent graph-based approximate nearest neighbor search algorithms, such as HNSW, NSG and DiskANN. For DiskANN, we show that its "slow preprocessing" version provably supports approximate nearest neighbor search queries with a constant approximation ratio and poly-logarithmic query time, on data sets with bounded "intrinsic" dimension.
For the other data structure variants studied, including DiskANN with "fast preprocessing", HNSW and NSG, we present a family of instances on which the empirical query time required to achieve a "reasonable" accuracy is linear in the instance size. For example, for DiskANN, we show that the query procedure can take at least $0.1 n$ steps on instances of size $n$ before it encounters any of the $5$ nearest neighbors of the query. | Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations | [
"Piotr Indyk",
"Haike Xu"
] | Conference | poster | 2310.19126 | [
"https://github.com/xuhaike/hard-instances-for-popular-approximate-nearest-neighbor-search-implementations"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |