bibtex_url: null
proceedings: stringlengths, 42 to 42
bibtext: stringlengths, 197 to 792
abstract: stringlengths, 303 to 3.45k
title: stringlengths, 10 to 159
authors: sequencelengths, 1 to 28
id: stringclasses, 44 values
type: stringclasses, 16 values
arxiv_id: stringlengths, 0 to 10
GitHub: sequencelengths, 1 to 1
paper_page: stringclasses, 444 values
n_linked_authors: int64, -1 to 9
upvotes: int64, -1 to 42
num_comments: int64, -1 to 13
n_authors: int64, -1 to 92
paper_page_exists_pre_conf: int64, 0 to 1
Models: sequencelengths, 0 to 100
Datasets: sequencelengths, 0 to 11
Spaces: sequencelengths, 0 to 100
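The records below follow this column schema, one paper per record. As a quick orientation only, here is a minimal sketch of how a dataset with this schema could be loaded and filtered with the Hugging Face `datasets` library; the repository id is a placeholder, not the actual name of this dataset.

```python
# Minimal sketch: loading and filtering a dataset with the schema above.
# The repository id is a placeholder, not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("your-org/neurips-2023-papers", split="train")  # hypothetical repo id

# Columns follow the schema above (bibtex_url, proceedings, bibtext, abstract, title,
# authors, id, type, arxiv_id, GitHub, paper_page, the integer counters, Models,
# Datasets, Spaces).
print(ds.column_names)

# Example query: papers whose Hugging Face paper page existed before the conference
# and that link at least one model.
subset = ds.filter(
    lambda row: row["paper_page_exists_pre_conf"] == 1 and len(row["Models"]) > 0
)
print(len(subset))
```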
null
https://openreview.net/forum?id=yBVLXvJ1sb
@inproceedings{ wang2023error, title={Error Discovery By Clustering Influence Embeddings}, author={Fulton Wang and Julius Adebayo and Sarah Tan and Diego Garcia-Olano and Narine Kokhlikyan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yBVLXvJ1sb} }
We present a method for identifying groups of test examples---slices---on which a model under-performs, a task now known as slice discovery. We formalize coherence---a requirement that erroneous predictions, within a slice, should be wrong for the same reason---as a key property that any slice discovery method should satisfy. We then use influence functions to derive a new slice discovery method, InfEmbed, which satisfies coherence by returning slices whose examples are influenced similarly by the training data. InfEmbed is simple, and consists of applying K-Means clustering to a novel representation we deem influence embeddings. We show InfEmbed outperforms current state-of-the-art methods on 2 benchmarks, and is effective for model debugging across several case studies.
Error Discovery By Clustering Influence Embeddings
[ "Fulton Wang", "Julius Adebayo", "Sarah Tan", "Diego Garcia-Olano", "Narine Kokhlikyan" ]
Conference
poster
2312.04712
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
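The InfEmbed record above describes slice discovery as K-Means clustering of per-example influence embeddings. Below is a minimal sketch of that final clustering-and-ranking step, assuming the influence embeddings have already been computed; the influence-function-based embedding itself is not shown, and the cluster count is an illustrative stand-in.

```python
# Sketch: slice discovery as K-Means over precomputed influence embeddings.
# `influence_embeddings` stands in for the (n_test, d) representation derived from
# influence functions; computing it is not shown here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
influence_embeddings = rng.normal(size=(1000, 64))   # placeholder embeddings
is_error = rng.integers(0, 2, size=1000)             # 1 where the model's prediction is wrong

n_slices = 20                                        # illustrative choice
slice_ids = KMeans(n_clusters=n_slices, n_init=10, random_state=0).fit_predict(
    influence_embeddings
)

# Rank slices by error rate to surface under-performing groups of test examples.
error_rates = np.array([is_error[slice_ids == k].mean() for k in range(n_slices)])
for k in np.argsort(-error_rates)[:5]:
    print(f"slice {k}: size {(slice_ids == k).sum()}, error rate {error_rates[k]:.2f}")
```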
null
https://openreview.net/forum?id=yAOwkf4FyL
@inproceedings{ jiang2023operationlevel, title={Operation-Level Early Stopping for Robustifying Differentiable {NAS}}, author={Shen Jiang and Zipeng Ji and Guanghui Zhu and Chunfeng Yuan and Yihua Huang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yAOwkf4FyL} }
Differentiable NAS (DARTS) is a simple and efficient neural architecture search method that has been extensively adopted in various machine learning tasks. Nevertheless, DARTS still encounters several robustness issues, mainly the domination of skip connections. The resulting architectures are full of parameter-free operations, leading to performance collapse. Existing methods suggest that the skip connection has additional advantages in optimization compared to other parametric operations and propose to alleviate the domination of skip connections by eliminating these additional advantages. In this paper, we analyze this issue from a simple and straightforward perspective and propose that the domination of skip connections results from parametric operations overfitting the training data while architecture parameters are trained on the validation data, leading to undesired behaviors. Based on this observation, we propose the operation-level early stopping (OLES) method to overcome this issue and robustify DARTS without introducing any computation overhead. Extensive experimental results verify our hypothesis and the effectiveness of OLES.
Operation-Level Early Stopping for Robustifying Differentiable NAS
[ "Shen Jiang", "Zipeng Ji", "Guanghui Zhu", "Chunfeng Yuan", "Yihua Huang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=y9U0IJ2uFr
@inproceedings{ assel2023snekhorn, title={{SNE}khorn: Dimension Reduction with Symmetric Entropic Affinities}, author={Hugues Van Assel and Titouan Vayer and R{\'e}mi Flamary and Nicolas Courty}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=y9U0IJ2uFr} }
Many approaches in machine learning rely on a weighted graph to encode the similarities between samples in a dataset. Entropic affinities (EAs), which are notably used in the popular Dimensionality Reduction (DR) algorithm t-SNE, are particular instances of such graphs. To ensure robustness to heterogeneous sampling densities, EAs assign a kernel bandwidth parameter to every sample in such a way that the entropy of each row in the affinity matrix is kept constant at a specific value, whose exponential is known as perplexity. EAs are inherently asymmetric and row-wise stochastic, but they are used in DR approaches after undergoing heuristic symmetrization methods that violate both the row-wise constant entropy and stochasticity properties. In this work, we uncover a novel characterization of EA as an optimal transport problem, allowing a natural symmetrization that can be computed efficiently using dual ascent. The corresponding novel affinity matrix derives advantages from symmetric doubly stochastic normalization in terms of clustering performance, while also effectively controlling the entropy of each row, thus making it particularly robust to varying noise levels. We then present a new DR algorithm, SNEkhorn, that leverages this new affinity matrix. We show its clear superiority to state-of-the-art approaches with several indicators on both synthetic and real-world datasets.
SNEkhorn: Dimension Reduction with Symmetric Entropic Affinities
[ "Hugues Van Assel", "Titouan Vayer", "Rémi Flamary", "Nicolas Courty" ]
Conference
poster
2305.13797
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
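For reference alongside the SNEkhorn record above, the sketch below shows a plain Sinkhorn-style doubly stochastic normalization of a Gaussian affinity matrix. It is not the paper's symmetric entropic affinity solver (which additionally enforces per-row entropy/perplexity constraints via dual ascent); it only illustrates the doubly stochastic normalization that the abstract contrasts with heuristic symmetrization.

```python
# Sketch: doubly stochastic normalization of a Gaussian affinity matrix via Sinkhorn
# scaling. NOT the SNEkhorn solver itself; shown only as a reference point for the
# normalization discussed in the abstract.
import numpy as np

def doubly_stochastic_affinity(X, sigma=1.0, n_iter=500):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(K, 0.0)                  # no self-affinity
    u = np.ones(len(X))
    v = np.ones(len(X))
    for _ in range(n_iter):                   # P = diag(u) K diag(v)
        u = 1.0 / (K @ v)
        v = 1.0 / (K.T @ u)
    return u[:, None] * K * v[None, :]

X = np.random.default_rng(0).normal(size=(50, 5))
P = doubly_stochastic_affinity(X)
print(P.sum(axis=0)[:3], P.sum(axis=1)[:3])   # rows and columns sum to ~1
```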
null
https://openreview.net/forum?id=y8UAQQHVTX
@inproceedings{ naor2023private, title={Private Everlasting Prediction}, author={Moni Naor and Kobbi Nissim and Uri Stemmer and Chao Yan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=y8UAQQHVTX} }
A private learner is trained on a sample of labeled points and generates a hypothesis that can be used for predicting the labels of newly sampled points while protecting the privacy of the training set [Kasiviswanathan et al., FOCS 2008]. Past research uncovered that private learners may need to exhibit significantly higher sample complexity than non-private learners, as is the case for learning one-dimensional threshold functions [Bun et al., FOCS 2015, Alon et al., STOC 2019]. We explore prediction as an alternative to learning. A predictor answers a stream of classification queries instead of outputting a hypothesis. Earlier work has considered a private prediction model with a single classification query [Dwork and Feldman, COLT 2018]. We observe that when answering a stream of queries, a predictor must modify the hypothesis it uses over time, and in a manner that cannot rely solely on the training set. We introduce {\em private everlasting prediction} taking into account the privacy of both the training set {\em and} the (adaptively chosen) queries made to the predictor. We then present a generic construction of private everlasting predictors in the PAC model. The sample complexity of the initial training sample in our construction is quadratic (up to polylog factors) in the VC dimension of the concept class. Our construction allows prediction for all concept classes with finite VC dimension, and in particular threshold functions over infinite domains, for which (traditional) private learning is known to be impossible.
Private Everlasting Prediction
[ "Moni Naor", "Kobbi Nissim", "Uri Stemmer", "Chao Yan" ]
Conference
oral
2305.09579
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=y5duN2j9s6
@inproceedings{ jiang2023on, title={On the Importance of Exploration for Generalization in Reinforcement Learning}, author={Yiding Jiang and J Zico Kolter and Roberta Raileanu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=y5duN2j9s6} }
Existing approaches for improving generalization in deep reinforcement learning (RL) have mostly focused on representation learning, neglecting RL-specific aspects such as exploration. We hypothesize that the agent's exploration strategy plays a key role in its ability to generalize to new environments. Through a series of experiments in a tabular contextual MDP, we show that exploration is helpful not only for efficiently finding the optimal policy for the training environments but also for acquiring knowledge that helps decision making in unseen environments. Based on these observations, we propose EDE: Exploration via Distributional Ensemble, a method that encourages the exploration of states with high epistemic uncertainty through an ensemble of Q-value distributions. The proposed algorithm is the first value-based approach to achieve strong performance on both Procgen and Crafter, two benchmarks for generalization in RL with high-dimensional observations. The open-sourced implementation can be found at https://github.com/facebookresearch/ede.
On the Importance of Exploration for Generalization in Reinforcement Learning
[ "Yiding Jiang", "J Zico Kolter", "Roberta Raileanu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
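The EDE record above describes exploration driven by the epistemic uncertainty of an ensemble of Q-value distributions. The sketch below is one generic way such an ensemble and its disagreement bonus could look; the architecture, head count, quantile count, and bonus coefficient are illustrative assumptions, not the paper's settings.

```python
# Sketch: ensemble of distributional Q-heads with an epistemic (disagreement) bonus
# for action selection, in the spirit of EDE. All sizes and coefficients are illustrative.
import torch
import torch.nn as nn

class EnsembleDistributionalQ(nn.Module):
    def __init__(self, obs_dim, n_actions, n_heads=5, n_quantiles=51):
        super().__init__()
        self.n_actions, self.n_quantiles = n_actions, n_quantiles
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                          nn.Linear(256, n_actions * n_quantiles))
            for _ in range(n_heads)
        )

    def forward(self, obs):
        # quantile estimates with shape (n_heads, batch, n_actions, n_quantiles)
        return torch.stack([h(obs).view(-1, self.n_actions, self.n_quantiles)
                            for h in self.heads])

def select_action(net, obs, beta=0.5):
    quantiles = net(obs)                 # (H, B, A, Q)
    q_mean = quantiles.mean(dim=-1)      # per-head expected Q: (H, B, A)
    epistemic = q_mean.std(dim=0)        # disagreement across heads: (B, A)
    return (q_mean.mean(dim=0) + beta * epistemic).argmax(dim=-1)

net = EnsembleDistributionalQ(obs_dim=8, n_actions=4)
print(select_action(net, torch.randn(2, 8)))
```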
null
https://openreview.net/forum?id=y50AnAbKp1
@inproceedings{ chang2023csot, title={{CSOT}: Curriculum and Structure-Aware Optimal Transport for Learning with Noisy Labels}, author={Wanxing Chang and Ye Shi and Jingya Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=y50AnAbKp1} }
Learning with noisy labels (LNL) poses a significant challenge in training a well-generalized model while avoiding overfitting to corrupted labels. Recent advances have achieved impressive performance by identifying clean labels and correcting corrupted labels for training. However, the current approaches rely heavily on the model’s predictions and evaluate each sample independently without considering either the global or local structure of the sample distribution. These limitations typically result in a suboptimal solution for the identification and correction processes, which eventually leads to models overfitting to incorrect labels. In this paper, we propose a novel optimal transport (OT) formulation, called Curriculum and Structure-aware Optimal Transport (CSOT). CSOT concurrently considers the inter- and intra-distribution structure of the samples to construct a robust denoising and relabeling allocator. During the training process, the allocator incrementally assigns reliable labels to a fraction of the samples with the highest confidence. These labels have both global discriminability and local coherence. Notably, CSOT is a new OT formulation with a nonconvex objective function and curriculum constraints, so it is not directly compatible with classical OT solvers. Here, we develop a lightspeed computational method that involves a scaling iteration within a generalized conditional gradient framework to solve CSOT efficiently. Extensive experiments demonstrate the superiority of our method over the current state of the art in LNL.
CSOT: Curriculum and Structure-Aware Optimal Transport for Learning with Noisy Labels
[ "Wanxing Chang", "Ye Shi", "Jingya Wang" ]
Conference
poster
2312.06221
[ "https://github.com/changwxx/csot-for-lnl" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=y0OlQSZsyp
@inproceedings{ mameche2023learning, title={Learning Causal Models under Independent Changes}, author={Sarah Mameche and David Kaltenpoth and Jilles Vreeken}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=y0OlQSZsyp} }
In many scientific applications, we observe a system in different conditions in which its components may change, rather than in isolation. In our work, we are interested in explaining the generating process of such a multi-context system using a finite mixture of causal mechanisms. Recent work shows that this causal model is identifiable from data, but is limited to settings where the sparse mechanism shift hypothesis holds and only a subset of the causal conditionals change. As this assumption is not easily verifiable in practice, we study the more general principle that mechanism shifts are independent, which we formalize using the algorithmic notion of independence. We introduce an approach for causal discovery beyond partially directed graphs using Gaussian Process models, and give conditions under which we provably identify the correct causal model. In our experiments, we show that our method performs well in a range of synthetic settings, on realistic gene expression simulations, as well as on real-world cell signaling data.
Learning Causal Models under Independent Changes
[ "Sarah Mameche", "David Kaltenpoth", "Jilles Vreeken" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=y08bkEtNBK
@inproceedings{ jia2023witran, title={{WITRAN}: Water-wave Information Transmission and Recurrent Acceleration Network for Long-range Time Series Forecasting}, author={Yuxin Jia and Youfang Lin and Xinyan Hao and Yan Lin and Shengnan Guo and Huaiyu Wan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=y08bkEtNBK} }
Capturing semantic information is crucial for accurate long-range time series forecasting, which involves modeling global and local correlations, as well as discovering long- and short-term repetitive patterns. Previous works have partially addressed these issues separately, but have not been able to address all of them simultaneously. Meanwhile, their time and memory complexities are still not sufficiently low for long-range forecasting. To address the challenge of capturing different types of semantic information, we propose a novel Water-wave Information Transmission (WIT) framework. This framework captures both long- and short-term repetitive patterns through bi-granular information transmission. It also models global and local correlations by recursively fusing and selecting information using Horizontal Vertical Gated Selective Unit (HVGSU). In addition, to improve the computing efficiency, we propose a generic Recurrent Acceleration Network (RAN) which reduces the time complexity to $\mathcal{O}(\sqrt{L})$ while maintaining the memory complexity at $\mathcal{O}(L)$. Our proposed method, called Water-wave Information Transmission and Recurrent Acceleration Network (WITRAN), outperforms the state-of-the-art methods by 5.80% and 14.28% on long-range and ultra-long-range time series forecasting tasks respectively, as demonstrated by experiments on four benchmark datasets. The code is available at: https://github.com/Water2sea/WITRAN.
WITRAN: Water-wave Information Transmission and Recurrent Acceleration Network for Long-range Time Series Forecasting
[ "Yuxin Jia", "Youfang Lin", "Xinyan Hao", "Yan Lin", "Shengnan Guo", "Huaiyu Wan" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xzmaFfw6oh
@inproceedings{ du2023molecule, title={Molecule Joint Auto-Encoding: Trajectory Pretraining with 2D and 3D Diffusion}, author={weitao Du and Jiujiu Chen and Xuecang Zhang and Zhi-Ming Ma and Shengchao Liu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xzmaFfw6oh} }
Recently, artificial intelligence for drug discovery has attracted increasing interest in both machine learning and chemistry domains. The fundamental building block for drug discovery is molecule geometry and thus, the molecule's geometrical representation is the main bottleneck to better utilize machine learning techniques for drug discovery. In this work, we propose a pretraining method for molecule joint auto-encoding (MoleculeJAE). MoleculeJAE can learn both the 2D bond (topology) and 3D conformation (geometry) information, and a diffusion process model is applied to mimic the augmented trajectories of these two modalities, based on which MoleculeJAE learns the inherent chemical structure in a self-supervised manner. Thus, the pretrained geometrical representation in MoleculeJAE is expected to benefit downstream geometry-related tasks. Empirically, MoleculeJAE proves its effectiveness by reaching state-of-the-art performance on 15 out of 20 tasks by comparing it with 12 competitive baselines.
Molecule Joint Auto-Encoding: Trajectory Pretraining with 2D and 3D Diffusion
[ "weitao Du", "Jiujiu Chen", "Xuecang Zhang", "Zhi-Ming Ma", "Shengchao Liu" ]
Conference
poster
2312.03475
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xz8j3r3oUA
@inproceedings{ lengyel2023color, title={Color Equivariant Convolutional Networks}, author={Attila Lengyel and Ombretta Strafforello and Robert-Jan Bruintjes and Alexander Gielisse and Jan van Gemert}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xz8j3r3oUA} }
Color is a crucial visual cue readily exploited by Convolutional Neural Networks (CNNs) for object recognition. However, CNNs struggle if there is data imbalance between color variations introduced by accidental recording conditions. Color invariance addresses this issue but does so at the cost of removing all color information, which sacrifices discriminative power. In this paper, we propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum while retaining important color information. We extend the notion of equivariance from geometric to photometric transformations by incorporating parameter sharing over hue-shifts in a neural network. We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts. Our approach can be seamlessly integrated into existing architectures, such as ResNets, and offers a promising solution for addressing color-based domain shifts in CNNs.
Color Equivariant Convolutional Networks
[ "Attila Lengyel", "Ombretta Strafforello", "Robert-Jan Bruintjes", "Alexander Gielisse", "Jan van Gemert" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xxfHMqNcum
@inproceedings{ lyu2023towards, title={Towards Hybrid-grained Feature Interaction Selection for Deep Sparse Network}, author={Fuyuan Lyu and Xing Tang and Dugang Liu and Chen Ma and Weihong Luo and Liang Chen and xiuqiang He and Xue Liu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xxfHMqNcum} }
Deep sparse networks are widely investigated as a neural network architecture for prediction tasks with high-dimensional sparse features, with which feature interaction selection is a critical component. While previous methods primarily focus on how to search for feature interactions in a coarse-grained space, less attention has been given to a finer granularity. In this work, we introduce a hybrid-grained feature interaction selection approach that targets both feature field and feature value for deep sparse networks. To explore such expansive space, we propose a decomposed space which is calculated on the fly. We then develop a selection algorithm called OptFeature, which efficiently selects the feature interaction from both the feature field and the feature value simultaneously. Results from experiments on three large real-world benchmark datasets demonstrate that OptFeature performs well in terms of accuracy and efficiency. Additional studies support the feasibility of our method. All source code is publicly available at https://anonymous.4open.science/r/OptFeature-Anonymous.
Towards Hybrid-grained Feature Interaction Selection for Deep Sparse Network
[ "Fuyuan Lyu", "Xing Tang", "Dugang Liu", "Chen Ma", "Weihong Luo", "Liang Chen", "xiuqiang He", "Xue Liu" ]
Conference
poster
2310.15342
[ "https://github.com/fuyuanlyu/optfeature" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xx3qRKvG0T
@inproceedings{ ni2023basisformer, title={BasisFormer: Attention-based Time Series Forecasting with Learnable and Interpretable Basis}, author={Zelin Ni and Hang Yu and Shizhan Liu and Jianguo Li and Weiyao Lin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xx3qRKvG0T} }
Bases have become an integral part of modern deep learning-based models for time series forecasting due to their ability to act as feature extractors or future references. To be effective, a basis must be tailored to the specific set of time series data and exhibit distinct correlation with each time series within the set. However, current state-of-the-art methods are limited in their ability to satisfy both of these requirements simultaneously. To address this challenge, we propose BasisFormer, an end-to-end time series forecasting architecture that leverages learnable and interpretable bases. This architecture comprises three components: First, we acquire bases through adaptive self-supervised learning, which treats the historical and future sections of the time series as two distinct views and employs contrastive learning. Next, we design a Coef module that calculates the similarity coefficients between the time series and bases in the historical view via bidirectional cross-attention. Finally, we present a Forecast module that selects and consolidates the bases in the future view based on the similarity coefficients, resulting in accurate future predictions. Through extensive experiments on six datasets, we demonstrate that BasisFormer outperforms previous state-of-the-art methods by 11.04% and 15.78% respectively for univariate and multivariate forecasting tasks. Code is available at: https://github.com/nzl5116190/Basisformer.
BasisFormer: Attention-based Time Series Forecasting with Learnable and Interpretable Basis
[ "Zelin Ni", "Hang Yu", "Shizhan Liu", "Jianguo Li", "Weiyao Lin" ]
Conference
poster
2310.20496
[ "https://github.com/nzl5116190/basisformer" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xx3QgKyghS
@inproceedings{ wu2023unsupervised, title={Unsupervised Polychromatic Neural Representation for {CT} Metal Artifact Reduction}, author={Qing Wu and Lixuan Chen and Ce Wang and Hongjiang Wei and S Kevin Zhou and Jingyi Yu and Yuyao Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xx3QgKyghS} }
Emerging neural reconstruction techniques based on tomography (e.g., NeRF, NeAT, and NeRP) have started showing unique capabilities in medical imaging. In this work, we present a novel Polychromatic neural representation (Polyner) to tackle the challenging problem of CT imaging when metallic implants exist within the human body. CT metal artifacts arise from the drastic variation of metal's attenuation coefficients at various energy levels of the X-ray spectrum, leading to a nonlinear metal effect in CT measurements. Recovering CT images from metal-affected measurements hence poses a complicated nonlinear inverse problem where empirical models adopted in previous metal artifact reduction (MAR) approaches lead to signal loss and strongly aliased reconstructions. Polyner instead models the MAR problem from a nonlinear inverse problem perspective. Specifically, we first derive a polychromatic forward model to accurately simulate the nonlinear CT acquisition process. Then, we incorporate our forward model into the implicit neural representation to accomplish reconstruction. Lastly, we adopt a regularizer to preserve the physical properties of the CT images across different energy levels while effectively constraining the solution space. Our Polyner is an unsupervised method and does not require any external training data. Experimenting with multiple datasets shows that our Polyner achieves comparable or better performance than supervised methods on in-domain datasets while demonstrating significant performance improvements on out-of-domain datasets. To the best of our knowledge, our Polyner is the first unsupervised MAR method that outperforms its supervised counterparts. The code for this work is available at: https://github.com/iwuqing/Polyner.
Unsupervised Polychromatic Neural Representation for CT Metal Artifact Reduction
[ "Qing Wu", "Lixuan Chen", "Ce Wang", "Hongjiang Wei", "S Kevin Zhou", "Jingyi Yu", "Yuyao Zhang" ]
Conference
poster
2306.15203
[ "https://github.com/iwuqing/polyner" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xw6Szwu4xz
@inproceedings{ liang2023personalized, title={Personalized Dictionary Learning for Heterogeneous Datasets}, author={Geyu Liang and Naichen Shi and Raed Al Kontar and Salar Fattahi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xw6Szwu4xz} }
We introduce a relevant yet challenging problem named Personalized Dictionary Learning (PerDL), where the goal is to learn sparse linear representations from heterogeneous datasets that share some commonality. In PerDL, we model each dataset's shared and unique features as global and local dictionaries. Challenges for PerDL are not only inherited from classical dictionary learning (DL) but also arise due to the unknown nature of the shared and unique features. In this paper, we rigorously formulate this problem and provide conditions under which the global and local dictionaries can be provably disentangled. Under these conditions, we provide a meta-algorithm called Personalized Matching and Averaging (PerMA) that can recover both global and local dictionaries from heterogeneous datasets. PerMA is highly efficient; it converges to the ground truth at a linear rate under suitable conditions. Moreover, it automatically borrows strength from strong learners to improve the prediction of weak learners. As a general framework for extracting global and local dictionaries, we show the application of PerDL in different learning tasks, such as training with imbalanced datasets and video surveillance.
Personalized Dictionary Learning for Heterogeneous Datasets
[ "Geyu Liang", "Naichen Shi", "Raed Al Kontar", "Salar Fattahi" ]
Conference
poster
2305.15311
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xtaX3WyCj1
@inproceedings{ yadav2023tiesmerging, title={{TIES}-Merging: Resolving Interference When Merging Models}, author={Prateek Yadav and Derek Tam and Leshem Choshen and Colin Raffel and Mohit Bansal}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xtaX3WyCj1} }
Transfer learning – i.e., further fine-tuning a pre-trained model on a downstream task – can confer significant advantages, including improved downstream performance, faster convergence, and better sample efficiency. These advantages have led to a proliferation of task-specific fine-tuned models, which typically can only perform a single task and do not benefit from one another. Recently, model merging techniques have emerged as a solution to combine multiple task-specific models into a single multitask model without performing additional training. However, existing merging methods often ignore the interference between parameters of different models, resulting in large performance drops when merging multiple models. In this paper, we demonstrate that prior merging techniques inadvertently lose valuable information due to two major sources of interference: (a) interference due to redundant parameter values and (b) disagreement on the sign of a given parameter’s values across models. To address this, we propose our method, TrIm, Elect Sign & Merge (TIES-Merging), which introduces three novel steps when merging models: (1) resetting parameters that only changed a small amount during fine-tuning, (2) resolving sign conflicts, and (3) merging only the parameters that are in alignment with the final agreed-upon sign. We find that TIES-Merging outperforms existing methods in diverse settings covering a range of modalities, domains, number of tasks, model sizes, architectures, and fine-tuning settings. We further analyze the impact of different types of interference on model parameters, highlight the importance of signs, and show that estimating the signs using the validation data could further improve performance.
TIES-Merging: Resolving Interference When Merging Models
[ "Prateek Yadav", "Derek Tam", "Leshem Choshen", "Colin Raffel", "Mohit Bansal" ]
Conference
poster
2306.01708
[ "https://github.com/prateeky2806/ties-merging" ]
https://huggingface.co/papers/2306.01708
3
12
0
5
1
[ "brucethemoose/Yi-34B-200K-RPMerge", "Undi95/Llama-3-LewdPlay-8B-evo", "ycros/BagelMIsteryTour-v2-8x7B-GGUF", "brucethemoose/Yi-34B-200K-DARE-megamerge-v8", "Undi95/Llama-3-LewdPlay-8B-evo-GGUF", "brucethemoose/Yi-34B-200K-RPMerge-exl2-40bpw", "NeverSleep/NoromaidxOpenGPT4-2-GGUF-iMatrix", "BioMistral/BioMistral-7B-DARE", "ycros/BagelMIsteryTour-v2-8x7B", "Dampfinchen/Llama-3-8B-Ultra-Instruct", "RJuro/munin-neuralbeagle-7b", "chargoddard/Chronorctypus-Limarobormes-13b", "TheBloke/Yi-34B-200K-DARE-megamerge-v8-GGUF", "DataPilot/Llama3.1-ArrowSE-v0.4", "NeverSleep/NoromaidxOpenGPT4-2", "Aratako/Ninja-v1-RP-expressive", "yunconglong/DARE_TIES_13B", "BioMistral/BioMistral-7B-DARE-GGUF", "Steelskull/Etheria-55b-v0.1", "Masterjp123/SnowyRP-FinalV1-L2-13B-GGUF", "FoxEngineAi/Mega-Destroyer-8x7B", "Doctor-Shotgun/CalliopeDS-L2-13B", "TheBloke/Etheria-55b-v0.1-GGUF", "TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ", "jeiku/NarrativeNexus_7B", "NeverSleep/NoromaidxOpenGPT4-1-GGUF-iMatrix", "MarinaraSpaghetti/Nemomix-v1.0-12B", "TheBloke/Chronorctypus-Limarobormes-13b-GPTQ", "nold/Yi-34B-200K-RPMerge-GGUF", "Masterjp123/SnowyRP-FinalV1-L2-13B", "Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES", "ChaoticNeutrals/Prima-LelantaclesV5-7b", "saishf/West-Hermes-7B-GGUF", "Nitral-AI/Prima-LelantaclesV6-7b", "Lewdiculous/Prima-LelantaclesV6-7b-GGUF-IQ-Imatrix", "TheBloke/CalliopeDS-L2-13B-GGUF", "iRASC/BioLlama-Ko-8B", "sethuiyer/Medichat-V2-Llama3-8B", "rAIfle/BagelMIsteryTour-v2-8x7B-exl2-rpcal", "ycros/DonutHole-8x7B-GGUF", "jeiku/ToxicNoRobotsRosaHermesBoros_3B_GGUF", "jeiku/Elly_7B", "Aratako/AntlerStar-RP", "S-miguel/The-Trinity-Coder-7B", "NeverSleep/NoromaidxOpenGPT4-1", "Lewdiculous/Elly_7B-GGUF-IQ-Imatrix", "soramikaduki/StarAntler-RP-WestLake-chatvector", "Lewdiculous/Kunocchini-1.2-7b-longtext-GGUF-Imatrix", "lighteternal/Llama3-merge-biomed-8b", "benk04/NoromaidxOpenGPT4-2-3.75bpw-h6-exl2", "TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GGUF", "Undi95/BagelMix-8x7B-GGUF", "Lewdiculous/Prima-LelantaclesV5-7b-GGUF", "Noodlz/DolphinLake-7B", "brucethemoose/Yi-34B-200K-DARE-merge-v7", "ockerman0/MN-12B-Starcannon-v5-unofficial", "saishf/West-Hermes-7B", "ycros/BagelMIsteryTour-8x7B", "Netrve/Loyal-Silicon-Maid-7B", "TheBloke/Chronorctypus-Limarobormes-13b-GGML", "Azazelle/L3-RP_io", "jeiku/ToxicNoRobotsRosaHermesBoros_3B", "intervitens/BagelMIsteryTour-v2-8x7B-3.7bpw-h6-exl2-rpcal", "TheBloke/CalliopeDS-L2-13B-GPTQ", "TheBloke/Yi-34B-200K-DARE-megamerge-v8-GPTQ", "Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ", "starble-dev/Starlight-V3-12B", "nothingiisreal/MN-12B-Starcannon-v3", "RJuro/munin-neuralbeagle-7b-GGUF", "jeiku/SmarterAdult_3B_GGUF", "TheBloke/CalliopeDS-L2-13B-AWQ", "LoneStriker/Yi-34B-200K-RPMerge-GPTQ", "icefog72/IceCocoaRP-7b", "ycros/BagelMIsteryTour-v2-8x7B-AWQ", "TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-AWQ", "yuuko-eth/Monsoon-7B-exp-1-GGUF", "Locutusque/Llama-3-Yggdrasil-8B", "anakin87/Llama-3-8b-ita-ties", "aloobun/CosmicBun-8B", "seyf1elislam/WestKunai-Hermes-7b", "zaq-hack/Llama-3-LewdPlay-8B-evo-8bpw-exl2", "TheBloke/Etheria-55b-v0.1-GPTQ", "kuotient/EEVE-Instruct-Math-10.8B", "nitky/Superswallow-70b-v0.1", "Masterjp123/Llama-3-SnowyRP-8B-V1-B", "grimjim/kunoichi-lemon-royale-7B-GGUF", "Inv/Konstanta-V4-Alpha-7B", "grimjim/kunoichi-lemon-royale-7B", "ND911/Maiden-Unquirked-20B-gguf", "kuotient/Llama-3-Ko-8B-ties", "BioMistral/BioMistral-7B-TIES", "TheBloke/Yi-34B-200K-DARE-megamerge-v8-AWQ", "Envoid/Yousei-22B", 
"intervitens/BagelMIsteryTour-v2-8x7B-3.5bpw-h6-exl2-rpcal", "TheBloke/Chronorctypus-Limarobormes-13b-AWQ", "NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_all_Cluster_dare_ties_v2", "ycros/DonutHole-8x7B", "ChaoticNeutrals/This_is_fine_7B", "yuuko-eth/Monsoon-7B-exp-1", "jeiku/SmarterAdult_3B" ]
[]
[ "open-llm-leaderboard/open_llm_leaderboard", "arcee-ai/mergekit-gui", "Intel/low_bit_open_llm_leaderboard", "eduagarcia/open_pt_llm_leaderboard", "BAAI/open_cn_llm_leaderboard", "gsaivinay/open_llm_leaderboard", "featherless-ai/try-this-model", "GTBench/GTBench", "felixz/open_llm_leaderboard", "OPTML-Group/UnlearnCanvas-Benchmark", "Vikhrmodels/small-shlepa-lb", "mmnga/vocabviewer", "neubla/neubla-llm-evaluation-board", "rodrigomasini/data_only_open_llm_leaderboard", "Docfile/open_llm_leaderboard", "asdfasdfasdfasdf43tr543/Azazelle-L3-RP_io", "Darok/Featherless-Feud", "Njpilot78/NeverSleep-NoromaidxOpenGPT4-2", "smothiki/open_llm_leaderboard", "0x1668/open_llm_leaderboard", "pngwn/open_llm_leaderboard-check", "asir0z/open_llm_leaderboard", "kbmlcoding/open_llm_leaderboard_free", "aichampions/open_llm_leaderboard", "Adeco/open_llm_leaderboard", "anirudh937/open_llm_leaderboard", "smothiki/open_llm_leaderboard2", "hamxa500/sethuiyer-Medichat-V2-Llama3-8B", "K00B404/mergekit_allow_crimes_gui", "DavidAU/mergekit-gui", "Nymbo/mergekit-gui" ]
null
https://openreview.net/forum?id=xtQ9IGRzIW
@inproceedings{ oki2023faster, title={Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case}, author={Taihei Oki and Shinsaku Sakaue}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xtQ9IGRzIW} }
Recent years have seen a growing interest in accelerating optimization algorithms with machine-learned predictions. Sakaue and Oki (NeurIPS 2022) have developed a general framework that warm-starts the *L-convex function minimization* method with predictions, revealing the idea's usefulness for various discrete optimization problems. In this paper, we present a framework for using predictions to accelerate *M-convex function minimization*, thus complementing previous research and extending the range of discrete optimization algorithms that can benefit from predictions. Our framework is particularly effective for an important subclass called *laminar convex minimization*, which appears in many operations research applications. Our methods can improve upon the best worst-case time complexity bounds by using predictions and even have the potential to go beyond a lower-bound result.
Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case
[ "Taihei Oki", "Shinsaku Sakaue" ]
Conference
poster
2306.05865
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xtADRDRsM2
@inproceedings{ zhao2023adversarial, title={Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach}, author={Kai Zhao and Qiyu Kang and Yang Song and Rui She and Sijie Wang and Wee Peng Tay}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xtADRDRsM2} }
Graph neural networks (GNNs) are vulnerable to adversarial perturbations, including those that affect both node features and graph topology. This paper investigates GNNs derived from diverse neural flows, concentrating on their connection to various stability notions such as BIBO stability, Lyapunov stability, structural stability, and conservative stability. We argue that Lyapunov stability, despite its common use, does not necessarily ensure adversarial robustness. Inspired by physics principles, we advocate for the use of conservative Hamiltonian neural flows to construct GNNs that are robust to adversarial attacks. The adversarial robustness of different neural flow GNNs is empirically compared on several benchmark datasets under a variety of adversarial attacks. Extensive numerical experiments demonstrate that GNNs leveraging conservative Hamiltonian flows with Lyapunov stability substantially improve robustness against adversarial perturbations. The implementation code of experiments is available at \url{https://github.com/zknus/NeurIPS-2023-HANG-Robustness}.
Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach
[ "Kai Zhao", "Qiyu Kang", "Yang Song", "Rui She", "Sijie Wang", "Wee Peng Tay" ]
Conference
spotlight
2310.06396
[ "https://github.com/zknus/neurips-2023-hang-robustness" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xrk9g5vcXR
@inproceedings{ chee2023quip, title={Qu{IP}: 2-Bit Quantization of Large Language Models With Guarantees}, author={Jerry Chee and Yaohui Cai and Volodymyr Kuleshov and Christopher De Sa}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xrk9g5vcXR} }
This work studies post-training parameter quantization in large language models (LLMs). We introduce quantization with incoherence processing (QuIP), a new method based on the insight that quantization benefits from incoherent weight and Hessian matrices, i.e., from the weights being even in magnitude and the directions in which it is important to round them accurately being unaligned with the coordinate axes. QuIP consists of two steps: (1) an adaptive rounding procedure minimizing a quadratic proxy objective; (2) efficient pre- and post-processing that ensures weight and Hessian incoherence via multiplication by random orthogonal matrices. We complement QuIP with the first theoretical analysis for an LLM-scale quantization algorithm, and show that our theory also applies to an existing method, OPTQ. Empirically, we find that our incoherence preprocessing improves several existing quantization algorithms and yields the first LLM quantization methods that produce viable results using only two bits per weight. Our code can be found at https://github.com/Cornell-RelaxML/QuIP.
QuIP: 2-Bit Quantization of Large Language Models With Guarantees
[ "Jerry Chee", "Yaohui Cai", "Volodymyr Kuleshov", "Christopher De Sa" ]
Conference
spotlight
2307.13304
[ "https://github.com/Cornell-RelaxML/QuIP" ]
https://huggingface.co/papers/2307.13304
0
2
1
4
1
[]
[]
[]
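The QuIP record above describes two steps: incoherence processing via random orthogonal matrices and an adaptive rounding procedure. The sketch below illustrates only the incoherence pre- and post-processing, wrapped around a deliberately naive nearest-rounding 2-bit quantizer that stands in for the paper's adaptive rounding; the grid and scale choices are illustrative.

```python
# Sketch of QuIP-style incoherence processing around a naive 2-bit rounding step.
# Random orthogonal rotations make the weights incoherent before rounding and are
# undone afterwards; nearest rounding here merely stands in for adaptive rounding.
import numpy as np

def random_orthogonal(n, rng):
    q, r = np.linalg.qr(rng.normal(size=(n, n)))
    return q * np.sign(np.diag(r))             # sign fix so the matrix is Haar-uniform

def quantize_2bit_incoherent(W, rng):
    m, n = W.shape
    U, V = random_orthogonal(m, rng), random_orthogonal(n, rng)
    W_inc = U @ W @ V.T                         # incoherence pre-processing
    scale = np.abs(W_inc).max() / 1.5           # grid {-1.5, -0.5, 0.5, 1.5} * scale
    q = np.clip(np.round(W_inc / scale - 0.5), -2, 1) + 0.5
    return U.T @ (q * scale) @ V                # post-processing undoes the rotations

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W_hat = quantize_2bit_incoherent(W, rng)
print(np.linalg.norm(W - W_hat) / np.linalg.norm(W))   # relative reconstruction error
```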
null
https://openreview.net/forum?id=xrK3QA9mLo
@inproceedings{ wang2023facecomposer, title={FaceComposer: A Unified Model for Versatile Facial Content Creation}, author={Jiayu Wang and Kang Zhao and Yifeng Ma and Shiwei Zhang and Yingya Zhang and Yujun Shen and Deli Zhao and Jingren Zhou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xrK3QA9mLo} }
This work presents FaceComposer, a unified generative model that accomplishes a variety of facial content creation tasks, including text-conditioned face synthesis, text-guided face editing, face animation etc. Based on the latent diffusion framework, FaceComposer follows the paradigm of compositional generation and employs diverse face-specific conditions, e.g., Identity Feature and Projected Normalized Coordinate Code, to unleash the model's creativity as much as possible. To support text control and animation, we clean up some existing face image datasets and collect around 500 hours of talking-face videos, forming a high-quality large-scale multi-modal face database. A temporal self-attention module is incorporated into the U-Net structure, which allows learning the denoising process on the mixture of images and videos. Extensive experiments suggest that our approach not only achieves comparable or even better performance than the state of the art on each individual task, but also facilitates combined tasks in a single forward pass, demonstrating its potential in serving as a foundation generative model in the face domain. We further develop an interface such that users can enjoy our one-step service to create, edit, and animate their own characters. Code, dataset, model, and interface will be made publicly available.
FaceComposer: A Unified Model for Versatile Facial Content Creation
[ "Jiayu Wang", "Kang Zhao", "Yifeng Ma", "Shiwei Zhang", "Yingya Zhang", "Yujun Shen", "Deli Zhao", "Jingren Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xr3KAzboHY
@inproceedings{ lu2023calibrating, title={Calibrating {\textquotedblleft}Cheap Signals{\textquotedblright} in Peer Review without a Prior}, author={Yuxuan Lu and Yuqing Kong}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xr3KAzboHY} }
Peer review lies at the core of the academic process, but even well-intentioned reviewers can still provide noisy ratings. While ranking papers by average ratings may reduce noise, varying noise levels and systematic biases stemming from ``cheap'' signals (e.g. author identity, proof length) can lead to unfairness. Detecting and correcting bias is challenging, as ratings are subjective and unverifiable. Unlike previous works relying on prior knowledge or historical data, we propose a one-shot noise calibration process without any prior information. We ask reviewers to predict others' scores and use these predictions for calibration. Assuming reviewers adjust their predictions according to the noise, we demonstrate that the calibrated score results in a more robust ranking compared to average ratings, even with varying noise levels and biases. In detail, we show that the error probability of the calibrated score approaches zero as the number of reviewers increases and is significantly lower compared to average ratings when the number of reviewers is small.
Calibrating “Cheap Signals” in Peer Review without a Prior
[ "Yuxuan Lu", "Yuqing Kong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xq1QvViDdW
@inproceedings{ jung2023beyond, title={Beyond Unimodal: Generalising Neural Processes for Multimodal Uncertainty Estimation}, author={Myong Chol Jung and He Zhao and Joanna Dipnall and Lan Du}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xq1QvViDdW} }
Uncertainty estimation is an important research area to make deep neural networks (DNNs) more trustworthy. While extensive research on uncertainty estimation has been conducted with unimodal data, uncertainty estimation for multimodal data remains a challenge. Neural processes (NPs) have been demonstrated to be an effective uncertainty estimation method for unimodal data by providing the reliability of Gaussian processes with efficient and powerful DNNs. While NPs hold significant potential for multimodal uncertainty estimation, the adaptation of NPs for multimodal data has not been carefully studied. To bridge this gap, we propose Multimodal Neural Processes (MNPs) by generalising NPs for multimodal uncertainty estimation. Based on the framework of NPs, MNPs consist of several novel and principled mechanisms tailored to the characteristics of multimodal data. In extensive empirical evaluation, our method achieves state-of-the-art multimodal uncertainty estimation performance, showing its appealing robustness against noisy samples and reliability in out-of-distribution detection with faster computation time compared to the current state-of-the-art multimodal uncertainty estimation method.
Beyond Unimodal: Generalising Neural Processes for Multimodal Uncertainty Estimation
[ "Myong Chol Jung", "He Zhao", "Joanna Dipnall", "Lan Du" ]
Conference
poster
2304.01518
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xpjsOQtKqx
@inproceedings{ tian2023stablerep, title={StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners}, author={Yonglong Tian and Lijie Fan and Phillip Isola and Huiwen Chang and Dilip Krishnan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xpjsOQtKqx} }
We investigate the potential of learning visual representations using synthetic images generated by text-to-image models. This is a natural question in the light of the excellent performance of such models in generating high-quality images. We specifically consider Stable Diffusion, one of the leading open-source text-to-image models. We show that (1) when the generative model is properly configured, training self-supervised methods on synthetic images can match or beat the real image counterpart; (2) by treating the multiple images generated from the same text prompt as positives for each other, we develop a multi-positive contrastive learning method, which we call StableRep. With solely synthetic images, the representations learned by StableRep surpass the performance of representations learned by SimCLR and CLIP using the same set of text prompts and corresponding real images, on large scale datasets. When we further add language supervision, StableRep trained with 20M synthetic images (10M captions) achieves better accuracy than CLIP trained with 50M real images (50M captions).
StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners
[ "Yonglong Tian", "Lijie Fan", "Phillip Isola", "Huiwen Chang", "Dilip Krishnan" ]
Conference
poster
2306.00984
[ "https://github.com/google-research/syn-rep-learn" ]
https://huggingface.co/papers/2306.00984
2
4
1
5
1
[]
[]
[]
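The StableRep record above hinges on treating multiple images generated from the same caption as positives for one another. The sketch below is a generic multi-positive contrastive loss written in that spirit; it is not the authors' training code, and the temperature and batch layout are illustrative.

```python
# Sketch of a multi-positive contrastive loss: all images generated from the same
# caption are positives for one another. Generic implementation of the idea only.
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(features, caption_ids, temperature=0.1):
    # features: (B, D) image embeddings; caption_ids: (B,) index of the generating prompt
    z = F.normalize(features, dim=-1)
    logits = z @ z.t() / temperature
    logits.fill_diagonal_(-1e9)                                  # exclude self-similarity
    pos = (caption_ids[:, None] == caption_ids[None, :]).float()
    pos.fill_diagonal_(0.0)
    target = pos / pos.sum(dim=1, keepdim=True).clamp(min=1.0)   # uniform over positives
    return -(target * logits.log_softmax(dim=1)).sum(dim=1).mean()

feats = torch.randn(8, 128)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])                     # two images per caption
print(multi_positive_contrastive_loss(feats, ids))
```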
null
https://openreview.net/forum?id=xo2lbfQE8I
@inproceedings{ yim2023fitting, title={Fitting trees to $\ell_1$-hyperbolic distances}, author={Joon-Hyeok Yim and Anna Gilbert}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xo2lbfQE8I} }
Building trees to represent or to fit distances is a critical component of phylogenetic analysis, metric embeddings, approximation algorithms, geometric graph neural nets, and the analysis of hierarchical data. Much of the previous algorithmic work, however, has focused on generic metric spaces (i.e., those with no \emph{a priori} constraints). Leveraging several ideas from the mathematical analysis of hyperbolic geometry and geometric group theory, we study the tree fitting problem as finding the relation between the hyperbolicity (ultrametricity) vector and the error of tree (ultrametric) embedding. That is, we define a vector of hyperbolicity (ultrametric) values over all triples of points and compare the $\ell_p$ norms of this vector with the $\ell_q$ norm of the distortion of the best tree fit to the distances. This formulation allows us to define the average hyperbolicity (ultrametricity) in terms of a normalized $\ell_1$ norm of the hyperbolicity vector. Furthermore, we can interpret the classical tree fitting result of Gromov as a $p = q = \infty$ result. We present an algorithm \textsc{HCCRootedTreeFit} such that the $\ell_1$ error of the output embedding is analytically bounded in terms of the $\ell_1$-norm of the hyperbolicity vector (i.e., $p = q = 1$) and that this result is tight. Furthermore, this algorithm has significantly different theoretical and empirical performance as compared to Gromov's result and related algorithms. Finally, we show using \textsc{HCCRootedTreeFit} and related tree fitting algorithms, that supposedly standard data sets for hierarchical data analysis and geometric graph neural networks have radically different tree fits than those of synthetic, truly tree-like data sets, suggesting that a much more refined analysis of these standard data sets is called for.
Fitting trees to ℓ_1-hyperbolic distances
[ "Joon-Hyeok Yim", "Anna Gilbert" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xmxgMij3LY
@inproceedings{ zhao2023michelangelo, title={Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation}, author={Zibo Zhao and Wen Liu and Xin Chen and Xianfang Zeng and Rui Wang and Pei Cheng and BIN FU and Tao Chen and Gang YU and Shenghua Gao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xmxgMij3LY} }
We present a novel alignment-before-generation approach to tackle the challenging task of generating general 3D shapes based on 2D images or texts. Directly learning a conditional generative model from images or texts to 3D shapes is prone to producing inconsistent results with the conditions because 3D shapes have an additional dimension whose distribution significantly differs from that of 2D images and texts. To bridge the domain gap among the three modalities and facilitate multi-modal-conditioned 3D shape generation, we explore representing 3D shapes in a shape-image-text-aligned space. Our framework comprises two models: a Shape-Image-Text-Aligned Variational Auto-Encoder (SITA-VAE) and a conditional Aligned Shape Latent Diffusion Model (ASLDM). The former model encodes the 3D shapes into the shape latent space aligned to the image and text and reconstructs the fine-grained 3D neural fields corresponding to given shape embeddings via the transformer-based decoder. The latter model learns a probabilistic mapping function from the image or text space to the latent shape space. Our extensive experiments demonstrate that our proposed approach can generate higher-quality and more diverse 3D shapes that better semantically conform to the visual or textual conditional inputs, validating the effectiveness of the shape-image-text-aligned space for cross-modality 3D shape generation.
Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation
[ "Zibo Zhao", "Wen Liu", "Xin Chen", "Xianfang Zeng", "Rui Wang", "Pei Cheng", "BIN FU", "Tao Chen", "Gang YU", "Shenghua Gao" ]
Conference
poster
2306.17115
[ "https://github.com/neuralcarver/michelangelo" ]
https://huggingface.co/papers/2306.17115
4
11
0
10
1
[ "Maikou/Michelangelo" ]
[]
[ "Maikou/Michelangelo" ]
null
https://openreview.net/forum?id=xkkBFePoFn
@inproceedings{ zha2023text, title={Text Alignment Is An Efficient Unified Model for Massive {NLP} Tasks}, author={Yuheng Zha and Yichi Yang and Ruichen Li and Zhiting Hu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xkkBFePoFn} }
Large language models (LLMs), typically designed as a function of next-word prediction, have excelled across extensive NLP tasks. Despite the generality, next-word prediction is often not an efficient formulation for many of the tasks, demanding an extreme scale of model parameters (10s or 100s of billions) and sometimes yielding suboptimal performance. In practice, it is often desirable to build more efficient models---despite being less versatile, they still apply to a substantial subset of problems, delivering on par or even superior performance with much smaller model sizes. In this paper, we propose text alignment as an efficient unified model for a wide range of crucial tasks involving text entailment, similarity, question answering (and answerability), factual consistency, and so forth. Given a pair of texts, the model measures the degree of alignment between their information. We instantiate an alignment model through lightweight finetuning of RoBERTa (355M parameters) using 5.9M examples from 28 datasets. Despite its compact size, extensive experiments show the model's efficiency and strong performance: (1) On over 20 datasets of aforementioned diverse tasks, the model matches or surpasses FLAN-T5 models that have around 2x or 10x more parameters; the single unified model also outperforms task-specific models finetuned on individual datasets; (2) When applied to evaluate factual consistency of language generation on 23 datasets, our model improves over various baselines, including the much larger GPT-3.5 (ChatGPT) and sometimes even GPT-4; (3) The lightweight model can also serve as an add-on component for LLMs such as GPT-3.5 in question answering tasks, improving the average exact match (EM) score by 17.94 and F1 score by 15.05 through identifying unanswerable questions.
Text Alignment Is An Efficient Unified Model for Massive NLP Tasks
[ "Yuheng Zha", "Yichi Yang", "Ruichen Li", "Zhiting Hu" ]
Conference
poster
2307.02729
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
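The text-alignment record above describes a RoBERTa-scale model that scores the degree of alignment between a pair of texts. The sketch below shows how such a pair classifier could be queried with the `transformers` library; the checkpoint id is a placeholder, and the meaning of the output classes depends on how the model was trained.

```python
# Sketch: scoring the alignment of a text pair with a RoBERTa-scale pair classifier,
# in the spirit of the alignment model described above. The checkpoint id is a
# placeholder; substitute the authors' released model or your own finetuned one.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "your-org/text-alignment-roberta-large"      # placeholder checkpoint id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

context = "The company reported record revenue in 2022."
claim = "Revenue declined in 2022."

inputs = tokenizer(context, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)   # class semantics (e.g. aligned vs. not aligned) depend on the training setup
```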
null
https://openreview.net/forum?id=xgzkuTGBTx
@inproceedings{ baek2023asymptotics, title={Asymptotics of Bayesian Uncertainty Estimation in Random Features Regression}, author={Youngsoo Baek and Samuel Berchuck and Sayan Mukherjee}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xgzkuTGBTx} }
In this paper we compare and contrast the behavior of the posterior predictive distribution to the risk of the maximum a posteriori estimator for the random features regression model in the overparameterized regime. We will focus on the variance of the posterior predictive distribution (Bayesian model average) and compare its asymptotics to that of the risk of the MAP estimator. In the regime where the model dimensions grow faster than any constant multiple of the number of samples, asymptotic agreement between these two quantities is governed by the phase transition in the signal-to-noise ratio. They also asymptotically agree with each other when the number of samples grows faster than any constant multiple of model dimensions. Numerical simulations illustrate finer distributional properties of the two quantities for finite dimensions. We conjecture they have Gaussian fluctuations and exhibit similar properties to those found by previous authors in a Gaussian sequence model; this is of independent theoretical interest.
Asymptotics of Bayesian Uncertainty Estimation in Random Features Regression
[ "Youngsoo Baek", "Samuel Berchuck", "Sayan Mukherjee" ]
Conference
poster
2306.03783
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xgY4QcOiEZ
@inproceedings{ chistikov2023learning, title={Learning a Neuron by a Shallow Re{LU} Network: Dynamics and Implicit Bias for Correlated Inputs}, author={Dmitry Chistikov and Matthias Englert and Ranko Lazic}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xgY4QcOiEZ} }
We prove that, for the fundamental regression task of learning a single neuron, training a one-hidden layer ReLU network of any width by gradient flow from a small initialisation converges to zero loss and is implicitly biased to minimise the rank of network parameters. By assuming that the training points are correlated with the teacher neuron, we complement previous work that considered orthogonal datasets. Our results are based on a detailed non-asymptotic analysis of the dynamics of each hidden neuron throughout the training. We also show and characterise a surprising distinction in this setting between interpolator networks of minimal rank and those of minimal Euclidean norm. Finally we perform a range of numerical experiments, which corroborate our theoretical findings.
Learning a Neuron by a Shallow ReLU Network: Dynamics and Implicit Bias for Correlated Inputs
[ "Dmitry Chistikov", "Matthias Englert", "Ranko Lazic" ]
Conference
poster
2306.06479
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xgTV6rmH6n
@inproceedings{ adriaensen2023efficient, title={Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks}, author={Steven Adriaensen and Herilalaina Rakotoarison and Samuel M{\"u}ller and Frank Hutter}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xgTV6rmH6n} }
Learning curve extrapolation aims to predict model performance in later epochs of training, based on the performance in earlier epochs. In this work, we argue that, while the inherent uncertainty in the extrapolation of learning curves warrants a Bayesian approach, existing methods are (i) overly restrictive, and/or (ii) computationally expensive. We describe the first application of prior-data fitted neural networks (PFNs) in this context. A PFN is a transformer, pre-trained on data generated from a prior, to perform approximate Bayesian inference in a single forward pass. We propose LC-PFN, a PFN trained to extrapolate 10 million artificial right-censored learning curves generated from a parametric prior proposed in prior art using MCMC. We demonstrate that LC-PFN can approximate the posterior predictive distribution more accurately than MCMC, while being over 10 000 times faster. We also show that the same LC-PFN achieves competitive performance extrapolating a total of 20 000 real learning curves from four learning curve benchmarks (LCBench, NAS-Bench-201, Taskset, and PD1) that stem from training a wide range of model architectures (MLPs, CNNs, RNNs, and Transformers) on 53 different datasets with varying input modalities (tabular, image, text, and protein data). Finally, we investigate its potential in the context of model selection and find that a simple LC-PFN based predictive early stopping criterion obtains 2 - 6x speed-ups on 45 of these datasets, at virtually no overhead.
Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks
[ "Steven Adriaensen", "Herilalaina Rakotoarison", "Samuel Müller", "Frank Hutter" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xfBeVGJwyL
@inproceedings{ nicoli2023physicsinformed, title={Physics-Informed Bayesian Optimization of Variational Quantum Circuits}, author={Kim Andrea Nicoli and Christopher J. Anders and Lena Funcke and Tobias Hartung and Karl Jansen and Stefan Kuhn and Klaus Robert Muller and Paolo Stornati and Pan Kessel and Shinichi Nakajima}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xfBeVGJwyL} }
In this paper, we propose a novel and powerful method to harness Bayesian optimization for variational quantum eigensolvers (VQEs) - a hybrid quantum-classical protocol used to approximate the ground state of a quantum Hamiltonian. Specifically, we derive a *VQE-kernel* which incorporates important prior information about quantum circuits: the kernel feature map of the VQE-kernel exactly matches the known functional form of the VQE's objective function and thereby significantly reduces the posterior uncertainty. Moreover, we propose a novel acquisition function for Bayesian optimization called \emph{Expected Maximum Improvement over Confident Regions} (EMICoRe) which can actively exploit the inductive bias of the VQE-kernel by treating regions with low predictive uncertainty as indirectly "observed". As a result, observations at as few as three points in the search domain are sufficient to determine the complete objective function along an entire one-dimensional subspace of the optimization landscape. Our numerical experiments demonstrate that our approach improves over state-of-the-art baselines.
Physics-Informed Bayesian Optimization of Variational Quantum Circuits
[ "Kim Andrea Nicoli", "Christopher J. Anders", "Lena Funcke", "Tobias Hartung", "Karl Jansen", "Stefan Kuhn", "Klaus Robert Muller", "Paolo Stornati", "Pan Kessel", "Shinichi Nakajima" ]
Conference
poster
2406.06150
[ "https://github.com/emicore/emicore" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xdQpmUPNHC
@inproceedings{ guo2023efficient, title={Efficient Symbolic Policy Learning with Differentiable Symbolic Expression}, author={Jiaming Guo and Rui Zhang and Shaohui Peng and Qi Yi and Xing Hu and Ruizhi Chen and Zidong Du and Xishan Zhang and Ling Li and Qi Guo and Yunji Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xdQpmUPNHC} }
Deep reinforcement learning (DRL) has led to a wide range of advances in sequential decision-making tasks. However, the complexity of neural network policies makes them difficult to understand and to deploy with limited computational resources. Currently, employing compact symbolic expressions as symbolic policies is a promising strategy for obtaining simple and interpretable policies. Previous symbolic policy methods usually involve complex training processes and pre-trained neural network policies, which are inefficient and limit the application of symbolic policies. In this paper, we propose an efficient gradient-based learning method named Efficient Symbolic Policy Learning (ESPL) that learns the symbolic policy from scratch in an end-to-end way. We introduce a symbolic network as the search space and employ a path selector to find the compact symbolic policy. By doing so we represent the policy with a differentiable symbolic expression and train it in an off-policy manner, which further improves efficiency. In addition, in contrast with previous symbolic policies which only work in single-task RL because of their complexity, we extend ESPL to meta-RL to generate symbolic policies for unseen tasks. Experimentally, we show that our approach generates symbolic policies with higher performance and greatly improves data efficiency for single-task RL. In meta-RL, we demonstrate that, compared with neural network policies, the proposed symbolic policy achieves higher performance and efficiency and shows the potential to be interpretable.
Efficient Symbolic Policy Learning with Differentiable Symbolic Expression
[ "Jiaming Guo", "Rui Zhang", "Shaohui Peng", "Qi Yi", "Xing Hu", "Ruizhi Chen", "Zidong Du", "Xishan Zhang", "Ling Li", "Qi Guo", "Yunji Chen" ]
Conference
poster
2311.02104
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xdOoCWCYaY
@inproceedings{ pham2023towards, title={Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask?}, author={Hoang Pham and The-Anh Ta and Shiwei Liu and Lichuan Xiang and Dung D. Le and Hongkai Wen and Long Tran-Thanh}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xdOoCWCYaY} }
Pruning at initialization (PaI) aims to remove weights of neural networks before training, in pursuit of training efficiency in addition to inference efficiency. While off-the-shelf PaI methods manage to find trainable subnetworks that outperform random pruning, their performance in terms of both accuracy and computational reduction is far from satisfactory compared to post-training pruning, and a clear understanding of PaI is still missing. For instance, recent studies show that existing PaI methods are only able to find good layerwise sparsities, not good weights, as the discovered subnetworks are surprisingly resilient against layerwise random mask shuffling and weight re-initialization. In this paper, we study PaI from a brand-new perspective -- the topology of subnetworks. In particular, we propose a principled framework for analyzing the performance of PaI methods with two quantities, namely, the number of effective paths and the number of effective nodes. These quantities allow for a more comprehensive understanding of PaI methods, giving us an accurate assessment of different subnetworks at initialization. We systematically analyze the behavior of various PaI methods through our framework and observe a guiding principle for constructing effective subnetworks: *at a specific sparsity, the top-performing subnetwork always presents a good balance between the number of effective nodes and the number of effective paths.* Inspired by this observation, we present a novel data-agnostic pruning method by solving a multi-objective optimization problem. Extensive experiments across different architectures and datasets demonstrate that our approach outperforms state-of-the-art PaI methods while discovering subnetworks with much lower inference FLOPs (up to 3.4$\times$). Code will be fully released.
Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask?
[ "Hoang Pham", "The-Anh Ta", "Shiwei Liu", "Lichuan Xiang", "Dung D. Le", "Hongkai Wen", "Long Tran-Thanh" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xcGhx9FdxM
@inproceedings{ goel2023adversarial, title={Adversarial Resilience in Sequential Prediction via Abstention}, author={Surbhi Goel and Steve Hanneke and Shay Moran and Abhishek Shetty}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xcGhx9FdxM} }
We study the problem of sequential prediction in the stochastic setting with an adversary that is allowed to inject clean-label adversarial (or out-of-distribution) examples. Algorithms designed to handle purely stochastic data tend to fail in the presence of such adversarial examples, often leading to erroneous predictions. This is undesirable in many high-stakes applications such as medical recommendations, where abstaining from predictions on adversarial examples is preferable to misclassification. On the other hand, assuming fully adversarial data leads to very pessimistic bounds that are often vacuous in practice. To move away from these pessimistic guarantees, we propose a new model of sequential prediction that sits between the purely stochastic and fully adversarial settings by allowing the learner to abstain from making a prediction at no cost on adversarial examples, thereby asking the learner to make predictions with certainty. Assuming access to the marginal distribution on the non-adversarial examples, we design a learner whose error scales with the VC dimension (mirroring the stochastic setting) of the hypothesis class, as opposed to the Littlestone dimension which characterizes the fully adversarial setting. Furthermore, we design learners for VC dimension~1 classes and the class of axis-aligned rectangles, which work even in the absence of access to the marginal distribution. Our key technical contribution is a novel measure for quantifying uncertainty for learning VC classes, which may be of independent interest.
Adversarial Resilience in Sequential Prediction via Abstention
[ "Surbhi Goel", "Steve Hanneke", "Shay Moran", "Abhishek Shetty" ]
Conference
poster
2306.13119
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xbbknN9QFs
@inproceedings{ zhao2023on, title={On Evaluating Adversarial Robustness of Large Vision-Language Models}, author={Yunqing Zhao and Tianyu Pang and Chao Du and Xiao Yang and Chongxuan Li and Ngai-man Cheung and Min Lin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xbbknN9QFs} }
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented performance in response generation, especially with visual inputs, enabling more creative and adaptable interaction than large language models such as ChatGPT. Nonetheless, multimodal generation exacerbates safety concerns, since adversaries may successfully evade the entire system by subtly manipulating the most vulnerable modality (e.g., vision). To this end, we propose evaluating the robustness of open-source large VLMs in the most realistic and high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning the targeted responses. In particular, we first craft targeted adversarial examples against pretrained models such as CLIP and BLIP, and then transfer these adversarial examples to other VLMs such as MiniGPT-4, LLaVA, UniDiffuser, BLIP-2, and Img2Prompt. In addition, we observe that black-box queries on these VLMs can further improve the effectiveness of targeted evasion, resulting in a surprisingly high success rate for generating targeted responses. Our findings provide a quantitative understanding regarding the adversarial vulnerability of large VLMs and call for a more thorough examination of their potential security flaws before deployment in practice. Our project page: https://yunqing-me.github.io/AttackVLM/.
On Evaluating Adversarial Robustness of Large Vision-Language Models
[ "Yunqing Zhao", "Tianyu Pang", "Chao Du", "Xiao Yang", "Chongxuan Li", "Ngai-man Cheung", "Min Lin" ]
Conference
poster
2305.16934
[ "https://github.com/yunqing-me/attackvlm" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xax5eWeObb
@inproceedings{ huang2023practical, title={Practical Equivariances via Relational Conditional Neural Processes}, author={Daolang Huang and Manuel Haussmann and Ulpu Remes and S. T. John and Gr{\'e}goire Clart{\'e} and Kevin Sebastian Luck and Samuel Kaski and Luigi Acerbi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xax5eWeObb} }
Conditional Neural Processes (CNPs) are a class of metalearning models popular for combining the runtime efficiency of amortized inference with reliable uncertainty quantification. Many relevant machine learning tasks, such as in spatio-temporal modeling, Bayesian Optimization and continuous control, inherently contain equivariances – for example to translation – which the model can exploit for maximal performance. However, prior attempts to include equivariances in CNPs do not scale effectively beyond two input dimensions. In this work, we propose Relational Conditional Neural Processes (RCNPs), an effective approach to incorporate equivariances into any neural process model. Our proposed method extends the applicability and impact of equivariant neural processes to higher dimensions. We empirically demonstrate the competitive performance of RCNPs on a large array of tasks naturally containing equivariances.
Practical Equivariances via Relational Conditional Neural Processes
[ "Daolang Huang", "Manuel Haussmann", "Ulpu Remes", "S. T. John", "Grégoire Clarté", "Kevin Sebastian Luck", "Samuel Kaski", "Luigi Acerbi" ]
Conference
poster
2306.10915
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xZvGrzRq17
@inproceedings{ fujimoto2023for, title={For {SALE}: State-Action Representation Learning for Deep Reinforcement Learning}, author={Scott Fujimoto and Wei-Di Chang and Edward J. Smith and Shixiang Shane Gu and Doina Precup and David Meger}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xZvGrzRq17} }
In reinforcement learning (RL), representation learning is a proven tool for complex image-based tasks, but is often overlooked for environments with low-level states, such as physical control problems. This paper introduces SALE, a novel approach for learning embeddings that model the nuanced interaction between state and action, enabling effective representation learning from low-level states. We extensively study the design space of these embeddings and highlight important design considerations. We integrate SALE and an adaptation of checkpoints for RL into TD3 to form the TD7 algorithm, which significantly outperforms existing continuous control algorithms. On OpenAI gym benchmark tasks, TD7 has an average performance gain of 276.7% and 50.7% over TD3 at 300k and 5M time steps, respectively, and works in both the online and offline settings.
For SALE: State-Action Representation Learning for Deep Reinforcement Learning
[ "Scott Fujimoto", "Wei-Di Chang", "Edward J. Smith", "Shixiang Shane Gu", "Doina Precup", "David Meger" ]
Conference
poster
2306.02451
[ "https://github.com/sfujim/td7" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xXfDB8kJUs
@inproceedings{ ruben2023learning, title={Learning Curves for Noisy Heterogeneous Feature-Subsampled Ridge Ensembles}, author={Benjamin Samuel Ruben and Cengiz Pehlevan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xXfDB8kJUs} }
Feature bagging is a well-established ensembling method which aims to reduce prediction variance by combining predictions of many estimators trained on subsets or projections of features. Here, we develop a theory of feature-bagging in noisy least-squares ridge ensembles and simplify the resulting learning curves in the special case of equicorrelated data. Using analytical learning curves, we demonstrate that subsampling shifts the double-descent peak of a linear predictor. This leads us to introduce heterogeneous feature ensembling, with estimators built on varying numbers of feature dimensions, as a computationally efficient method to mitigate double-descent. Then, we compare the performance of a feature-subsampling ensemble to a single linear predictor, describing a trade-off between noise amplification due to subsampling and noise reduction due to ensembling. Our qualitative insights carry over to linear classifiers applied to image classification tasks with realistic datasets constructed using a state-of-the-art deep learning feature map.
Learning Curves for Noisy Heterogeneous Feature-Subsampled Ridge Ensembles
[ "Benjamin Samuel Ruben", "Cengiz Pehlevan" ]
Conference
poster
2307.03176
[ "https://github.com/benruben87/Learning-Curves-for-Heterogeneous-Feature-Subsampled-Ridge-Ensembles" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xWCp0uLcpG
@inproceedings{ park2023robust, title={Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy}, author={Dongmin Park and Seola Choi and Doyoung Kim and Hwanjun Song and Jae-Gil Lee}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xWCp0uLcpG} }
Data pruning, which aims to downsize a large training set into a small informative subset, is crucial for reducing the enormous computational costs of modern deep learning. Though large-scale data collections invariably contain annotation noise and numerous robust learning methods have been developed, data pruning for the noise-robust learning scenario has received little attention. With state-of-the-art Re-labeling methods that self-correct erroneous labels while training, it is challenging to identify which subset induces the most accurate re-labeling of erroneous labels in the entire training set. In this paper, we formalize the problem of data pruning with re-labeling. We first show that the likelihood of a training example being correctly re-labeled is proportional to the prediction confidence of its neighborhood in the subset. Therefore, we propose a novel data pruning algorithm, Prune4Rel, that finds a subset maximizing the total neighborhood confidence of all training examples, thereby maximizing the re-labeling accuracy and generalization performance. Extensive experiments on four real and one synthetic noisy datasets show that Prune4Rel outperforms the baselines with Re-labeling models by up to 9.1% as well as those with a standard model by up to 21.6%.
Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy
[ "Dongmin Park", "Seola Choi", "Doyoung Kim", "Hwanjun Song", "Jae-Gil Lee" ]
Conference
poster
2311.01002
[ "https://github.com/kaist-dmlab/Prune4Rel" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xW0ayZxPWs
@inproceedings{ feng2023fair, title={Fair Graph Distillation}, author={Qizhang Feng and Zhimeng Jiang and Ruiquan Li and Yicheng Wang and Na Zou and Jiang Bian and Xia Hu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xW0ayZxPWs} }
As graph neural networks (GNNs) struggle with large-scale graphs due to high computational demands, data distillation for graph data promises to alleviate this issue by distilling a large real graph into a smaller distilled graph while maintaining comparable prediction performance for GNNs trained on both graphs. However, we observe that GNNs trained on distilled graphs may exhibit more severe group fairness problems than those trained on real graphs. Motivated by this observation, we propose \textit{fair graph distillation}, an approach for generating small distilled \textit{fair and informative} graphs based on the graph distillation method. The challenge lies in the deficiency of sensitive attributes for nodes in the distilled graph, making most debiasing methods (e.g., regularization and adversarial debiasing) intractable for distilled graphs. We develop a simple yet effective bias metric, called coherence, for distilled graphs. Based on the proposed coherence metric, we introduce a framework for fair graph distillation using a bi-level optimization algorithm. Extensive experiments demonstrate that the proposed algorithm can achieve better prediction performance-fairness trade-offs across various datasets and GNN architectures.
Fair Graph Distillation
[ "Qizhang Feng", "Zhimeng Jiang", "Ruiquan Li", "Yicheng Wang", "Na Zou", "Jiang Bian", "Xia Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xUyBP16Q5J
@inproceedings{ yao2023rethinking, title={Rethinking Incentives in Recommender Systems: Are Monotone Rewards Always Beneficial?}, author={Fan Yao and Chuanhao Li and Karthik Abinav Sankararaman and Yiming Liao and Yan Zhu and Qifan Wang and Hongning Wang and Haifeng Xu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xUyBP16Q5J} }
The past decade has witnessed the flourishing of media content creation as a new profession, with creators relying on revenue streams from online content recommendation platforms. The reward mechanism employed by these platforms creates a competitive environment among creators which affects their production choices and, consequently, content distribution and system welfare. It is thus crucial to design the platform's reward mechanism in order to steer the creators' competition towards a desirable welfare outcome in the long run. This work makes two major contributions in this regard: first, we uncover a fundamental limit of a class of widely adopted mechanisms, coined \emph{Merit-based Monotone Mechanisms}, by showing that they inevitably lead to a constant-fraction loss of the optimal welfare. To circumvent this limitation, we introduce \emph{Backward Rewarding Mechanisms} (BRMs) and show that the competition game resulting from BRMs possesses a potential game structure. BRMs thus naturally induce strategic creators' collective behaviors towards optimizing the potential function, which can be designed to match any given welfare metric. In addition, the class of BRMs can be parameterized so that it allows the platform to directly optimize welfare within the feasible mechanism space even when the welfare metric is not explicitly defined.
Rethinking Incentives in Recommender Systems: Are Monotone Rewards Always Beneficial?
[ "Fan Yao", "Chuanhao Li", "Karthik Abinav Sankararaman", "Yiming Liao", "Yan Zhu", "Qifan Wang", "Hongning Wang", "Haifeng Xu" ]
Conference
poster
2306.07893
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xTgM7XLN9P
@inproceedings{ guo2023compact, title={Compact Neural Volumetric Video Representations with Dynamic Codebooks}, author={Haoyu Guo and Sida Peng and Yunzhi Yan and Linzhan Mou and Yujun Shen and Hujun Bao and Xiaowei Zhou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xTgM7XLN9P} }
This paper addresses the challenge of representing high-fidelity volumetric videos with low storage cost. Some recent feature grid-based methods have shown superior performance in quickly learning implicit neural representations from input 2D images. However, such explicit representations easily lead to large model sizes when modeling dynamic scenes. To solve this problem, our key idea is to reduce the spatial and temporal redundancy of feature grids, which intrinsically exists due to the self-similarity of scenes. To this end, we propose a novel neural representation, named dynamic codebook, which first merges similar features for model compression and then compensates for the potential decline in rendering quality with a set of dynamic codes. Experiments on the NHR and DyNeRF datasets demonstrate that the proposed approach achieves state-of-the-art rendering quality while achieving higher storage efficiency. The source code is available at https://github.com/zju3dv/compact_vv.
Compact Neural Volumetric Video Representations with Dynamic Codebooks
[ "Haoyu Guo", "Sida Peng", "Yunzhi Yan", "Linzhan Mou", "Yujun Shen", "Hujun Bao", "Xiaowei Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xSEhb2j3TK
@inproceedings{ jin2023act, title={Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs}, author={Peng Jin and Yang Wu and Yanbo Fan and Zhongqian Sun and Yang Wei and Li Yuan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xSEhb2j3TK} }
Most text-driven human motion generation methods employ sequential modeling approaches, e.g., transformers, to extract sentence-level text representations automatically and implicitly for human motion synthesis. However, these compact text representations may overemphasize the action names at the expense of other important properties and lack fine-grained details to guide the synthesis of subtly distinct motion. In this paper, we propose hierarchical semantic graphs for fine-grained control over motion generation. Specifically, we disentangle motion descriptions into hierarchical semantic graphs including three levels of motions, actions, and specifics. Such global-to-local structures facilitate a comprehensive understanding of motion description and fine-grained control of motion generation. Correspondingly, to leverage the coarse-to-fine topology of hierarchical semantic graphs, we decompose the text-to-motion diffusion process into three semantic levels, which correspond to capturing the overall motion, local actions, and action specifics. Extensive experiments on two benchmark human motion datasets, HumanML3D and KIT, demonstrate superior performance and justify the efficacy of our method. More encouragingly, by modifying the edge weights of hierarchical semantic graphs, our method can continuously refine the generated motion, which may have a far-reaching impact on the community. Code and pre-trained weights are available at https://github.com/jpthu17/GraphMotion.
Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs
[ "Peng Jin", "Yang Wu", "Yanbo Fan", "Zhongqian Sun", "Yang Wei", "Li Yuan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xRfTcZdQxq
@inproceedings{ jiang2023robust, title={Robust Model Reasoning and Fitting via Dual Sparsity Pursuit}, author={Xingyu Jiang and Jiayi Ma}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xRfTcZdQxq} }
In this paper, we contribute to solving a threefold problem: outlier rejection, true model reasoning, and parameter estimation within a unified optimization model. To this end, we first pose this task as a sparse subspace recovery problem, searching for a maximal set of independent bases in an over-embedded data space. We then convert the objective into a continuous optimization paradigm that estimates sparse solutions for both the bases and the errors, and propose a fast and robust solver that accurately estimates the sparse subspace parameters and error entries, implemented via a proximal approximation method under an alternating optimization framework with ``optimal'' sub-gradient descent. Extensive experiments on known and unknown model fitting, using synthetic and challenging real datasets, demonstrate the superiority of our method over the state-of-the-art. We also apply our method to multi-class multi-model fitting and loop closure detection, and achieve promising results in both accuracy and efficiency. Code is released at: https://github.com/StaRainJ/DSP.
Robust Model Reasoning and Fitting via Dual Sparsity Pursuit
[ "Xingyu Jiang", "Jiayi Ma" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xQOHOpe1Fv
@inproceedings{ masarczyk2023the, title={The Tunnel Effect: Building Data Representations in Deep Neural Networks}, author={Wojciech Masarczyk and Mateusz Ostaszewski and Ehsan Imani and Razvan Pascanu and Piotr Mi{\l}o{\'s} and Tomasz Trzcinski}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xQOHOpe1Fv} }
Deep neural networks are widely known for their remarkable effectiveness across various tasks, with the consensus that deeper networks implicitly learn more complex data representations. This paper shows that sufficiently deep networks trained for supervised image classification split into two distinct parts that contribute to the resulting data representations differently. The initial layers create linearly-separable representations, while the subsequent layers, which we refer to as \textit{the tunnel}, compress these representations and have a minimal impact on the overall performance. We explore the tunnel's behavior through comprehensive empirical studies, highlighting that it emerges early in the training process. Its depth depends on the relation between the network's capacity and task complexity. Furthermore, we show that the tunnel degrades out-of-distribution generalization and discuss its implications for continual learning.
The Tunnel Effect: Building Data Representations in Deep Neural Networks
[ "Wojciech Masarczyk", "Mateusz Ostaszewski", "Ehsan Imani", "Razvan Pascanu", "Piotr Miłoś", "Tomasz Trzcinski" ]
Conference
poster
2305.19753
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xPqINp0Eu1
@inproceedings{ wang2023stability, title={Stability of Random Forests and Coverage of Random-Forest Prediction Intervals}, author={Yan Wang and Huaiqing Wu and Dan Nettleton}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xPqINp0Eu1} }
We establish stability of random forests under the mild condition that the squared response ($Y^2$) does not have a heavy tail. In particular, our analysis holds for the practical version of random forests that is implemented in popular packages like \texttt{randomForest} in \texttt{R}. Empirical results show that stability may persist even beyond our assumption and hold for heavy-tailed $Y^2$. Using the stability property, we prove a non-asymptotic lower bound for the coverage probability of prediction intervals constructed from the out-of-bag error of random forests. With another mild condition that is typically satisfied when $Y$ is continuous, we also establish a complementary upper bound, which can be similarly established for the jackknife prediction interval constructed from an arbitrary stable algorithm. We also discuss the asymptotic coverage probability under assumptions weaker than those considered in previous literature. Our work implies that random forests, with its stability property, is an effective machine learning method that can provide not only satisfactory point prediction but also justified interval prediction at almost no extra computational cost.
Stability of Random Forests and Coverage of Random-Forest Prediction Intervals
[ "Yan Wang", "Huaiqing Wu", "Dan Nettleton" ]
Conference
poster
2310.18814
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xPLaXSuSvQ
@inproceedings{ misiakos2023learning, title={Learning {DAG}s from Data with Few Root Causes}, author={Panagiotis Misiakos and Chris Wendler and Markus P{\"u}schel}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xPLaXSuSvQ} }
We present a novel perspective and algorithm for learning directed acyclic graphs (DAGs) from data generated by a linear structural equation model (SEM). First, we show that a linear SEM can be viewed as a linear transform that, in prior work, computes the data from a dense input vector of random valued root causes (as we will call them) associated with the nodes. Instead, we consider the case of (approximately) few root causes and also introduce noise in the measurement of the data. Intuitively, this means that the DAG data is produced by few data generating events whose effect percolates through the DAG. We prove identifiability in this new setting and show that the true DAG is the global minimizer of the $L^0$-norm of the vector of root causes. For data satisfying the few root causes assumption, we show superior performance compared to prior DAG learning methods.
Learning DAGs from Data with Few Root Causes
[ "Panagiotis Misiakos", "Chris Wendler", "Markus Püschel" ]
Conference
poster
2305.15936
[ "https://github.com/pmisiakos/SparseRC" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xOzlW2vUYc
@inproceedings{ huang2023crossgnn, title={Cross{GNN}: Confronting Noisy Multivariate Time Series Via Cross Interaction Refinement}, author={Qihe Huang and Lei Shen and Ruixin Zhang and Shouhong Ding and Binwu Wang and Zhengyang Zhou and Yang Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xOzlW2vUYc} }
Recently, multivariate time series (MTS) forecasting techniques have seen rapid development and widespread application across various fields. Transformer-based and GNN-based methods have shown promising potential due to their strong ability to model interactions across time and variables. However, through a comprehensive analysis of real-world data, we observe that temporal fluctuations and the heterogeneity between variables are not well handled by existing methods. To address these issues, we propose CrossGNN, a linear-complexity GNN model that refines cross-scale and cross-variable interaction for MTS. To deal with unexpected noise in the time dimension, an adaptive multi-scale identifier (AMSI) is leveraged to construct multi-scale time series with reduced noise. A Cross-Scale GNN is proposed to extract the scales with a clearer trend and weaker noise. A Cross-Variable GNN is proposed to utilize the homogeneity and heterogeneity between different variables. By simultaneously focusing on edges with higher saliency scores and constraining edges with lower scores, the time and space complexity of CrossGNN is linear (i.e., $O(L)$) in the input sequence length $L$. Extensive experimental results on 8 real-world MTS datasets demonstrate the effectiveness of CrossGNN compared with state-of-the-art methods.
CrossGNN: Confronting Noisy Multivariate Time Series Via Cross Interaction Refinement
[ "Qihe Huang", "Lei Shen", "Ruixin Zhang", "Shouhong Ding", "Binwu Wang", "Zhengyang Zhou", "Yang Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xOJUmwwlJc
@inproceedings{ xiong2023proximityinformed, title={Proximity-Informed Calibration for Deep Neural Networks}, author={Miao Xiong and Ailin Deng and Pang Wei Koh and Jiaying Wu and Shen Li and Jianqing Xu and Bryan Hooi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xOJUmwwlJc} }
Confidence calibration is central to providing accurate and interpretable uncertainty estimates, especially under safety-critical scenarios. However, we find that existing calibration algorithms often overlook the issue of proximity bias, a phenomenon where models tend to be more overconfident in low proximity data (i.e., data lying in the sparse region of the data distribution) compared to high proximity samples, and thus suffer from inconsistent miscalibration across different proximity samples. We examine the problem over $504$ pretrained ImageNet models and observe that: 1) Proximity bias exists across a wide variety of model architectures and sizes; 2) Transformer-based models are relatively more susceptible to proximity bias than CNN-based models; 3) Proximity bias persists even after performing popular calibration algorithms like temperature scaling; 4) Models tend to overfit more heavily on low proximity samples than on high proximity samples. Motivated by the empirical findings, we propose ProCal, a plug-and-play algorithm with a theoretical guarantee to adjust sample confidence based on proximity. To further quantify the effectiveness of calibration algorithms in mitigating proximity bias, we introduce proximity-informed expected calibration error (PIECE) with theoretical analysis. We show that ProCal is effective in addressing proximity bias and improving calibration on balanced, long-tail, and distribution-shift settings under four metrics over various model architectures. We believe our findings on proximity bias will guide the development of fairer and better-calibrated models, contributing to the broader pursuit of trustworthy AI.
Proximity-Informed Calibration for Deep Neural Networks
[ "Miao Xiong", "Ailin Deng", "Pang Wei Koh", "Jiaying Wu", "Shen Li", "Jianqing Xu", "Bryan Hooi" ]
Conference
spotlight
2306.04590
[ "https://github.com/miaoxiong2320/proximitybias-calibration" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xNyR7DXUzJ
@inproceedings{ scardigli2023rlbased, title={{RL}-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing}, author={Antoine Scardigli and Lukas Cavigelli and Lorenz K Muller}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xNyR7DXUzJ} }
Monte-Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts, limiting its use in real-time applications. To address this, we propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network. Our approach uses reinforcement learning to optimize the sampling importance network, thus avoiding explicit numerically approximated gradients. Our method does not aggregate the sampled values per pixel by averaging but keeps all sampled values which are then fed into the latent space encoder. The encoder replaces handcrafted spatiotemporal heuristics by learned representations in a latent space. Finally, a neural denoiser is trained to refine the output image. Our approach increases visual quality on several challenging datasets and reduces rendering times for equal quality by a factor of 1.6x compared to the previous state-of-the-art, making it a promising solution for real-time applications.
RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing
[ "Antoine Scardigli", "Lukas Cavigelli", "Lorenz K Muller" ]
Conference
poster
2310.03507
[ "https://github.com/ajsvb/rl_path_tracing" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xNUmTRYtV1
@inproceedings{ pak2023optimal, title={Optimal Algorithms for the Inhomogeneous Spiked Wigner Model}, author={Alexander Pak and Justin Ko and Florent Krzakala}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xNUmTRYtV1} }
We study a spiked Wigner problem with an inhomogeneous noise profile. Our aim in this problem is to recover the signal passed through an inhomogeneous low-rank matrix channel. While the information-theoretic performances are well-known, we focus on the algorithmic problem. First, we derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem and show that its rigorous state evolution coincides with the information-theoretic optimal Bayes fixed-point equations. Second, we deduce a simple and efficient spectral method that outperforms PCA and is shown to match the information-theoretic transition.
Optimal Algorithms for the Inhomogeneous Spiked Wigner Model
[ "Alexander Pak", "Justin Ko", "Florent Krzakala" ]
Conference
poster
2302.06665
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xMgO04HDOS
@inproceedings{ yang2023hierarchical, title={Hierarchical Multi-Agent Skill Discovery}, author={Mingyu Yang and Yaodong Yang and Zhenbo Lu and Wengang Zhou and Houqiang Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xMgO04HDOS} }
Skill discovery has shown significant progress in unsupervised reinforcement learning. This approach enables the discovery of a wide range of skills without any extrinsic reward, which can be effectively combined to tackle complex tasks. However, such unsupervised skill learning has not been well applied to multi-agent reinforcement learning (MARL) due to two primary challenges. One is how to learn skills not only for the individual agents but also for the entire team, and the other is how to coordinate the skills of different agents to accomplish multi-agent tasks. To address these challenges, we present Hierarchical Multi-Agent Skill Discovery (HMASD), a two-level hierarchical algorithm for discovering both team and individual skills in MARL. The high-level policy employs a transformer structure to realize sequential skill assignment, while the low-level policy learns to discover valuable team and individual skills. We evaluate HMASD on sparse reward multi-agent benchmarks, and the results show that HMASD achieves significant performance improvements compared to strong MARL baselines.
Hierarchical Multi-Agent Skill Discovery
[ "Mingyu Yang", "Yaodong Yang", "Zhenbo Lu", "Wengang Zhou", "Houqiang Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xJLEQQrFia
@inproceedings{ kang2023knowledgeaugmented, title={Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks}, author={Minki Kang and Seanie Lee and Jinheon Baek and Kenji Kawaguchi and Sung Ju Hwang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xJLEQQrFia} }
Large Language Models (LLMs) have shown promising performance in knowledge-intensive reasoning tasks that require a compound understanding of knowledge. However, deploying LLMs in real-world applications can be challenging due to their high computational requirements and concerns about data privacy. Previous studies have focused on building task-specific small language models (LMs) by fine-tuning them with labeled data or distilling LLMs. However, these approaches are ill-suited for knowledge-intensive reasoning tasks due to the limited capacity of small LMs for memorizing the required knowledge. Motivated by our theoretical analysis of memorization, we propose Knowledge-Augmented Reasoning Distillation (KARD), a novel method that fine-tunes small LMs to generate rationales obtained from LLMs with augmented knowledge retrieved from an external knowledge base. Moreover, we further propose a neural reranker to obtain documents relevant to rationale generation. We empirically show that KARD significantly improves the performance of small T5 and GPT models on the challenging knowledge-intensive reasoning datasets MedQA-USMLE, StrategyQA, and OpenbookQA. Notably, our method enables 250M T5 models to achieve superior performance over fine-tuned 3B models, which have 12 times more parameters, on both the MedQA-USMLE and StrategyQA benchmarks.
Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks
[ "Minki Kang", "Seanie Lee", "Jinheon Baek", "Kenji Kawaguchi", "Sung Ju Hwang" ]
Conference
poster
2305.18395
[ "https://github.com/nardien/kard" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xINPCvgULc
@inproceedings{ saday2023robust, title={Robust Bayesian Satisficing}, author={Artun Saday and Y. Cahit Y{\i}ld{\i}r{\i}m and Cem Tekin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xINPCvgULc} }
Distributional shifts pose a significant challenge to achieving robustness in contemporary machine learning. To overcome this challenge, robust satisficing (RS) seeks a robust solution to an unspecified distributional shift while achieving a utility above a desired threshold. This paper focuses on the problem of RS in contextual Bayesian optimization when there is a discrepancy between the true and reference distributions of the context. We propose a novel robust Bayesian satisficing algorithm called RoBOS for noisy black-box optimization. Our algorithm guarantees sublinear lenient regret under certain assumptions on the amount of distribution shift. In addition, we define a weaker notion of regret called robust satisficing regret, in which our algorithm achieves a sublinear upper bound independent of the amount of distribution shift. To demonstrate the effectiveness of our method, we apply it to various learning problems and compare it to other approaches, such as distributionally robust optimization.
Robust Bayesian Satisficing
[ "Artun Saday", "Y. Cahit Yıldırım", "Cem Tekin" ]
Conference
poster
2308.08291
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xHNzWHbklj
@inproceedings{ yu2023towards, title={Towards Better Dynamic Graph Learning: New Architecture and Unified Library}, author={Le Yu and Leilei Sun and Bowen Du and Weifeng Lv}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xHNzWHbklj} }
We propose DyGFormer, a new Transformer-based architecture for dynamic graph learning. DyGFormer is conceptually simple and only needs to learn from nodes' historical first-hop interactions by: (1) a neighbor co-occurrence encoding scheme that explores the correlations of the source node and destination node based on their historical sequences; (2) a patching technique that divides each sequence into multiple patches and feeds them to Transformer, allowing the model to effectively and efficiently benefit from longer histories. We also introduce DyGLib, a unified library with standard training pipelines, extensible coding interfaces, and comprehensive evaluating protocols to promote reproducible, scalable, and credible dynamic graph learning research. By performing exhaustive experiments on thirteen datasets for dynamic link prediction and dynamic node classification tasks, we find that DyGFormer achieves state-of-the-art performance on most of the datasets, demonstrating its effectiveness in capturing nodes' correlations and long-term temporal dependencies. Moreover, some results of baselines are inconsistent with previous reports, which may be caused by their diverse but less rigorous implementations, showing the importance of DyGLib. All the used resources are publicly available at https://github.com/yule-BUAA/DyGLib.
Towards Better Dynamic Graph Learning: New Architecture and Unified Library
[ "Le Yu", "Leilei Sun", "Bowen Du", "Weifeng Lv" ]
Conference
poster
2303.13047
[ "https://github.com/yule-buaa/dyglib" ]
https://huggingface.co/papers/2303.13047
0
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=xGz0wAIJrS
@inproceedings{ das2023stateexplanation, title={State2Explanation: Concept-Based Explanations to Benefit Agent Learning and User Understanding}, author={Devleena Das and Sonia Chernova and Been Kim}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xGz0wAIJrS} }
As more non-AI experts use complex AI systems for daily tasks, there has been an increasing effort to develop methods that produce explanations of AI decision making that are understandable by non-AI experts. Towards this effort, leveraging higher-level concepts and producing concept-based explanations has become a popular method. Most concept-based explanations have been developed for classification techniques, and we posit that the few existing methods for sequential decision making are limited in scope. In this work, we first contribute desiderata for defining ``concepts'' in sequential decision making settings. Additionally, inspired by the Protégé Effect, which states that explaining knowledge often reinforces one's own learning, we explore how concept-based explanations of an RL agent's decision making can in turn improve the agent's learning rate, as well as improve end-user understanding of the agent's decision making. To this end, we contribute a unified framework, State2Explanation (S2E), that involves learning a joint embedding model between state-action pairs and concept-based explanations, and leveraging the learned model to both (1) inform reward shaping during an agent's training, and (2) provide explanations to end-users at deployment for improved task performance. Our experimental validations in Connect 4 and Lunar Lander demonstrate the success of S2E in providing a dual benefit: successfully informing reward shaping and improving the agent's learning rate, as well as significantly improving end-user task performance at deployment time.
State2Explanation: Concept-Based Explanations to Benefit Agent Learning and User Understanding
[ "Devleena Das", "Sonia Chernova", "Been Kim" ]
Conference
poster
2309.12482
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xFtuNq23D5
@inproceedings{ yu2023boosting, title={Boosting Spectral Clustering on Incomplete Data via Kernel Correction and Affinity Learning}, author={Fangchen Yu and Runze Zhao and Zhan Shi and Yiwen Lu and Jicong Fan and Yicheng Zeng and Jianfeng Mao and Wenye Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xFtuNq23D5} }
Spectral clustering has gained popularity for clustering non-convex data due to its simplicity and effectiveness. It is essential to construct a similarity graph using a high-quality affinity measure that models the local neighborhood relations among the data samples. However, incomplete data can lead to inaccurate affinity measures, resulting in degraded clustering performance. To address these issues, we propose an imputation-free framework with two novel approaches to improve spectral clustering on incomplete data. Firstly, we introduce a new kernel correction method that enhances the quality of the kernel matrix estimated on incomplete data with a theoretical guarantee, benefiting classical spectral clustering on pre-defined kernels. Secondly, we develop a series of affinity learning methods that equip the self-expressive framework with $\ell_p$-norm to construct an intrinsic affinity matrix with an adaptive extension. Our methods outperform existing data imputation and distance calibration techniques on benchmark datasets, offering a promising solution to spectral clustering on incomplete data in various real-world applications.
Boosting Spectral Clustering on Incomplete Data via Kernel Correction and Affinity Learning
[ "Fangchen Yu", "Runze Zhao", "Zhan Shi", "Yiwen Lu", "Jicong Fan", "Yicheng Zeng", "Jianfeng Mao", "Wenye Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xF89MjFbWp
@inproceedings{ qin2023kullbackleibler, title={Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards}, author={Hao Qin and Kwang-Sung Jun and Chicheng Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xF89MjFbWp} }
We study $K$-armed bandit problems where the reward distributions of the arms are all supported on the $[0,1]$ interval. Maillard sampling \cite{maillard13apprentissage}, an attractive alternative to Thompson sampling, has recently been shown to achieve competitive regret guarantees in the sub-Gaussian reward setting \cite{bian2022maillard} while maintaining closed-form action probabilities, which is useful for offline policy evaluation. In this work, we analyze the Kullback-Leibler Maillard Sampling (KL-MS) algorithm, a natural extension of Maillard sampling and a special case of Minimum Empirical Divergence (MED) \cite{honda2011asymptotically}, for achieving a KL-style finite-time gap-dependent regret bound. We show that KL-MS enjoys asymptotic optimality when the rewards are Bernoulli and has an adaptive worst-case regret bound of the form $O(\sqrt{\mu^*(1-\mu^*) K T \ln K} + K \ln T)$, where $\mu^*$ is the expected reward of the optimal arm and $T$ is the time horizon length; this is the first time such adaptivity is reported in the literature for an algorithm with asymptotic optimality guarantees.
Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards
[ "Hao Qin", "Kwang-Sung Jun", "Chicheng Zhang" ]
Conference
poster
2304.14989
[ "https://github.com/MjolnirT/Kullback-Leibler-Maillard-Sampling" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xEhKwsqxMa
@inproceedings{ li2023dissecting, title={Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning}, author={Yingcong Li and Kartik Sreenivasan and Angeliki Giannou and Dimitris Papailiopoulos and Samet Oymak}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xEhKwsqxMa} }
Chain-of-thought (CoT) is a method that enables language models to handle complex reasoning tasks by decomposing them into simpler steps. Despite its success, the underlying mechanics of CoT are not yet fully understood. In an attempt to shed light on this, our study investigates the impact of CoT on the ability of transformers to in-context learn a simple to study, yet general family of compositional functions: multi-layer perceptrons (MLPs). In this setting, we find that the success of CoT can be attributed to breaking down in-context learning of a compositional function into two distinct phases: focusing on and filtering data related to each step of the composition and in-context learning the single-step composition function. Through both experimental and theoretical evidence, we demonstrate how CoT significantly reduces the sample complexity of in-context learning (ICL) and facilitates the learning of complex functions that non-CoT methods struggle with. Furthermore, we illustrate how transformers can transition from vanilla in-context learning to mastering a compositional function with CoT by simply incorporating additional layers that perform the necessary data-filtering for CoT via the attention mechanism. In addition to these test-time benefits, we show CoT helps accelerate pretraining by learning shortcuts to represent complex functions and filtering plays an important role in this process. These findings collectively provide insights into the mechanics of CoT, inviting further investigation of its role in complex reasoning tasks.
Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning
[ "Yingcong Li", "Kartik Sreenivasan", "Angeliki Giannou", "Dimitris Papailiopoulos", "Samet Oymak" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xE7oH5iVGK
@inproceedings{ nguyen2023lvmmed, title={{LVM}-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching}, author={Duy Minh Ho Nguyen and Hoang Nguyen and Nghiem Tuong Diep and Tan Ngoc Pham and Tri Cao and Binh T. Nguyen and Paul Swoboda and Nhat Ho and Shadi Albarqouni and Pengtao Xie and Daniel Sonntag and Mathias Niepert}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xE7oH5iVGK} }
Obtaining large pre-trained models that can be fine-tuned to new tasks with limited annotated samples has remained an open challenge for medical imaging data. While pre-trained networks on ImageNet and vision-language foundation models trained on web-scale data are the prevailing approaches, their effectiveness on medical tasks is limited due to the significant domain shift between natural and medical images. To bridge this gap, we introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets. We have collected approximately 1.3 million medical images from 55 publicly available datasets, covering a large number of organs and modalities such as CT, MRI, X-ray, and Ultrasound. We benchmark several state-of-the-art self-supervised algorithms on this dataset and propose a novel self-supervised contrastive learning algorithm using a graph-matching formulation. The proposed approach makes three contributions: (i) it integrates prior pair-wise image similarity metrics based on local and global information; (ii) it captures the structural constraints of feature embeddings through a loss function constructed through a combinatorial graph-matching objective, and (iii) it can be trained efficiently end-to-end using modern gradient-estimation techniques for black-box solvers. We thoroughly evaluate the proposed LVM-Med on 15 downstream medical tasks ranging from segmentation and classification to object detection, and both for the in and out-of-distribution settings. LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models. For challenging tasks such as Brain Tumor Classification or Diabetic Retinopathy Grading, LVM-Med improves previous vision-language models trained on 1 billion masks by 6-7% while using only a ResNet-50.
LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
[ "Duy Minh Ho Nguyen", "Hoang Nguyen", "Nghiem Tuong Diep", "Tan Ngoc Pham", "Tri Cao", "Binh T. Nguyen", "Paul Swoboda", "Nhat Ho", "Shadi Albarqouni", "Pengtao Xie", "Daniel Sonntag", "Mathias Niepert" ]
Conference
poster
2306.11925
[ "https://github.com/duyhominhnguyen/LVM-Med" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xDHzQQ4lnC
@inproceedings{ straub2023probabilistic, title={Probabilistic inverse optimal control for non-linear partially observable systems disentangles perceptual uncertainty and behavioral costs}, author={Dominik Straub and Matthias Schultheis and Heinz Koeppl and Constantin A. Rothkopf}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xDHzQQ4lnC} }
Inverse optimal control can be used to characterize behavior in sequential decision-making tasks. Most existing work, however, is limited to fully observable or linear systems, or requires the action signals to be known. Here, we introduce a probabilistic approach to inverse optimal control for partially observable stochastic non-linear systems with unobserved action signals, which unifies previous approaches to inverse optimal control with maximum causal entropy formulations. Using an explicit model of the noise characteristics of the sensory and motor systems of the agent in conjunction with local linearization techniques, we derive an approximate likelihood function for the model parameters, which can be computed within a single forward pass. We present quantitative evaluations on stochastic and partially observable versions of two classic control tasks and two human behavioral tasks. Importantly, we show that our method can disentangle perceptual factors and behavioral costs despite the fact that epistemic and pragmatic actions are intertwined in sequential decision-making under uncertainty, such as in active sensing and active learning. The proposed method has broad applicability, ranging from imitation learning to sensorimotor neuroscience.
Probabilistic inverse optimal control for non-linear partially observable systems disentangles perceptual uncertainty and behavioral costs
[ "Dominik Straub", "Matthias Schultheis", "Heinz Koeppl", "Constantin A. Rothkopf" ]
Conference
poster
2303.16698
[ "https://github.com/rothkopflab/nioc-neurips" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xBqjoG0NxM
@inproceedings{ wang2023soda, title={{SODA}: Robust Training of Test-Time Data Adaptors}, author={Zige Wang and Yonggang Zhang and Zhen Fang and Long Lan and Wenjing Yang and Bo Han}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xBqjoG0NxM} }
Adapting models deployed to test distributions can mitigate the performance degradation caused by distribution shifts. However, privacy concerns may render model parameters inaccessible. One promising approach involves utilizing zeroth-order optimization (ZOO) to train a data adaptor to adapt the test data to fit the deployed models. Nevertheless, the data adaptor trained with ZOO typically brings restricted improvements due to the potential corruption of data features caused by the data adaptor. To address this issue, we revisit ZOO in the context of test-time data adaptation. We find that the issue directly stems from the unreliable estimation of the gradients used to optimize the data adaptor, which is inherently due to the unreliable nature of the pseudo-labels assigned to the test data. Based on this observation, we propose pseudo-label-robust data adaptation (SODA) to improve the performance of data adaptation. Specifically, SODA leverages high-confidence predicted labels as reliable labels to optimize the data adaptor with ZOO for label prediction. For data with low-confidence predictions, SODA encourages the adaptor to preserve data information to mitigate data corruption. Empirical results indicate that SODA can significantly enhance the performance of deployed models in the presence of distribution shifts without requiring access to model parameters.
SODA: Robust Training of Test-Time Data Adaptors
[ "Zige Wang", "Yonggang Zhang", "Zhen Fang", "Long Lan", "Wenjing Yang", "Bo Han" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xBhvMu4J03
@inproceedings{ willig2023do, title={Do Not Marginalize Mechanisms, Rather Consolidate!}, author={Moritz Willig and Matej Ze{\v{c}}evi{\'c} and Devendra Singh Dhami and Kristian Kersting}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xBhvMu4J03} }
Structural causal models (SCMs) are a powerful tool for understanding the complex causal relationships that underlie many real-world systems. As these systems grow in size, so do the number of variables and the complexity of interactions between them, making them convoluted and difficult to analyze. This is particularly true in the context of machine learning and artificial intelligence, where an ever-increasing amount of data demands new methods to simplify and compress large-scale SCMs. While methods for marginalizing and abstracting SCMs already exist today, they may destroy the causality of the marginalized model. To alleviate this, we introduce the concept of consolidating causal mechanisms to transform large-scale SCMs while preserving consistent interventional behaviour. We show that consolidation is a powerful method for simplifying SCMs, discuss the reduction in computational complexity, and give a perspective on the generalization abilities of consolidated SCMs.
Do Not Marginalize Mechanisms, Rather Consolidate!
[ "Moritz Willig", "Matej Zečević", "Devendra Singh Dhami", "Kristian Kersting" ]
Conference
poster
2310.08377
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x9FOu3W6iy
@inproceedings{ zhao2023thrust, title={Thrust: Adaptively Propels Large Language Models with External Knowledge}, author={Xinran Zhao and Hongming Zhang and Xiaoman Pan and Wenlin Yao and Dong Yu and Jianshu Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x9FOu3W6iy} }
Although large-scale pre-trained language models (PTLMs) are shown to encode rich knowledge in their model parameters, the inherent knowledge in PTLMs can be opaque or static, making external knowledge necessary. However, the existing information retrieval techniques could be costly and may even introduce noisy and sometimes misleading knowledge. To address these challenges, we propose the instance-level adaptive propulsion of external knowledge (IAPEK), where we only conduct the retrieval when necessary. To achieve this goal, we propose to model whether a PTLM contains enough knowledge to solve an instance with a novel metric, Thrust, which leverages the representation distribution of a small amount of seen instances. Extensive experiments demonstrate that Thrust is a good measurement of models' instance-level knowledgeability. Moreover, we can achieve higher cost-efficiency with the Thrust score as the retrieval indicator than the naive usage of external knowledge on 88% of the evaluated tasks with 26% average performance improvement. Such findings shed light on the real-world practice of knowledge-enhanced LMs with a limited budget for knowledge seeking due to computation latency or costs.
Thrust: Adaptively Propels Large Language Models with External Knowledge
[ "Xinran Zhao", "Hongming Zhang", "Xiaoman Pan", "Wenlin Yao", "Dong Yu", "Jianshu Chen" ]
Conference
poster
2307.10442
[ "" ]
https://huggingface.co/papers/2307.10442
0
1
0
6
1
[]
[]
[]
null
https://openreview.net/forum?id=x816mCbWpR
@inproceedings{ lee2023recasting, title={Recasting Continual Learning as Sequence Modeling}, author={Soochan Lee and Jaehyeon Son and Gunhee Kim}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x816mCbWpR} }
In this work, we aim to establish a strong connection between two significant bodies of machine learning research: continual learning and sequence modeling. That is, we propose to formulate continual learning as a sequence modeling problem, allowing advanced sequence models to be utilized for continual learning. Under this formulation, the continual learning process becomes the forward pass of a sequence model. By adopting the meta-continual learning (MCL) framework, we can train the sequence model at the meta-level, on multiple continual learning episodes. As a specific example of our new formulation, we demonstrate the application of Transformers and their efficient variants as MCL methods. Our experiments on seven benchmarks, covering both classification and regression, show that sequence models can be an attractive solution for general MCL.
Recasting Continual Learning as Sequence Modeling
[ "Soochan Lee", "Jaehyeon Son", "Gunhee Kim" ]
Conference
poster
2310.11952
[ "https://github.com/soochan-lee/cl-as-seq" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x7q7w07r6Y
@inproceedings{ jin2023lending, title={Lending Interaction Wings to Recommender Systems with Conversational Agents}, author={Jiarui Jin and Xianyu Chen and Fanghua Ye and Mengyue Yang and Yue Feng and Weinan Zhang and Yong Yu and Jun Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x7q7w07r6Y} }
An intelligent conversational agent (a.k.a., chat-bot) could embrace conversational technologies to obtain user preferences online, to overcome inherent limitations of recommender systems trained over the offline historical user behaviors. In this paper, we propose CORE, a new offline-training and online-checking framework to plug a COnversational agent into REcommender systems. Unlike most prior conversational recommendation approaches that systemically combine conversational and recommender parts through a reinforcement learning framework, CORE bridges the conversational agent and recommender system through a unified uncertainty minimization framework, which can be easily applied to any existing recommendation approach. Concretely, CORE treats a recommender system as an offline estimator to produce an estimated relevance score for each item, while CORE regards a conversational agent as an online checker that checks these estimated scores in each online session. We define uncertainty as the sum of unchecked relevance scores. In this regard, the conversational agent acts to minimize uncertainty via querying either attributes or items. Towards uncertainty minimization, we derive the certainty gain of querying each attribute and item, and develop a novel online decision tree algorithm to decide what to query at each turn. Our theoretical analysis reveals the bound of the expected number of turns of CORE in a cold-start setting. Experimental results demonstrate that CORE can be seamlessly employed on a variety of recommendation approaches, and can consistently bring significant improvements in both hot-start and cold-start settings.
Lending Interaction Wings to Recommender Systems with Conversational Agents
[ "Jiarui Jin", "Xianyu Chen", "Fanghua Ye", "Mengyue Yang", "Yue Feng", "Weinan Zhang", "Yong Yu", "Jun Wang" ]
Conference
poster
2310.04230
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x6cOcxRnxG
@inproceedings{ boral2023neural, title={Neural Ideal Large Eddy Simulation: Modeling Turbulence with Neural Stochastic Differential Equations}, author={Anudhyan Boral and Zhong Yi Wan and Leonardo Zepeda-Nunez and James Lottes and Qing Wang and Yi-Fan Chen and John Roberts Anderson and Fei Sha}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x6cOcxRnxG} }
We introduce a data-driven learning framework that assimilates two powerful ideas: ideal large eddy simulation (LES) from turbulence closure modeling and neural stochastic differential equations (SDE) for stochastic modeling. The ideal LES models the LES flow by treating each full-order trajectory as a random realization of the underlying dynamics, as such, the effect of small-scales is marginalized to obtain the deterministic evolution of the LES state. However, ideal LES is analytically intractable. In our work, we use a latent neural SDE to model the evolution of the stochastic process and an encoder-decoder pair for transforming between the latent space and the desired ideal flow field. This stands in sharp contrast to other types of neural parameterization of closure models where each trajectory is treated as a deterministic realization of the dynamics. We show the effectiveness of our approach (niLES – neural ideal LES) on two challenging chaotic dynamical systems: Kolmogorov flow at a Reynolds number of 20,000 and flow past a cylinder at Reynolds number 500. Compared to competing methods, our method can handle non-uniform geometries using unstructured meshes seamlessly. In particular, niLES leads to trajectories with more accurate statistics and enhances stability, particularly for long-horizon rollouts. (Source codes and datasets will be made publicly available.)
Neural Ideal Large Eddy Simulation: Modeling Turbulence with Neural Stochastic Differential Equations
[ "Anudhyan Boral", "Zhong Yi Wan", "Leonardo Zepeda-Nunez", "James Lottes", "Qing Wang", "Yi-Fan Chen", "John Roberts Anderson", "Fei Sha" ]
Conference
poster
2306.01174
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x5fs7TXKDc
@inproceedings{ li2023fednar, title={Fed{NAR}: Federated Optimization with Normalized Annealing Regularization}, author={Junbo Li and Ang Li and Chong Tian and Qirong Ho and Eric Xing and Hongyi Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x5fs7TXKDc} }
Weight decay is a standard technique to improve generalization performance in modern deep neural network optimization, and is also widely adopted in federated learning (FL) to prevent overfitting in local clients. In this paper, we first explore the choices of weight decay and identify that the weight decay value appreciably influences the convergence of existing FL algorithms. While preventing overfitting is crucial, weight decay can introduce a different optimization goal towards the global objective, which is further amplified in FL due to multiple local updates and heterogeneous data distribution. To address this challenge, we develop Federated optimization with Normalized Annealing Regularization (FedNAR), a simple yet effective and versatile algorithmic plug-in that can be seamlessly integrated into any existing FL algorithms. Essentially, we regulate the magnitude of each update by performing co-clipping of the gradient and weight decay. We provide a comprehensive theoretical analysis of FedNAR's convergence rate and conduct extensive experiments on both vision and language datasets with different backbone federated optimization algorithms. Our experimental results consistently demonstrate that incorporating FedNAR into existing FL algorithms leads to accelerated convergence and heightened model accuracy. Moreover, FedNAR exhibits resilience in the face of various hyperparameter configurations. Specifically, FedNAR has the ability to self-adjust the weight decay when the initial specification is not optimal, while the accuracy of traditional FL algorithms would markedly decline. Our code is released at https://anonymous.4open.science/r/fednar-BE8F.
FedNAR: Federated Optimization with Normalized Annealing Regularization
[ "Junbo Li", "Ang Li", "Chong Tian", "Qirong Ho", "Eric Xing", "Hongyi Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x5ZruOa4ax
@inproceedings{ boussif2023improving, title={Improving *day-ahead* Solar Irradiance Time Series Forecasting by Leveraging Spatio-Temporal Context}, author={Oussama Boussif and Ghait Boukachab and Dan Assouline and Stefano Massaroli and Tianle Yuan and Loubna Benabbou and Yoshua Bengio}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x5ZruOa4ax} }
Solar power harbors immense potential in mitigating climate change by substantially reducing CO$_{2}$ emissions. Nonetheless, the inherent variability of solar irradiance poses a significant challenge for seamlessly integrating solar power into the electrical grid. While the majority of prior research has centered on employing purely time series-based methodologies for solar forecasting, only a limited number of studies have taken into account factors such as cloud cover or the surrounding physical context. In this paper, we put forth a deep learning architecture designed to harness spatio-temporal context using satellite data, to attain highly accurate day-ahead time-series forecasting for any given station, with a particular emphasis on forecasting Global Horizontal Irradiance (GHI). We also suggest a methodology to extract a distribution for each time step prediction, which can serve as a very valuable measure of uncertainty attached to the forecast. When evaluating models, we propose a testing scheme in which we separate particularly difficult examples from easy ones, in order to capture the model performances in crucial situations, which in the case of this study are the days suffering from varying cloudy conditions. Furthermore, we present a new multi-modal dataset gathering satellite imagery over a large zone and time series for solar irradiance and other related physical variables from multiple geographically diverse solar stations. Our approach exhibits robust performance in solar irradiance forecasting, including zero-shot generalization tests at unobserved solar stations, and holds great promise in promoting the effective integration of solar power into the grid.
Improving *day-ahead* Solar Irradiance Time Series Forecasting by Leveraging Spatio-Temporal Context
[ "Oussama Boussif", "Ghait Boukachab", "Dan Assouline", "Stefano Massaroli", "Tianle Yuan", "Loubna Benabbou", "Yoshua Bengio" ]
Conference
poster
[ "https://github.com/gitbooo/CrossViVit" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x5JCDCvR4b
@inproceedings{ lin2023stochastic, title={Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis}, author={Dachao Lin and Yuze Han and Haishan Ye and Zhihua Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x5JCDCvR4b} }
We study finite-sum distributed optimization problems involving a master node and $n-1$ local nodes under the popular $\delta$-similarity and $\mu$-strong convexity conditions. We propose two new algorithms, SVRS and AccSVRS, motivated by previous works. The non-accelerated SVRS method combines the techniques of gradient sliding and variance reduction and achieves a better communication complexity of $\tilde{\mathcal{O}}(n {+} \sqrt{n}\delta/\mu)$ compared to existing non-accelerated algorithms. Applying the framework proposed in Katyusha X, we also develop a directly accelerated version named AccSVRS with the $\tilde{\mathcal{O}}(n {+} n^{3/4}\sqrt{\delta/\mu})$ communication complexity. In contrast to existing results, our complexity bounds are entirely smoothness-free and exhibit superiority in ill-conditioned cases. Furthermore, we establish a nearly matched lower bound to verify the tightness of our AccSVRS method.
Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
[ "Dachao Lin", "Yuze Han", "Haishan Ye", "Zhihua Zhang" ]
Conference
poster
2304.07504
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x2xQEszznV
@inproceedings{ xu2023online, title={Online Constrained Meta-Learning: Provable Guarantees for Generalization}, author={Siyuan Xu and Minghui Zhu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x2xQEszznV} }
Meta-learning has attracted attention due to its strong ability to learn experiences from known tasks, which can speed up and enhance the learning process for new tasks. However, most existing meta-learning approaches can only learn from tasks without any constraints. This paper proposes an online constrained meta-learning framework, which continuously learns meta-knowledge from sequential learning tasks, and the learning tasks are subject to hard constraints. Beyond existing meta-learning analyses, we provide the upper bounds of optimality gaps and constraint violations produced by the proposed framework, which considers the dynamic regret of online learning, as well as the generalization ability of the task-specific models. Moreover, we provide a practical algorithm for the framework, and validate its superior effectiveness through experiments conducted on meta-imitation learning and few-shot image classification.
Online Constrained Meta-Learning: Provable Guarantees for Generalization
[ "Siyuan Xu", "Minghui Zhu" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=x2PH6q32LR
@inproceedings{ cini2023taming, title={Taming Local Effects in Graph-based Spatiotemporal Forecasting}, author={Andrea Cini and Ivan Marisca and Daniele Zambon and Cesare Alippi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=x2PH6q32LR} }
Spatiotemporal graph neural networks have shown to be effective in time series forecasting applications, achieving better performance than standard univariate predictors in several settings. These architectures take advantage of a graph structure and relational inductive biases to learn a single (global) inductive model to predict any number of the input time series, each associated with a graph node. Despite the gain achieved in computational and data efficiency w.r.t. fitting a set of local models, relying on a single global model can be a limitation whenever some of the time series are generated by a different spatiotemporal stochastic process. The main objective of this paper is to understand the interplay between globality and locality in graph-based spatiotemporal forecasting, while contextually proposing a methodological framework to rationalize the practice of including trainable node embeddings in such architectures. We ascribe to trainable node embeddings the role of amortizing the learning of specialized components. Moreover, embeddings allow for 1) effectively combining the advantages of shared message-passing layers with node-specific parameters and 2) efficiently transferring the learned model to new node sets. Supported by strong empirical evidence, we provide insights and guidelines for specializing graph-based models to the dynamics of each time series and show how this aspect plays a crucial role in obtaining accurate predictions.
Taming Local Effects in Graph-based Spatiotemporal Forecasting
[ "Andrea Cini", "Ivan Marisca", "Daniele Zambon", "Cesare Alippi" ]
Conference
poster
2302.04071
[ "https://github.com/graph-machine-learning-group/taming-local-effects-stgnns" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wzg0BsV8rQ
@inproceedings{ gu2023fast, title={{FAST}: a Fused and Accurate Shrinkage Tree for Heterogeneous Treatment Effects Estimation}, author={Jia Gu and Caizhi Tang and Han Yan and Qing Cui and Longfei Li and JUN ZHOU}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wzg0BsV8rQ} }
This paper proposes a novel strategy for estimating the heterogeneous treatment effect called the Fused and Accurate Shrinkage Tree ($\mathrm{FAST}$). Our approach utilizes both trial and observational data to improve the accuracy and robustness of the estimator. Inspired by the concept of shrinkage estimation in statistics, we develop an optimal weighting scheme and a corresponding estimator that balances the unbiased estimator based on the trial data with the potentially biased estimator based on the observational data. Specifically, combined with tree-based techniques, we introduce a new split criterion that utilizes both trial data and observational data to more accurately estimate the treatment effect. Furthermore, we confirm the consistency of our proposed tree-based estimator and demonstrate the effectiveness of our criterion in reducing prediction error through theoretical analysis. The advantageous finite sample performance of the $\mathrm{FAST}$ and its ensemble version over existing methods is demonstrated via simulations and real data analysis.
FAST: a Fused and Accurate Shrinkage Tree for Heterogeneous Treatment Effects Estimation
[ "Jia Gu", "Caizhi Tang", "Han Yan", "Qing Cui", "Longfei Li", "JUN ZHOU" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wzPcffMZ3b
@inproceedings{ kunstner2023searching, title={Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking}, author={Frederik Kunstner and Victor S. Portella and Mark Schmidt and Nick Harvey}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wzPcffMZ3b} }
The backtracking line-search is an effective technique to automatically tune the step-size in smooth optimization. It guarantees similar performance to using the theoretically optimal step-size. Many approaches have been developed to instead tune per-coordinate step-sizes, also known as diagonal preconditioners, but none of the existing methods are provably competitive with the optimal per-coordinate step-sizes. We propose multidimensional backtracking, an extension of the backtracking line-search to find good diagonal preconditioners for smooth convex problems. Our key insight is that the gradient with respect to the step-sizes, also known as hyper-gradients, yields separating hyperplanes that let us search for good preconditioners using cutting-plane methods. As black-box cutting-plane approaches like the ellipsoid method are computationally prohibitive, we develop an efficient algorithm tailored to our setting. Multidimensional backtracking is provably competitive with the best diagonal preconditioner and requires no manual tuning.
Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking
[ "Frederik Kunstner", "Victor S. Portella", "Mark Schmidt", "Nick Harvey" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wxkBdtDbmH
@inproceedings{ zhou2023natural, title={Natural Actor-Critic for Robust Reinforcement Learning with Function Approximation}, author={Ruida Zhou and Tao Liu and Min Cheng and Dileep Kalathil and Panganamala Kumar and Chao Tian}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wxkBdtDbmH} }
We study robust reinforcement learning (RL) with the goal of determining a well-performing policy that is robust against model mismatch between the training simulator and the testing environment. Previous policy-based robust RL algorithms mainly focus on the tabular setting under uncertainty sets that facilitate robust policy evaluation, but are no longer tractable when the number of states scales up. To this end, we propose two novel uncertainty set formulations, one based on double sampling and the other on an integral probability metric. Both make large-scale robust RL tractable even when one only has access to a simulator. We propose a robust natural actor-critic (RNAC) approach that incorporates the new uncertainty sets and employs function approximation. We provide finite-time convergence guarantees for the proposed RNAC algorithm to the optimal robust policy within the function approximation error. Finally, we demonstrate the robust performance of the policy learned by our proposed RNAC approach in multiple MuJoCo environments and a real-world TurtleBot navigation task.
Natural Actor-Critic for Robust Reinforcement Learning with Function Approximation
[ "Ruida Zhou", "Tao Liu", "Min Cheng", "Dileep Kalathil", "Panganamala Kumar", "Chao Tian" ]
Conference
poster
2307.08875
[ "https://github.com/tliu1997/rnac" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wwmKVO8bsR
@inproceedings{ fan2023federated, title={Federated Learning with Bilateral Curation for Partially Class-Disjoint Data}, author={Ziqing Fan and Ruipeng Zhang and Jiangchao Yao and Bo Han and Ya Zhang and Yanfeng Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wwmKVO8bsR} }
Partially class-disjoint data (PCDD), a common yet under-explored data formation where each client contributes a part of classes (instead of all classes) of samples, severely challenges the performance of federated algorithms. Without full classes, the local objective will contradict the global objective, yielding the angle collapse problem for locally missing classes and the space waste problem for locally existing classes. As far as we know, none of the existing methods can intrinsically mitigate PCDD challenges to achieve holistic improvement in the bilateral views (both global view and local view) of federated learning. To address this dilemma, we are inspired by the strong generalization of simplex Equiangular Tight Frame (ETF) on the imbalanced data, and propose a novel approach called FedGELA where the classifier is globally fixed as a simplex ETF while locally adapted to the personal distributions. Globally, FedGELA provides fair and equal discrimination for all classes and avoids inaccurate updates of the classifier, while locally it utilizes the space of locally missing classes for locally existing classes. We conduct extensive experiments on a range of datasets to demonstrate that our FedGELA achieves promising performance (averaged improvement of 3.9% to FedAvg and 1.5% to best baselines) and provide both local and global convergence guarantees.
Federated Learning with Bilateral Curation for Partially Class-Disjoint Data
[ "Ziqing Fan", "Ruipeng Zhang", "Jiangchao Yao", "Bo Han", "Ya Zhang", "Yanfeng Wang" ]
Conference
poster
2405.18972
[ "https://github.com/mediabrain-sjtu/fedgela" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wwkQUiaKbo
@inproceedings{ feng2023adapting, title={Adapting Fairness Interventions to Missing Values}, author={Raymond Feng and Flavio Calmon and Hao Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wwkQUiaKbo} }
Missing values in real-world data pose a significant and unique challenge to algorithmic fairness. Different demographic groups may be unequally affected by missing data, and the standard procedure for handling missing values where first data is imputed, then the imputed data is used for classification—a procedure referred to as "impute-then-classify"—can exacerbate discrimination. In this paper, we analyze how missing values affect algorithmic fairness. We first prove that training a classifier from imputed data can significantly worsen the achievable values of group fairness and average accuracy. This is because imputing data results in the loss of the missing pattern of the data, which often conveys information about the predictive label. We present scalable and adaptive algorithms for fair classification with missing values. These algorithms can be combined with any preexisting fairness-intervention algorithm to handle all possible missing patterns while preserving information encoded within the missing patterns. Numerical experiments with state-of-the-art fairness interventions demonstrate that our adaptive algorithms consistently achieve higher fairness and accuracy than impute-then-classify across different datasets.
Adapting Fairness Interventions to Missing Values
[ "Raymond Feng", "Flavio Calmon", "Hao Wang" ]
Conference
poster
2305.19429
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wv3bHyQbX7
@inproceedings{ chen2023subjectdriven, title={Subject-driven Text-to-Image Generation via Apprenticeship Learning}, author={Wenhu Chen and Hexiang Hu and YANDONG LI and Nataniel Ruiz and Xuhui Jia and Ming-Wei Chang and William W. Cohen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wv3bHyQbX7} }
Recent text-to-image generation models like DreamBooth have made remarkable progress in generating highly customized images of a target subject, by fine-tuning an "expert model" for a given subject from a few examples. However, this process is expensive, since a new expert model must be learned for each subject. In this paper, we present SuTI, a Subject-driven Text-to-Image generator that replaces subject-specific fine tuning with in-context learning. Given a few demonstrations of a new subject, SuTI can instantly generate novel renditions of the subject in different scenes, without any subject-specific optimization. SuTI is powered by apprenticeship learning, where a single apprentice model is learned from data generated by a massive number of subject-specific expert models. Specifically, we mine millions of image clusters from the Internet, each centered around a specific visual subject. We adopt these clusters to train a massive number of expert models, each specializing in a different subject. The apprentice model SuTI then learns to imitate the behavior of these fine-tuned experts. SuTI can generate high-quality and customized subject-specific images 20x faster than optimization-based SoTA methods. On the challenging DreamBench and DreamBench-v2, our human evaluation shows that SuTI significantly outperforms existing models like InstructPix2Pix, Textual Inversion, Imagic, Prompt2Prompt, Re-Imagen and DreamBooth.
Subject-driven Text-to-Image Generation via Apprenticeship Learning
[ "Wenhu Chen", "Hexiang Hu", "YANDONG LI", "Nataniel Ruiz", "Xuhui Jia", "Ming-Wei Chang", "William W. Cohen" ]
Conference
poster
2304.00186
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wqIm0Qsgy0
@inproceedings{ dmochowski2023granger, title={Granger Components Analysis: Unsupervised learning of latent temporal dependencies}, author={Jacek Dmochowski}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wqIm0Qsgy0} }
A new technique for unsupervised learning of time series data based on the notion of Granger causality is presented. The technique learns pairs of projections of a multivariate data set such that the resulting components -- "driving" and "driven" -- maximize the strength of the Granger causality between the latent time series (how strongly the past of the driving signal predicts the present of the driven signal). A coordinate descent algorithm that learns pairs of coefficient vectors in an alternating fashion is developed and shown to blindly identify the underlying sources (up to scale) on simulated vector autoregressive (VAR) data. The technique is tested on scalp electroencephalography (EEG) data from a motor imagery experiment where the resulting components lateralize with the side of the cued hand, and also on functional magnetic resonance imaging (fMRI) data, where the recovered components express previously reported resting-state networks.
Granger Components Analysis: Unsupervised learning of latent temporal dependencies
[ "Jacek Dmochowski" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wpfsnu5syT
@inproceedings{ xing2023continuar, title={Continu{AR}: Continuous Autoregression For Infinite-Fidelity Fusion}, author={WEI W. XING and Yuxin Wang and Zheng Xing}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wpfsnu5syT} }
Multi-fidelity fusion has become an important surrogate technique, which provides insights into expensive computer simulations and effectively improves decision-making, e.g., optimization, with less computational cost. Multi-fidelity fusion is much more computationally efficient compared to traditional single-fidelity surrogates. Despite the fast advancement of multi-fidelity fusion techniques, they lack a systematic framework to make use of the fidelity indicator, deal with high-dimensional and arbitrary data structure, and scale well to infinite-fidelity problems. In this work, we first generalize the popular autoregression (AR) to derive a novel linear fidelity differential equation (FiDE), paving the way to tractable infinite-fidelity fusion. We generalize FiDE to a high-dimensional system, which also provides a unifying framework to seamlessly bridge the gap between many multi- and single-fidelity GP-based models. We then propose ContinuAR, a rank-1 approximation solution to FiDEs, which is tractable to train, compatible with arbitrary multi-fidelity data structure, linearly scalable to the output dimension, and most importantly, delivers consistent SOTA performance with a significant margin over the baseline methods. Compared to the SOTA infinite-fidelity fusion, IFC, ContinuAR achieves up to 4x improvement in accuracy and 62,500x speedup in training time.
ContinuAR: Continuous Autoregression For Infinite-Fidelity Fusion
[ "WEI W. XING", "Yuxin Wang", "Zheng Xing" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=woptnU6fh1
@inproceedings{ annadani2023bayesdag, title={Bayes{DAG}: Gradient-Based Posterior Inference for Causal Discovery}, author={Yashas Annadani and Nick Pawlowski and Joel Jennings and Stefan Bauer and Cheng Zhang and Wenbo Gong}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=woptnU6fh1} }
Bayesian causal discovery aims to infer the posterior distribution over causal models from observed data, quantifying epistemic uncertainty and benefiting downstream tasks. However, computational challenges arise due to joint inference over the combinatorial space of Directed Acyclic Graphs (DAGs) and nonlinear functions. Despite recent progress towards efficient posterior inference over DAGs, existing methods are either limited to variational inference on node permutation matrices for linear causal models, leading to compromised inference accuracy, or continuous relaxation of adjacency matrices constrained by a DAG regularizer, which cannot ensure resulting graphs are DAGs. In this work, we introduce a scalable Bayesian causal discovery framework based on a combination of stochastic gradient Markov Chain Monte Carlo (SG-MCMC) and Variational Inference (VI) that overcomes these limitations. Our approach directly samples DAGs from the posterior without requiring any DAG regularization, simultaneously draws function parameter samples and is applicable to both linear and nonlinear causal models. To enable our approach, we derive a novel equivalence to the permutation-based DAG learning, which opens up possibilities of using any relaxed gradient estimator defined over permutations. To our knowledge, this is the first framework applying gradient-based MCMC sampling for causal discovery. Empirical evaluation on synthetic and real-world datasets demonstrates our approach's effectiveness compared to state-of-the-art baselines.
BayesDAG: Gradient-Based Posterior Inference for Causal Discovery
[ "Yashas Annadani", "Nick Pawlowski", "Joel Jennings", "Stefan Bauer", "Cheng Zhang", "Wenbo Gong" ]
Conference
poster
2307.13917
[ "https://github.com/microsoft/Project-BayesDAG" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wm5Ane9VRO
@inproceedings{ li2023maximization, title={Maximization of Average Precision for Deep Learning with Adversarial Ranking Robustness}, author={Gang Li and Wei Tong and Tianbao Yang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wm5Ane9VRO} }
This paper seeks to address a gap in optimizing Average Precision (AP) while ensuring adversarial robustness, an area that has not been extensively explored to the best of our knowledge. AP maximization for deep learning has widespread applications, particularly when there is a significant imbalance between positive and negative examples. Although numerous studies have been conducted on adversarial training, they primarily focus on robustness concerning accuracy, ensuring that the average accuracy on adversarially perturbed examples is well maintained. However, this type of adversarial robustness is insufficient for many applications, as minor perturbations on a single example can significantly impact AP while not greatly influencing the accuracy of the prediction system. To tackle this issue, we introduce a novel formulation that combines an AP surrogate loss with a regularization term representing adversarial ranking robustness, which maintains the consistency between ranking of clean data and that of perturbed data. We then devise an efficient stochastic optimization algorithm to optimize the resulting objective. Our empirical studies, which compare our method to current leading adversarial training baselines and other robust AP maximization strategies, demonstrate the effectiveness of the proposed approach. Notably, our methods outperform a state-of-the-art method (TRADES) by more than 4% in terms of robust AP against PGD attacks while achieving 7% higher AP on clean data simultaneously on CIFAR10 and CIFAR100. The code is available at: https://github.com/GangLii/Adversarial-AP
Maximization of Average Precision for Deep Learning with Adversarial Ranking Robustness
[ "Gang Li", "Wei Tong", "Tianbao Yang" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wkIBfnGPTA
@inproceedings{ chou2023villandiffusion, title={VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models}, author={Sheng-Yen Chou and Pin-Yu Chen and Tsung-Yi Ho}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wkIBfnGPTA} }
Diffusion Models (DMs) are state-of-the-art generative models that learn a reversible corruption process from iterative noise addition and denoising. They are the backbone of many generative AI applications, such as text-to-image conditional generation. However, recent studies have shown that basic unconditional DMs (e.g., DDPM and DDIM) are vulnerable to backdoor injection, a type of output manipulation attack triggered by a maliciously embedded pattern at model input. This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs. Our framework covers mainstream unconditional and conditional DMs (denoising-based and score-based) and various training-free samplers for holistic evaluations. Experiments show that our unified framework facilitates the backdoor analysis of different DM configurations and provides new insights into caption-based backdoor attacks on DMs.
VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models
[ "Sheng-Yen Chou", "Pin-Yu Chen", "Tsung-Yi Ho" ]
Conference
poster
2306.06874
[ "https://github.com/ibm/villandiffusion" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wiv21EJ0Vd
@inproceedings{ li2023zeroshot, title={Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models}, author={Lin Li and Jun Xiao and Guikun Chen and Jian Shao and Yueting Zhuang and Long Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wiv21EJ0Vd} }
Pretrained vision-language models, such as CLIP, have demonstrated strong generalization capabilities, making them promising tools in the realm of zero-shot visual recognition. Visual relation detection (VRD) is a typical task that identifies relationship (or interaction) types between object pairs within an image. However, naively utilizing CLIP with prevalent class-based prompts for zero-shot VRD has several weaknesses, e.g., it struggles to distinguish between different fine-grained relation types and it neglects essential spatial information of two objects. To this end, we propose a novel method for zero-shot VRD: RECODE, which solves RElation detection via COmposite DEscription prompts. Specifically, RECODE first decomposes each predicate category into subject, object, and spatial components. Then, it leverages large language models (LLMs) to generate description-based prompts (or visual cues) for each component. Different visual cues enhance the discriminability of similar relation categories from different perspectives, which significantly boosts performance in VRD. To dynamically fuse different cues, we further introduce a chain-of-thought method that prompts LLMs to generate reasonable weights for different visual cues. Extensive experiments on four VRD benchmarks have demonstrated the effectiveness and interpretability of RECODE.
Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models
[ "Lin Li", "Jun Xiao", "Guikun Chen", "Jian Shao", "Yueting Zhuang", "Long Chen" ]
Conference
poster
2305.12476
[ "https://github.com/hkust-longgroup/recode" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wiidCRA3at
@inproceedings{ wang2023stein, title={Stein $\Pi$-Importance Sampling}, author={Congye Wang and Wilson Ye Chen and Heishiro Kanagawa and Chris J. Oates}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wiidCRA3at} }
Stein discrepancies have emerged as a powerful tool for retrospective improvement of Markov chain Monte Carlo output. However, the question of how to design Markov chains that are well-suited to such post-processing has yet to be addressed. This paper studies Stein importance sampling, in which weights are assigned to the states visited by a $\Pi$-invariant Markov chain to obtain a consistent approximation of $P$, the intended target. Surprisingly, the optimal choice of $\Pi$ is not identical to the target $P$; we therefore propose an explicit construction for $\Pi$ based on a novel variational argument. Explicit conditions for convergence of Stein $\Pi$-Importance Sampling are established. For $\approx 70$% of tasks in the PosteriorDB benchmark, a significant improvement over the analogous post-processing of $P$-invariant Markov chains is reported.
Stein Π-Importance Sampling
[ "Congye Wang", "Wilson Ye Chen", "Heishiro Kanagawa", "Chris J. Oates" ]
Conference
spotlight
[ "https://github.com/congyewang/stein-pi-importance-sampling" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wg3d2FKAm8
@inproceedings{ nietert2023outlierrobust, title={Outlier-Robust Wasserstein {DRO}}, author={Sloan Nietert and Ziv Goldfeld and Soroosh Shafiee}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wg3d2FKAm8} }
Distributionally robust optimization (DRO) is an effective approach for data-driven decision-making in the presence of uncertainty. Geometric uncertainty due to sampling or localized perturbations of data points is captured by Wasserstein DRO (WDRO), which seeks to learn a model that performs uniformly well over a Wasserstein ball centered around the observed data distribution. However, WDRO fails to account for non-geometric perturbations such as adversarial outliers, which can greatly distort the Wasserstein distance measurement and impede the learned model. We address this gap by proposing a novel outlier-robust WDRO framework for decision-making under both geometric (Wasserstein) perturbations and non-geometric (total variation (TV)) contamination that allows an $\varepsilon$-fraction of data to be arbitrarily corrupted. We design an uncertainty set using a certain robust Wasserstein ball that accounts for both perturbation types and derive minimax optimal excess risk bounds for this procedure that explicitly capture the Wasserstein and TV risks. We prove a strong duality result that enables tractable convex reformulations and efficient computation of our outlier-robust WDRO problem. When the loss function depends only on low-dimensional features of the data, we eliminate certain dimension dependencies from the risk bounds that are unavoidable in the general setting. Finally, we present experiments validating our theory on standard regression and classification tasks.
Outlier-Robust Wasserstein DRO
[ "Sloan Nietert", "Ziv Goldfeld", "Soroosh Shafiee" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wcdF6jR0Sp
@inproceedings{ pitis2023consistent, title={Consistent Aggregation of Objectives with Diverse Time Preferences Requires Non-Markovian Rewards}, author={Silviu Pitis}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wcdF6jR0Sp} }
As the capabilities of artificial agents improve, they are being increasingly deployed to service multiple diverse objectives and stakeholders. However, the composition of these objectives is often performed ad hoc, with no clear justification. This paper takes a normative approach to multi-objective agency: from a set of intuitively appealing axioms, it is shown that Markovian aggregation of Markovian reward functions is not possible when the time preference (discount factor) for each objective may vary. It follows that optimal multi-objective agents must admit rewards that are non-Markovian with respect to the individual objectives. To this end, a practical non-Markovian aggregation scheme is proposed, which overcomes the impossibility with only one additional parameter for each objective. This work offers new insights into sequential, multi-objective agency and intertemporal choice, and has practical implications for the design of AI systems deployed to serve multiple generations of principals with varying time preference.
Consistent Aggregation of Objectives with Diverse Time Preferences Requires Non-Markovian Rewards
[ "Silviu Pitis" ]
Conference
poster
2310.00435
[ "" ]
https://huggingface.co/papers/2310.00435
0
0
0
1
1
[]
[]
[]
null
https://openreview.net/forum?id=wbg4JEM5Jp
@inproceedings{ jin2023improved, title={Improved Best-of-Both-Worlds Guarantees for Multi-Armed Bandits: {FTRL} with General Regularizers and Multiple Optimal Arms}, author={Tiancheng Jin and Junyan Liu and Haipeng Luo}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wbg4JEM5Jp} }
We study the problem of designing adaptive multi-armed bandit algorithms that perform optimally in both the stochastic setting and the adversarial setting simultaneously (often known as a best-of-both-world guarantee). A line of recent works shows that when configured and analyzed properly, the Follow-the-Regularized-Leader (FTRL) algorithm, originally designed for the adversarial setting, can in fact optimally adapt to the stochastic setting as well. Such results, however, critically rely on an assumption that there exists one unique optimal arm. Recently, Ito [2021] took the first step to remove such an undesirable uniqueness assumption for one particular FTRL algorithm with the 1/2-Tsallis entropy regularizer. In this work, we significantly improve and generalize this result, showing that uniqueness is unnecessary for FTRL with a broad family of regularizers and a new learning rate schedule. For some regularizers, our regret bounds also improve upon prior results even when uniqueness holds. We further provide an application of our results to the decoupled exploration and exploitation problem, demonstrating that our techniques are broadly applicable.
Improved Best-of-Both-Worlds Guarantees for Multi-Armed Bandits: FTRL with General Regularizers and Multiple Optimal Arms
[ "Tiancheng Jin", "Junyan Liu", "Haipeng Luo" ]
Conference
poster
2302.13534
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wbbTqsiKzl
@inproceedings{ cui2023highdimensional, title={High-dimensional Asymptotics of Denoising Autoencoders}, author={Hugo Cui and Lenka Zdeborova}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wbbTqsiKzl} }
We address the problem of denoising data from a Gaussian mixture using a two-layer non-linear autoencoder with tied weights and a skip connection. We consider the high-dimensional limit where the number of training samples and the input dimension jointly tend to infinity while the number of hidden units remains bounded. We provide closed-form expressions for the denoising mean-squared test error. Building on this result, we quantitatively characterize the advantage of the considered architecture over the autoencoder without the skip connection that relates closely to principal component analysis. We further show that our results capture accurately the learning curves on a range of real datasets.
High-dimensional Asymptotics of Denoising Autoencoders
[ "Hugo Cui", "Lenka Zdeborova" ]
Conference
spotlight
2305.11041
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=waXoG35kbb
@inproceedings{ pabbaraju2023provable, title={Provable benefits of score matching}, author={Chirag Pabbaraju and Dhruv Rohatgi and Anish Sevekari and Holden Lee and Ankur Moitra and Andrej Risteski}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=waXoG35kbb} }
Score matching is an alternative to maximum likelihood (ML) for estimating a probability distribution parametrized up to a constant of proportionality. By fitting the "score" of the distribution, it sidesteps the need to compute this constant of proportionality (which is often intractable). While score matching and variants thereof are popular in practice, precise theoretical understanding of the benefits and tradeoffs with maximum likelihood---both computational and statistical---is not well developed. In this work, we give the first example of a natural exponential family of distributions such that the score matching loss is computationally efficient to optimize, and has a comparable statistical efficiency to ML, while the ML loss is intractable to optimize using a gradient-based method. The family consists of exponentials of polynomials of fixed degree, and our result can be viewed as a continuous analogue of recent developments in the discrete setting. Precisely, we show: (1) Designing a zeroth-order or first-order oracle for optimizing the maximum likelihood loss is NP-hard. (2) Maximum likelihood has a statistical efficiency polynomial in the ambient dimension and the radius of the parameters of the family. (3) Minimizing the score matching loss is both computationally and statistically efficient, with complexity polynomial in the ambient dimension.
Provable benefits of score matching
[ "Chirag Pabbaraju", "Dhruv Rohatgi", "Anish Sevekari", "Holden Lee", "Ankur Moitra", "Andrej Risteski" ]
Conference
spotlight
2306.01993
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=waDF0oACu2
@inproceedings{ cheng2023collaboratively, title={Collaboratively Learning Linear Models with Structured Missing Data}, author={Chen Cheng and Gary Cheng and John Duchi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=waDF0oACu2} }
We study the problem of collaboratively learning least squares estimates for $m$ agents. Each agent observes a different subset of the features---e.g., containing data collected from sensors of varying resolution. Our goal is to determine how to coordinate the agents in order to produce the best estimator for each agent. We propose a distributed, semi-supervised algorithm Collab, consisting of three steps: local training, aggregation, and distribution. Our procedure does not require communicating the labeled data, making it communication efficient and useful in settings where the labeled data is inaccessible. Despite this handicap, our procedure is nearly asymptotically, local-minimax optimal---even among estimators allowed to communicate the labeled data such as imputation methods. We test our method on US Census data. We also discuss generalizations of our method to non-Gaussian feature settings, non-linear settings, and Federated Learning.
Collaboratively Learning Linear Models with Structured Missing Data
[ "Chen Cheng", "Gary Cheng", "John Duchi" ]
Conference
poster
2307.11947
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wYkfog48Bq
@inproceedings{ song2023optimal, title={Optimal Block-wise Asymmetric Graph Construction for Graph-based Semi-supervised Learning}, author={Zixing Song and Yifei Zhang and Irwin King}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wYkfog48Bq} }
Graph-based semi-supervised learning (GSSL) serves as a powerful tool to model the underlying manifold structures of samples in high-dimensional spaces. It involves two phases: constructing an affinity graph from available data and inferring labels for unlabeled nodes on this graph. While numerous algorithms have been developed for label inference, the crucial graph construction phase has received comparatively less attention, despite its significant influence on the subsequent phase. In this paper, we present an optimal asymmetric graph structure for the label inference phase with theoretical motivations. Unlike existing graph construction methods, we differentiate the distinct roles that labeled nodes and unlabeled nodes could play. Accordingly, we design an efficient block-wise graph learning algorithm with a global convergence guarantee. Other benefits induced by our method, such as enhanced robustness to noisy node features, are explored as well. Finally, we perform extensive experiments on synthetic and real-world datasets to demonstrate its superiority to the state-of-the-art graph construction methods in GSSL.
Optimal Block-wise Asymmetric Graph Construction for Graph-based Semi-supervised Learning
[ "Zixing Song", "Yifei Zhang", "Irwin King" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wYKU1C77sa
@inproceedings{ vuong2023languagedriven, title={Language-driven Scene Synthesis using Multi-conditional Diffusion Model}, author={An Dinh Vuong and Minh Nhat VU and Toan Tien Nguyen and Baoru Huang and Dzung Nguyen and Thieu Vo and Anh Nguyen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wYKU1C77sa} }
Scene synthesis is a challenging problem with several industrial applications. Recently, substantial efforts have been directed toward synthesizing scenes using human motions, room layouts, or spatial graphs as the input. However, few studies have addressed this problem from multiple modalities, especially combining text prompts. In this paper, we propose language-driven scene synthesis, a new task that integrates text prompts, human motion, and existing objects for scene synthesis. Unlike other single-condition synthesis tasks, our problem involves multiple conditions and requires a strategy for processing and encoding them into a unified space. To address the challenge, we present a multi-conditional diffusion model, which differs from the implicit unification approach of other diffusion literature by explicitly predicting the guiding points for the original data distribution. We demonstrate that our approach is theoretically supported. Extensive experimental results illustrate that our method outperforms state-of-the-art benchmarks and enables natural scene editing applications. The source code and dataset can be accessed at https://lang-scene-synth.github.io/.
Language-driven Scene Synthesis using Multi-conditional Diffusion Model
[ "An Dinh Vuong", "Minh Nhat VU", "Toan Tien Nguyen", "Baoru Huang", "Dzung Nguyen", "Thieu Vo", "Anh Nguyen" ]
Conference
poster
2310.15948
[ "https://github.com/andvg3/LSDM" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wX8GuzDSJR
@inproceedings{ fu2023what, title={What can a Single Attention Layer Learn? A Study Through the Random Features Lens}, author={Hengyu Fu and Tianyu Guo and Yu Bai and Song Mei}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wX8GuzDSJR} }
Attention layers---which map a sequence of inputs to a sequence of outputs---are core building blocks of the Transformer architecture which has achieved significant breakthroughs in modern artificial intelligence. This paper presents a rigorous theoretical study on the learning and generalization of a single multi-head attention layer, with a sequence of key vectors and a separate query vector as input. We consider the random feature setting where the attention layer has a large number of heads, with randomly sampled frozen query and key matrices, and trainable value matrices. We show that such a random-feature attention layer can express a broad class of target functions that are permutation invariant to the key vectors. We further provide quantitative excess risk bounds for learning these target functions from finite samples, using random feature attention with finitely many heads. Our results feature several implications unique to the attention structure compared with existing random features theory for neural networks, such as (1) Advantages in the sample complexity over standard two-layer random-feature networks; (2) Concrete and natural classes of functions that can be learned efficiently by a random-feature attention layer; and (3) The effect of the sampling distribution of the query-key weight matrix (the product of the query and key matrix), where Gaussian random weights with a non-zero mean result in better sample complexities over the zero-mean counterpart for learning certain natural target functions. Experiments on simulated data corroborate our theoretical findings and further illustrate the interplay between the sample size and the complexity of the target function.
What can a Single Attention Layer Learn? A Study Through the Random Features Lens
[ "Hengyu Fu", "Tianyu Guo", "Yu Bai", "Song Mei" ]
Conference
poster
2307.11353
[ "" ]
https://huggingface.co/papers/2307.11353
0
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=wUNPmdE273
@inproceedings{ chaudhuri2023transitivity, title={Transitivity Recovering Decompositions: Interpretable and Robust Fine-Grained Relationships}, author={Abhra Chaudhuri and Massimiliano Mancini and Zeynep Akata and Anjan Dutta}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wUNPmdE273} }
Recent advances in fine-grained representation learning leverage local-to-global (emergent) relationships for achieving state-of-the-art results. The relational representations relied upon by such methods, however, are abstract. We aim to deconstruct this abstraction by expressing them as interpretable graphs over image views. We begin by theoretically showing that abstract relational representations are nothing but a way of recovering transitive relationships among local views. Based on this, we design Transitivity Recovering Decompositions (TRD), a graph-space search algorithm that identifies interpretable equivalents of abstract emergent relationships at both instance and class levels, and with no post-hoc computations. We additionally show that TRD is provably robust to noisy views, with empirical evidence also supporting this finding. The latter allows TRD to perform at par or even better than the state-of-the-art, while being fully interpretable. Implementation is available at https://github.com/abhrac/trd.
Transitivity Recovering Decompositions: Interpretable and Robust Fine-Grained Relationships
[ "Abhra Chaudhuri", "Massimiliano Mancini", "Zeynep Akata", "Anjan Dutta" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wRhLd65bDt
@inproceedings{ yang2023improving, title={Improving Diffusion-Based Image Synthesis with Context Prediction}, author={Ling Yang and Jingwei Liu and Shenda Hong and Zhilong Zhang and Zhilin Huang and Zheming Cai and Wentao Zhang and Bin CUI}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wRhLd65bDt} }
Diffusion models are a new class of generative models that have dramatically advanced image generation with unprecedented quality and diversity. Existing diffusion models mainly try to reconstruct the input image from a corrupted one with a pixel-wise or feature-wise constraint along spatial axes. However, such point-based reconstruction may fail to make each predicted pixel/feature fully preserve its neighborhood context, impairing diffusion-based image synthesis. As a powerful source of automatic supervisory signals, context has been well studied for learning representations. Inspired by this, we for the first time propose ConPreDiff to improve diffusion-based image synthesis with context prediction. We explicitly reinforce each point to predict its neighborhood context (i.e., multi-stride pixels/features) with a context decoder at the end of the diffusion denoising blocks during training, and remove the decoder for inference. In this way, each point can better reconstruct itself by preserving its semantic connections with its neighborhood context. This new paradigm of ConPreDiff can generalize to arbitrary discrete and continuous diffusion backbones without introducing extra parameters in the sampling procedure. Extensive experiments are conducted on unconditional image generation, text-to-image generation, and image inpainting tasks. Our ConPreDiff consistently outperforms previous methods and achieves new SOTA text-to-image generation results on MS-COCO, with a zero-shot FID score of 6.21.
Improving Diffusion-Based Image Synthesis with Context Prediction
[ "Ling Yang", "Jingwei Liu", "Shenda Hong", "Zhilong Zhang", "Zhilin Huang", "Zheming Cai", "Wentao Zhang", "Bin CUI" ]
Conference
poster
2401.02015
[ "" ]
https://huggingface.co/papers/2401.02015
1
6
1
8
1
[]
[]
[]
null
https://openreview.net/forum?id=wRJqZRxDEX
@inproceedings{ doshi2023critical, title={Critical Initialization of Wide and Deep Neural Networks using Partial Jacobians: General Theory and Applications}, author={Darshil Doshi and Tianyu He and Andrey Gromov}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wRJqZRxDEX} }
Deep neural networks are notorious for defying theoretical treatment. However, when the number of parameters in each layer tends to infinity, the network function is a Gaussian process (GP) and a quantitatively predictive description is possible. The Gaussian approximation allows one to formulate criteria for selecting hyperparameters, such as variances of weights and biases, as well as the learning rate. These criteria rely on the notion of criticality defined for deep neural networks. In this work we describe a new practical way to diagnose criticality. We introduce *partial Jacobians* of a network, defined as derivatives of preactivations in layer $l$ with respect to preactivations in layer $l_0\leq l$. We derive recurrence relations for the norms of partial Jacobians and utilize these relations to analyze the criticality of deep fully connected neural networks with LayerNorm and/or residual connections. We derive and implement a simple and cheap numerical test that allows one to select the optimal initialization for a broad class of deep neural networks containing fully connected, convolutional, and normalization layers. Using these tools we show quantitatively that proper stacking of LayerNorm (applied to preactivations) and residual connections leads to an architecture that is critical for any initialization. Finally, we apply our methods to analyze ResNet and MLP-Mixer architectures, demonstrating the everywhere-critical regime.
Critical Initialization of Wide and Deep Neural Networks using Partial Jacobians: General Theory and Applications
[ "Darshil Doshi", "Tianyu He", "Andrey Gromov" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wPqEvmwFEh
@inproceedings{ ceron2023small, title={Small batch deep reinforcement learning}, author={Johan Samir Obando Ceron and Marc G Bellemare and Pablo Samuel Castro}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wPqEvmwFEh} }
In value-based deep reinforcement learning with replay memories, the batch size parameter specifies how many transitions to sample for each gradient update. Although critical to the learning process, this value is typically not adjusted when proposing new algorithms. In this work we present a broad empirical study that suggests reducing the batch size can result in a number of significant performance gains; this is surprising, as the general tendency when training neural networks is towards larger batch sizes for improved performance. We complement our experimental findings with a set of empirical analyses towards better understanding this phenomenon.
Small batch deep reinforcement learning
[ "Johan Samir Obando Ceron", "Marc G Bellemare", "Pablo Samuel Castro" ]
Conference
poster
2310.03882
[ "" ]
https://huggingface.co/papers/2310.03882
1
0
0
3
1
[]
[]
[]
null
https://openreview.net/forum?id=wNxyDofh74
@inproceedings{ dennis2023progressive, title={Progressive Ensemble Distillation: Building Ensembles for Efficient Inference}, author={Don Dennis and Abhishek Shetty and Anish Sevekari and Kazuhito Koishida and Virginia Smith}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wNxyDofh74} }
Knowledge distillation is commonly used to compress an ensemble of models into a single model. In this work we study the problem of progressive ensemble distillation: given a large, pretrained teacher model, we seek to decompose it into an ensemble of smaller, low-inference-cost student models. The resulting ensemble allows for flexibly tuning accuracy vs. inference cost, which can be useful for a multitude of applications in efficient inference. Our method, B-DISTIL, uses a boosting procedure that allows function-composition-based aggregation rules to construct expressive ensembles with performance similar to the teacher's while using much smaller student models. We demonstrate the effectiveness of B-DISTIL by decomposing pretrained models across a variety of image, speech, and sensor datasets. Our method comes with strong theoretical guarantees in terms of convergence as well as generalization.
Progressive Ensemble Distillation: Building Ensembles for Efficient Inference
[ "Don Dennis", "Abhishek Shetty", "Anish Sevekari", "Kazuhito Koishida", "Virginia Smith" ]
Conference
poster
2302.10093
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wNpsGwixjG
@inproceedings{ kim2023leveraging, title={Leveraging Early-Stage Robustness in Diffusion Models for Efficient and High-Quality Image Synthesis}, author={Yulhwa Kim and Dongwon Jo and Hyesung Jeon and Taesu Kim and Daehyun Ahn and Hyungjun Kim and jae-joon kim}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wNpsGwixjG} }
While diffusion models have demonstrated exceptional image generation capabilities, the iterative noise estimation process required for these models is compute-intensive and their practical implementation is limited by slow sampling speeds. In this paper, we propose a novel approach to speed up the noise estimation network by leveraging the robustness of early-stage diffusion models. Our findings indicate that inaccurate computation during the early-stage of the reverse diffusion process has minimal impact on the quality of generated images, as this stage primarily outlines the image while later stages handle the finer details that require more sensitive information. To improve computational efficiency, we combine our findings with post-training quantization (PTQ) to introduce a method that utilizes low-bit activation for the early reverse diffusion process while maintaining high-bit activation for the later stages. Experimental results show that the proposed method can accelerate the early-stage computation without sacrificing the quality of the generated images.
Leveraging Early-Stage Robustness in Diffusion Models for Efficient and High-Quality Image Synthesis
[ "Yulhwa Kim", "Dongwon Jo", "Hyesung Jeon", "Taesu Kim", "Daehyun Ahn", "Hyungjun Kim", "jae-joon kim" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wMNpMe0vp3
@inproceedings{ tu2023a, title={A Closer Look at the Robustness of Contrastive Language-Image Pre-Training ({CLIP})}, author={Weijie Tu and Weijian Deng and Tom Gedeon}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wMNpMe0vp3} }
Contrastive Language-Image Pre-training (CLIP) models have demonstrated remarkable generalization capabilities across multiple challenging distribution shifts. However, there is still much to be explored in terms of their robustness to variations of specific visual factors. In real-world applications, reliable and safe systems must consider other safety measures beyond classification accuracy, such as predictive uncertainty. Yet, the effectiveness of CLIP models on such safety-related objectives is less explored. Driven by the above, this work comprehensively investigates the safety measures of CLIP models, specifically focusing on three key properties: resilience to visual factor variations, calibrated uncertainty estimations, and the ability to detect anomalous inputs. To this end, we study $83$ CLIP models and $127$ ImageNet classifiers. They are diverse in architecture, (pre)training distribution, and training strategies. We consider $10$ visual factors (\emph{e.g.}, shape and pattern), $5$ types of out-of-distribution data, and $8$ natural and challenging test conditions with different shift types, such as texture, style, and perturbation shifts. Our study has unveiled several previously unknown insights into CLIP models. For instance, they are not consistently more calibrated than other ImageNet models, which contradicts existing findings. Additionally, our analysis underscores the significance of training source design by showcasing its profound influence on the three key properties. We believe our comprehensive study can shed light on and help guide the development of more robust and reliable CLIP models.
A Closer Look at the Robustness of Contrastive Language-Image Pre-Training (CLIP)
[ "Weijie Tu", "Weijian Deng", "Tom Gedeon" ]
Conference
poster
2402.07410
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wLiMhVJ7fx
@inproceedings{ falkiewicz2023calibrating, title={Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability}, author={Maciej Falkiewicz and Naoya Takeishi and Imahn Shekhzadeh and Antoine Wehenkel and Arnaud Delaunoy and Gilles Louppe and Alexandros Kalousis}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=wLiMhVJ7fx} }
Bayesian inference allows expressing the uncertainty of posterior belief under a probabilistic model given prior information and the likelihood of the evidence. Predominantly, the likelihood function is only implicitly established by a simulator, posing the need for simulation-based inference (SBI). However, the existing algorithms can yield overconfident posteriors (Hermans *et al.*, 2022), defeating the whole purpose of credibility if the uncertainty quantification is inaccurate. We propose to include a calibration term directly in the training objective of the neural model in selected amortized SBI techniques. By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation. The proposed method is not tied to any particular neural model and brings only moderate computational overhead compared to the benefits it provides. It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference. We empirically show on six benchmark problems that the proposed method achieves competitive or better results in terms of coverage and expected posterior density than the previously existing approaches.
Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability
[ "Maciej Falkiewicz", "Naoya Takeishi", "Imahn Shekhzadeh", "Antoine Wehenkel", "Arnaud Delaunoy", "Gilles Louppe", "Alexandros Kalousis" ]
Conference
poster
2310.13402
[ "https://github.com/dmml-geneva/calibrated-posterior" ]
-1
-1
-1
-1
0
[]
[]
[]