|
# Test-Time Recalibration Of Conformal Predictors Under Distribution Shift Based On Unlabeled Examples |
|
|
|
Anonymous authors Paper under double-blind review |
|
|
|
## Abstract |
|
|
|
Modern image classifiers are very accurate, but the predictions come without uncertainty estimates. Conformal predictors provide uncertainty estimates by computing a set of classes containing the correct class with a user-specified probability based on the classifier's probability estimates. To provide such sets, conformal predictors often estimate a cutoff threshold for the probability estimates based on a calibration set. Conformal predictors guarantee reliability only when the calibration set is from the same distribution as the test set. |
|
|
|
Therefore, conformal predictors need to be recalibrated for new distributions. However, in practice, labeled data from new distributions is rarely available, making calibration infeasible. |
|
|
|
In this work, we consider the problem of predicting the cutoff threshold for a new distribution based on unlabeled examples. While it is impossible in general to guarantee reliability when calibrating based on unlabeled examples, we propose a method that provides excellent uncertainty estimates under natural distribution shifts, and provably works for a specific model of a distribution shift. |
|
|
|
## 1 Introduction |
|
|
|
Consider a (black-box) image classifier, typically a deep neural network with a softmax layer at the end, that is trained to output probability estimates for L classes given an input feature vector x ∈ R^d. Conformal predictors are wrapped around such a classifier and generate a set of classes that contains the correct label with a user-specified probability based on the classifier's probability estimates.
|
|
|
Let x ∈ R^d be a feature vector with associated label y ∈ {1, . . . , L}. We say that a set-valued function C generates valid prediction sets for the distribution P if

$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y\in\mathcal{C}(\mathbf{x})\right]\geq1-\alpha,\tag{1}$$

where 1 − α is the desired coverage level. Conformal predictors generate valid sets C for the distribution P by utilizing a calibration set consisting of labeled examples {(x1, y1), . . . , (xn, yn)}. An important caveat of conformal predictors is that the examples from the calibration set must be drawn from the same distribution as the test dataset.
|
|
|
This assumption is difficult to satisfy in applications and potentially limits the applicability of conformal prediction methods. In practice, one usually expects a distribution shift between the calibration set and the examples at inference (or the test set), in which case the coverage guarantees provided by conformal prediction methods are void. For example, the ImageNetV2 test set was created in the same way as the original ImageNet test set, yet Recht et al. (2019) found a notable drop in classification accuracy for all classifiers considered.
|
|
|
Ideally, a conformal predictor is recalibrated on a distribution before testing, otherwise the coverage guarantees are not valid (Cauchois et al., 2020). However, in real-world applications, where distribution shifts are ubiquitous, labeled data from new distributions is scarce or non-existent. |
|
|
|
We therefore consider the problem of recalibrating a conformal predictor only based on unlabeled data from the new domain. This is an ill-posed problem: it is in general impossible to calibrate a conformal predictor based on unlabeled data. Yet, we propose a simple calibration method that gives excellent performance for a variety of natural distribution shifts. |
|
|
|
Organization and contributions. We start with concrete examples of how conformal predictors yield miscalibrated uncertainty estimates under natural distribution shifts. We next propose a simple recalibration method that only uses unlabeled examples from the target distribution. We show that our method correctly recalibrates a popular conformal predictor (Sadinle et al., 2019) on a theoretical toy model. We provide empirical results for various natural distribution shifts of ImageNet showing that recalibrating conformal predictors using our proposed method significantly reduces the performance gap. In certain cases, it even achieves near oracle-level coverage.

Related work. Several works have considered the robustness of conformal prediction to distribution shift (Tibshirani et al., 2019; Gibbs & Candes, 2021; Park et al., 2022; Barber et al., 2023; Prinster et al., 2022; 2023; Gibbs & Candès, 2023; Fannjiang et al., 2022). Gibbs & Candes (2021); Gibbs & Candès (2023) consider a setting where the distribution varies over time and propose an adaptive conformal prediction method to guarantee asymptotic and local coverage. Similarly, Barber et al. (2023) propose a weighted conformal prediction method that provably generalizes to the case where the distribution changes over time. On the other hand, Prinster et al. (2022; 2023) propose a weighted uncertainty quantification based on the jackknife+ method rather than the typical conformal prediction methods that we consider in this paper.
|
|
|
Of particular interest, Tibshirani et al. (2019) and Park et al. (2022) propose methods that assume a covariate shift and calibrate based on estimating the amount of covariate shift; we compare to those methods in Section 5.2. Podkopaev & Ramdas (2021) study the related but distinct setting of label shift between the source and target domains and propose a method that is more robust under label shift. In contrast, we focus on complex image datasets for which covariate shift is not well defined and label shift is not broadly relevant.
|
|
|
We are not aware of other works studying calibration of conformal predictors under distribution shift based on unlabeled examples. However, prior works propose to make conformal predictors robust to various distribution shifts from the source distribution of the calibration set (Cauchois et al., 2020; Gendler et al., 2022), by calibrating the conformal predictor to achieve a desired coverage in the worst-case scenario of the considered distribution shifts. Cauchois et al. (2020) considers covariate shifts and calibrates the conformal predictor to achieve coverage for the worst-case distribution within the f-divergence ball of the source distribution.
|
|
|
Gendler et al. (2022) considers adversarial perturbations as distribution shifts and calibrates a conformal predictor to achieve coverage for the worst-case distribution obtained through ℓ2-norm bounded adversarial noise. |
|
|
|
While making the conformal predictor robust to a range of worst-case distributions at calibration time allows maintaining coverage under the worst-case distributions, these approaches have two shortcomings: |
|
First, natural distribution shifts are difficult to capture mathematically, and models like covariate shift or adversarial perturbations do not seem to model natural distribution shifts (such as that from ImageNet to ImageNetV2) accurately. Second, calibrating for a worst-case scenario results in an overly conservative conformal predictor that tends to yield much higher coverage than desired for test distributions that correspond to a less severe shift from the source, which comes at the cost of reduced efficiency (i.e., larger set size, or larger confidence interval length). In contrast, our method does not compromise the efficiency of the conformal predictor on easier distributions, as we recalibrate the conformal predictor for any new dataset.
|
|
|
A related problem is to predict the accuracy of a classifier on new distributions from unlabeled data sampled from a new distribution (Deng & Zheng, 2021; Chen et al., 2021; Jiang et al., 2022; Deng et al., 2021; Guillory et al., 2021; Garg et al., 2022). In particular, Garg et al. (2022) proposed a simple method that achieves state-of-the-art performance in predicting classifier accuracy across a range of distributions. However, the calibration problem we consider is fundamentally different from estimating the accuracy of a classifier. While predicting the accuracy of the classifier would allow making informed decisions on whether to use the classifier for a new distribution, it doesn't provide a solution for recalibration.
|
|
|
## 2 Background On Conformal Prediction |
|
|
|
Consider a black-box classifier with input feature vector x ∈ R^d that outputs a probability estimate πℓ(x) ∈ [0, 1] for each class ℓ = 1, . . . , L. Typically, the classifier is a neural network trained on some distribution, and the probability estimates are the softmax outputs. We denote the order statistics of the probability estimates by π(1)(x) ≥ π(2)(x) ≥ . . . ≥ π(L)(x).
|
|
|
Many conformal predictors use a calibration set D^P_cal = {(xi, yi)}^n_{i=1} to find a cutoff threshold (Sadinle et al., 2019; Romano et al., 2020; Angelopoulos et al., 2020; Bates et al., 2021) that achieves the desired empirical coverage on this set. Here, the superscript P denotes the distribution from which the examples in the calibration set are sampled. Given a set-valued function C(x, u, τ) ⊂ {1, . . . , L} containing the set of classes predicted by the conformal predictor, such conformal predictors compute the threshold parameter τ as

$$\tau^{*}=\operatorname*{inf}\left\{\tau:|\left\{i:y_{i}\in{\mathcal{C}}(\mathbf{x}_{i},u_{i},\tau)\right\}|\geq(1-\alpha)(n+1)\right\},\tag{2}$$

where ui is added randomization to smoothen the cardinality term, chosen independently and uniformly from the interval [0, 1]; see Vovk et al. (2005) on smoothed conformal predictors. Finally, the '+1' in the (n + 1) term is a bias correction for the finite size of the calibration set.
|
|
|
This conformal calibration procedure achieves distributional coverage as defined in expression (1) for any set-valued function C(x, u, τ) satisfying the nesting property C(x, u, τ1) ⊆ C(x, u, τ2) for τ1 < τ2; see (Angelopoulos et al., 2020, Thm. 1).
|
|
|
In this paper, we primarily focus on the popular conformal predictors *Thresholded Prediction Sets* (TPS) (Sadinle et al., 2019) and *Adaptive Prediction Sets* (APS) (Romano et al., 2020). The set generating functions of the two conformal predictors are

$$\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau)=\{\ell=1,\ldots,L\colon\pi_{\ell}(\mathbf{x})\geq1-\tau\}\,,\tag{3}$$

$$\mathcal{C}^{\mathrm{APS}}(\mathbf{x},u,\tau)=\{\ell=1,\ldots,L\colon\sum_{j=1}^{\ell-1}\pi_{(j)}(\mathbf{x})+u\cdot\pi_{(\ell)}(\mathbf{x})\leq\tau\},\tag{4}$$

with u ∼ U(0, 1) for smoothing. The set generating function of TPS doesn't require smoothing, since each softmax score is independently thresholded and therefore there are no discrete jumps. Computing the threshold τ through conformal calibration (2) requires a labeled calibration set from distribution P. We therefore add a superscript to the threshold to designate which distribution the calibration set was sampled from; for example, τ^P indicates that the calibration set was sampled from the distribution P.

The prediction set functions C^TPS for TPS and C^APS for APS both satisfy the nesting property. Therefore, TPS and APS calibrated on a calibration set D^P_cal by computing the threshold in expression (2) are guaranteed to achieve coverage on the distribution P. However, coverage is only guaranteed if the test distribution Q is the same as the calibration distribution P.
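To make the calibration step concrete, the following NumPy sketch (our illustration, not the authors' released code) computes the threshold in (2) from conformity scores and evaluates the TPS set (3); the inputs `probs_cal` (an n × L softmax matrix) and `labels_cal` are assumed placeholders for a calibration set.

```python
import numpy as np

def tps_scores(probs, labels):
    # y is in C^TPS(x, tau) iff pi_y(x) >= 1 - tau, i.e. iff 1 - pi_y(x) <= tau
    return 1.0 - probs[np.arange(len(labels)), labels]

def aps_scores(probs, labels, rng):
    # cumulative mass of classes ranked above the true label plus a random
    # fraction u ~ U(0, 1) of the true label's own mass (smoothing)
    true_p = probs[np.arange(len(labels)), labels]
    mass_above = np.where(probs > true_p[:, None], probs, 0.0).sum(axis=1)
    return mass_above + rng.uniform(size=len(labels)) * true_p

def calibrate(scores, alpha):
    """Conformal calibration as in (2): the smallest tau covering at least
    (1 - alpha)(n + 1) of the n calibration examples."""
    n = len(scores)
    k = int(np.ceil((1 - alpha) * (n + 1)))  # bias-corrected rank
    return np.sort(scores)[min(k, n) - 1]

def tps_set(probs, tau):
    # prediction set (3): all classes whose softmax score is at least 1 - tau
    return np.where(probs >= 1 - tau)[0]

# usage with an (n, L) softmax matrix probs_cal and integer labels labels_cal:
# tau_tps = calibrate(tps_scores(probs_cal, labels_cal), alpha=0.1)
# tau_aps = calibrate(aps_scores(probs_cal, labels_cal, np.random.default_rng(0)), alpha=0.1)
```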
|
|
|
## 3 Failures Under Distribution Shifts And Problem Statement |
|
|
|
Often we are most interested in quantifying uncertainty with conformal prediction when we apply a classifier to new data that might come from a slightly different distribution than the distribution we calibrated on. |
|
|
|
Yet, conformal predictors only provide coverage guarantees for data coming from the same distribution as the calibration set, and the coverage guarantees often fail even under slight distribution shifts. For example, our experiments (see Figure 3) show that APS calibrated on ImageNet-Val to yield 1 − α = 0.9 coverage only achieves a coverage of 0.64 on the ImageNet-Sketch dataset, which consists of sketch-like images of the ImageNet classes and hence constitutes a distribution shift (Wang et al., 2019).
|
|
|
Different conformal predictors typically have different coverage gaps under the same distribution shift. More efficient conformal predictors (i.e., those that produce smaller prediction sets) tend to have a larger coverage gap under a distribution shift. For example, both TPS and RAPS (a generalization of APS proposed by Angelopoulos et al. (2020)) yield smaller confidence sets, but only achieve a coverage of 0.38 vs. 0.64 for APS on the ImageNet-Sketch distribution shift discussed above. |
|
|
|
![3_image_0.png](3_image_0.png) |
|
|
|
Figure 1: **Left**: Vanilla conformal prediction. **Right**: QTC recalibration. QTC encapsulates the conformal calibration process to recalibrate the conformal predictor for each new distribution without altering the underlying set generating function. D^Q_tst is the unlabeled test set and D^P is the labeled training/calibration set. QTC finds a threshold on the scores of the model on the unlabeled samples and predicts the coverage level by utilizing how the distribution of the scores changes across the test distributions with respect to this threshold.
|
Even under more subtle distribution shifts such as subpopulation shifts (Santurkar et al., 2021), the achieved coverage can drop significantly. For example, APS calibrated to yield 1 − α = 0.9 coverage on the source distribution of the Living-17 BREEDS dataset only achieves a coverage of 0.68 on the target distribution. |
|
|
|
The source and target distributions contain images of exclusively different breeds of animals while the animals' species is shared as the label (Santurkar et al., 2021). |
|
|
|
Problem statement. Our goal is to recalibrate a conformal predictor on a new distribution Q based on unlabeled data. Given an unlabeled dataset D^Q_tst = {x1, . . . , xn} sampled from the target distribution Q, our goal is to provide an accurate estimate τ̂^Q for the threshold τ^Q. Recall that the threshold τ^Q is chosen so that the conformal predictor with set function C(x, u, τ^Q) achieves the desired coverage of 1 − α on the target distribution Q. In other words, our goal is to estimate a threshold τ̂^Q so that the set C(x, u, τ̂^Q) achieves close to the desired coverage of 1 − α on the target distribution, based on the unlabeled dataset only. In general, it is impossible to guarantee coverage, since conformal prediction relies on exchangeability assumptions which cannot be guaranteed in practice for new datasets (Vovk et al., 2005; Romano et al., 2020; Angelopoulos et al., 2020; Cauchois et al., 2020; Bates et al., 2021). However, we will see that we can consistently estimate the threshold τ^Q for a variety of natural distribution shifts.
|
|
|
We refer to the difference between the target coverage of 1 − α and the actual coverage achieved on a given distribution without any recalibration efforts as the *coverage gap*. We assess how effective a recalibration method is based on the reduction of the coverage gap after recalibration. |
|
|
|
## 4 Methods |
|
|
|
In this section we introduce our calibration method, termed Quantile Thresholded Confidence (QTC), along with baseline methods we consider in our experiments. |
|
|
|
## 4.1 Quantile Thresholded Confidence |
|
|
|
Consider a conformal predictor with threshold τ^P_α calibrated so that the conformal predictor achieves coverage 1 − α on the source distribution P. On a different distribution Q the coverage of the conformal predictor is off. But there is a value β such that, if we calibrate the conformal predictor on the *source distribution* using the value β instead of α, it achieves 1 − α coverage on the *target distribution*, i.e., the corresponding thresholds obey τ^P_β = τ^Q_α.

Our method first estimates the value β based on unlabeled examples. From the estimate β̂, we estimate τ^Q_α by computing the threshold τ^P_β̂, i.e., by calibrating the conformal predictor on the source calibration set using β̂. This yields a threshold close to the desired one, i.e., τ^P_β̂ ≈ τ^Q_α.
|
|
|
Step 1, estimation of β: We are given a labeled source dataset D^P_cal and an unlabeled target dataset D^Q_tst. Our estimate of β relies on the quantile function

$$q({\mathcal{D}},c)=\inf\left\{p\colon\frac{1}{|{\mathcal{D}}|}\sum_{{\mathbf{x}}\in{\mathcal{D}}}\mathbbm{1}_{\{s(\pi({\mathbf{x}}))<p\}}\geq c\right\}.\tag{5}$$

The quantile function depends on the classifier's predictions through a score function s(π(x)) = maxℓ πℓ(x), which we take as the largest softmax score of the classifier's predictions. Here, D is a set of unlabeled examples and c ∈ [0, 1] is a scalar. Our method first identifies a threshold based on the unlabeled target dataset D^Q_tst for a desired coverage level α in expression (5) by computing q(D^Q_tst, α). Since this process is identical to finding the α-th quantile of the scores on the dataset, we dub the method Quantile Thresholded Confidence (QTC). QTC estimates β as
|
|
|
$$\beta_{\mathrm{QTC}}=\operatorname*{min}(\beta_{\mathrm{QTC-T}},\beta_{\mathrm{QTC-S}}),\tag{6}$$

where the QTC-Target and QTC-Source estimates are

$$\beta_{\mathrm{QTC-T}}(\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}})=\frac{1}{|\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}}}\mathbbm{1}_{\{s(\pi(\mathbf{x}))<q(\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}},\,\alpha)\}},\tag{7}$$

$$\beta_{\mathrm{QTC-S}}(\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}})=1-\frac{1}{|\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}}}\mathbbm{1}_{\{s(\pi(\mathbf{x}))<q(\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}},\,1-\alpha)\}}.\tag{8}$$
|
We consider two estimates for β and aggregate them into a single value by taking the minimum of the two. This yields the best performance, as demonstrated by studying the three versions of QTC corresponding to the three estimates (6), (7), and (8).
|
|
|
The reason for having two estimates and aggregating them is as follows. DNNs have a tendency to be over-confident in their predictions (Guo et al., 2017). If the distribution of the softmax scores over the dataset is not sufficiently smooth in the lower-confidence regime, the QTC-T estimate might be inaccurate. In this case, QTC-S, which operates in the higher-confidence regime, provides a better estimate. The minimum of the two provides a good estimate in both the high- and low-confidence regimes.
|
|
|
The motivation behind QTC is that we essentially map the quantile function conformal prediction uses, which relies on the labels, to the quantile function of QTC, which does not require labels. While this mapping is not guaranteed to be preserved under distribution shift, we have observed that it works very well in practice and provably works in the theoretical setting that we consider. |
|
|
|
If there is no distribution shift between the source and target, QTC recovers the original α: the sums in the QTC-T and QTC-S estimates are asymptotically equal to α and 1 − α respectively, so that both estimates of β equal α. To see this, note that we can insert the definition of q in (5) into the RHS of equations (7) and (8). As n → ∞, the sums over the datasets converge to expectations, which are equal when no distribution shift is present.
|
|
|
Step 2, estimation of the threshold τ^Q_α based on β: QTC predicts the conformal threshold τ^Q_α by conformal calibration with target value βQTC. Specifically, we calibrate the conformal predictor on the dataset D^P_cal as

$$\tau_{\rm QTC}=\inf\left\{\tau:|\{i:y_{i}\in{\cal C}({\bf x}_{i},u_{i},\tau)\}|\geq(1-\beta_{\rm QTC})(|{\cal D}_{\rm cal}^{\mathcal{P}}|+1)\right\},\tag{9}$$

which yields the estimate τQTC for τ^Q_α. QTC is illustrated in Figure 1.
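For concreteness, a minimal NumPy sketch of the two QTC steps follows (our illustration, with assumed inputs): `scores_src` and `scores_tgt` hold the maximum softmax score s(π(x)) on the labeled source calibration set and on the unlabeled target set, respectively, and `conf_scores_src` holds the conformity scores used in the calibration step (2), e.g., 1 − πy(x) for TPS.

```python
import numpy as np

def quantile_threshold(scores, c):
    """q(D, c) from (5): the infimum over p such that at least a fraction c
    of the scores in D fall strictly below p, i.e. the c-th quantile of the scores."""
    s = np.sort(scores)
    k = int(np.ceil(c * len(s)))
    return s[max(min(k, len(s)), 1) - 1]

def qtc_beta(scores_src, scores_tgt, alpha):
    """Step 1: estimate beta from max-softmax scores only, via (6)-(8)."""
    beta_t = np.mean(scores_src < quantile_threshold(scores_tgt, alpha))            # (7)
    beta_s = 1.0 - np.mean(scores_tgt < quantile_threshold(scores_src, 1 - alpha))  # (8)
    return min(beta_t, beta_s)                                                      # (6)

def qtc_threshold(conf_scores_src, scores_src, scores_tgt, alpha):
    """Step 2: recalibrate on the labeled source calibration set at level beta_QTC, as in (9)."""
    beta = qtc_beta(scores_src, scores_tgt, alpha)
    n = len(conf_scores_src)
    k = int(np.ceil((1 - beta) * (n + 1)))
    return np.sort(conf_scores_src)[min(k, n) - 1]
```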
|
|
|
QTC is inspired by a method for predicting a classifier's accuracy from Garg et al. (2022). The method of Garg et al. (2022) finds a threshold on the scores that matches the accuracy of the classifier on a given dataset and uses it to predict the accuracy on other datasets. In contrast, we predict the threshold of a conformal predictor, and our method is based on predicting an auxiliary parameter β instead of a threshold directly.
|
|
|
## 4.2 Baseline Methods |
|
|
|
We consider regression-based methods as baselines. Such methods have been used for predicting classification accuracy, assuming a correlation between the classification accuracy and a feature (e.g., average confidence) across different distributions (Deng et al., 2021; Deng & Zheng, 2021; Guillory et al., 2021). Here, we use them to predict the conformal threshold on a target distribution that would achieve 1 − α coverage. We train the regression-based methods on a dataset consisting of synthetically generated distributions derived from a source distribution (e.g., ImageNet-C from ImageNet), with the goal of predicting the conformal threshold for a test dataset sampled from a natural distribution.
|
|
|
Let ϕπ(D): R^L → R^d be the feature extractor part of a neural network that maps the softmax scores of the classifier to the features for a given dataset D. A simple example is the one-dimensional feature (d = 1) extracted by computing the average confidence of a given classifier across the examples of a given dataset.

We fit a regression function fθ parameterized by different feature extractors ϕπ by minimizing the mean squared error between the output and the calibrated threshold τ across the distributions as

$$\hat{\theta}=\arg\operatorname*{min}_{\theta}\sum_{j}\left(f_{\theta}(\phi_{\pi}({\mathcal{D}}_{j}))-\tau^{{\mathcal{P}}_{j}}\right)^{2}.\tag{10}$$
|
We consider the following choices for the feature extractor ϕπ (see App A.1 for details): |
|
- *Average confidence regression (ACR)*: The average confidence of the classifier across the entire dataset. |
|
|
|
- *Difference of confidence regression (DCR)* (Guillory et al., 2021): The average confidence of the classifier across the entire dataset offset by the average confidence on the source dataset. Prediction is also for the offset target τ − τ P . DCR performs better than ACR for predicting a classifier's accuracy (Guillory et al., 2021). |
|
|
|
- *Confidence histogram-density regression (CHR)*: Normalized histogram density of the classifier confidence across the dataset, where the feature dimension is controlled by a hyperparameter that determines the number of histogram bins in the probability range [0, 1]. Neural networks tend to be overconfident in their predictions, which heavily skews the histogram densities toward the last bin. We therefore also consider a variant of CHR, *dubbed CHR-*, where we drop the last bin of the histogram as a feature.
|
|
|
- *Predicted class-wise average confidence regression (PCR)*: Class-wise (by predicted class) average confidence of the classifier across the samples. |
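As an illustration of how these regression baselines are fit (a sketch under our own assumptions, shown with the simple ACR feature and scikit-learn's MLPRegressor standing in for the 4-layer MLP used in the experiments), the regressor of (10) is trained on feature/threshold pairs computed for a collection of synthetic distributions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def avg_confidence(probs):
    # ACR feature (d = 1): mean of the maximum softmax score over the dataset
    return np.array([probs.max(axis=1).mean()])

def fit_threshold_regressor(probs_per_dist, tau_per_dist):
    """Fit the regressor of (10) on feature/threshold pairs, one pair per synthetic
    distribution D_j (e.g., one ImageNet-C corruption at one severity level)."""
    X = np.stack([avg_confidence(p) for p in probs_per_dist])
    y = np.asarray(tau_per_dist)
    reg = MLPRegressor(hidden_layer_sizes=(64, 64, 64), activation="relu",
                       max_iter=2000, random_state=0)   # squared loss, as in (10)
    return reg.fit(X, y)

# prediction of the conformal threshold for an unlabeled target dataset probs_tgt:
# tau_hat = fit_threshold_regressor(probs_per_dist, tau_per_dist).predict(
#     avg_confidence(probs_tgt).reshape(1, -1))[0]
```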
|
|
|
## 5 Experiments |
|
|
|
We study the performance of QTC on natural distribution shifts and on an artificial covariate shift.
|
|
|
## 5.1 Natural Distribution Shifts |
|
|
|
We consider the following choices for the source distribution P and associated natural distribution shifts: |
|
ImageNet (Deng et al., 2009) distribution shifts: In our ImageNet experiments, ImageNet is the source distribution P and the following natural distribution shifts are the target distributions Q: |
|
- **ImageNetV2** (Recht et al., 2019) was constructed by following the same procedure as for constructing and labeling the original ImageNet dataset. However, all standard models perform significantly worse on ImageNetV2 relative to the original ImageNet test set. |
|
|
|
- **ImageNet-Sketch** (Wang et al., 2019) contains sketch-like images of the objects in the original ImageNet, but otherwise matches the original categories and scales. |
|
|
|
![6_image_0.png](6_image_0.png) |
|
|
|
Figure 2: Coverage obtained by TPS for a desired coverage of 1 − α = 0.9 on the target distribution Q after recalibration using the unlabeled samples from Q for various recalibration methods. The dotted line is the coverage without recalibration, and the dashed line is the target coverage 1 − α = 0.9. The figure shows that QTC-T and QTC-S almost fully close the coverage gap across ImageNet and BREEDS test distribution shifts, corresponding to varying severities. |
|
|
|
![6_image_1.png](6_image_1.png) |
|
|
|
![6_image_2.png](6_image_2.png) |
|
|
|
Figure 3: Coverage obtained by TPS and APS on the target distribution Q as a function of the desired coverage (i.e., 1 − α) after recalibration with the respective prediction method. For regression methods, only the best performing method, CHR-, is shown. QTC significantly closes the coverage gap across the range of 1 − α, while CHR- yields inconsistent or insufficient performance improvements. |
|
- **ImageNet-R** (Hendrycks et al., 2021) contains artwork images of the ImageNet class objects found on the web. ImageNet-R only covers a 200-class subset of the original ImageNet. We don't limit our experiments to this subset but instead consider the adverse setting of calibrating on all 1000 classes, since our main goal is to provide an end-to-end solution for recalibrating conformal predictors and we are interested in how well our method performs under practical complications such as dataset imbalance.
|
|
|
BREEDS (Santurkar et al., 2021) distribution shifts: The BREEDS datasets feature *subpopulation* shifts from the training set to the test set. The BREEDS datasets were constructed using the existing ImageNet images, but with different classes. BREEDS utilizes the hierarchical WordNet structure of the classes to choose a parent class that makes the original ImageNet classes the leaves. For example, in the BREEDS Living-17 dataset, one of the classes is *domestic cat*. This is a parent class of several ImageNet classes, namely *tiger cat, Egyptian cat, Persian cat, and Siamese cat*. BREEDS induces a subpopulation shift from the source distribution to the target by assigning these leaf classes to either the source or the target. For example, the images in the source dataset of Living-17 under the *domestic cat* class are those of either *tiger cats* or *Egyptian cats*, whereas in the target they are those of either *Persian cats* or *Siamese cats*. Therefore, despite having the same label (*domestic cat*), the source and target distributions semantically differ due to the differences between the breeds, which induces a subpopulation shift.

We consider three BREEDS datasets: Entity-13, Entity-30, and Living-17, which are named using the convention theme/object type–*#classes*.

Experimental procedure. For the ImageNet experiments we use a ResNet-50 and a DenseNet-121 pretrained on the ImageNet training set. For the BREEDS experiments, we train a ResNet-18 model from scratch. In both cases, the classifiers only see examples from the source distribution.
|
|
|
For all experiments, we first calibrate the conformal predictor on the source distribution P to find the cutoff threshold τ^P. For QTC and its variants, we find the threshold q using expression (5). For the regression methods, we use the ImageNet-C dataset (Hendrycks & Dietterich, 2019) as the source of synthetic distributions, find the cutoff threshold τ for each of the distributions, and fit a regressor by minimizing the loss (10). For the regression function we use a 4-layer MLP with ReLU activations. ImageNet-C is obtained by synthetically perturbing the images of ImageNet-Val with 18 different types of perturbations at 5 different levels of severity, resulting in 90 distinct distributions.

Recalibration experiments for a fixed target coverage. We first evaluate the recalibration methods for a fixed target coverage of 1 − α = 0.9. The results in Figure 2 for recalibrating TPS show that QTC reduces the coverage gap much more than the regression methods, and even closes it in some cases.
|
|
|
We also display QTC-T and QTC-S as ablation studies. Here it can be seen that sometimes QTC-T and sometimes QTC-S performs best, which is why combining them is necessary. The different behavior of QTC-T and QTC-S can be attributed to the difference in the type of shift (e.g., semantic vs. subpopulation) between ImageNet and BREEDS. Note that QTC-T operates on the regime of samples with lower confidence, whereas QTC-S operates on the higher-confidence regime. Therefore, QTC-T may underperform QTC-S for datasets consisting of fewer, more distinct classes like BREEDS, for which a well-trained classifier tends to assign high confidence to its predictions.
|
|
|
Recalibration experiments for different target coverage levels. The coverage gap (i.e., the difference of achieved coverage and targeted coverage) varies across the desired coverage level 1 − α. We therefore next evaluate the performance as a function of the desired coverage level. |
|
|
|
Figure 3 shows the coverage obtained after recalibration with TPS and APS for different values of 1 − α for the natural distribution shifts from ImageNet. QTC closes the coverage gap significantly for all choices of 1 − α, whereas the best performing regression-based baseline method, CHR-, fails to significantly improve the coverage gap consistently across all choices of 1 − α. |
|
|
|
## 5.2 Comparison To Covariate Shift Based Methods |
|
|
|
QTC does not require labeled data from the target distribution at training or inference time. Existing methods that aim to measure the amount of covariate shift based on unlabeled examples can also improve the robustness of conformal prediction, but rely on labeled examples from the target domain (Tibshirani et al., 2019; Park et al., 2022). Here, we compare the performance of QTC to that of covariate-shift-based methods and show that QTC outperforms the state of the art when labeled data is not available during training, and performs only marginally worse if labeled data is available.
|
|
|
![8_image_0.png](8_image_0.png) |
|
|
|
Figure 4: Coverage (**left**) and the average set size (**right**) obtained by TPS on the target Q = |
|
DomainNet-Infograph for various settings of (1 − α). For the setting where all domains are available for the discriminator (**left**), WSCI closes the coverage gap while QTC considerably improves it; whereas when only DomainNet-Real is available, QTC slightly outperforms. In both settings, PS-W fails by constructing uninformatively large confidence sets for the range 1 − α > 0.9. |
|
Under a covariate shift, the conditional distribution of the label y given the feature vector x is fixed, but the marginal distributions of the feature vectors differ:

$$\text{source:}\ (\mathbf{x},y)\sim\mathcal{P}=p_{\mathcal{P}}(\mathbf{x})\times p(y|\mathbf{x}),\qquad\text{target:}\ (\mathbf{x},y)\sim\mathcal{Q}=p_{\mathcal{Q}}(\mathbf{x})\times p(y|\mathbf{x}),$$

where p_P(x) and p_Q(x) are the marginal PDFs of the features x, and p(y|x) is the conditional PDF of the label y. In order to account for a covariate shift, Tibshirani et al. (2019); Park et al. (2022) utilize an approach called weighted conformal calibration. Weighted conformal calibration uses the likelihood ratio of the covariate distributions, i.e., the importance weights w(x) = p_Q(x)/p_P(x), to weigh the scores used for the set generating function of the conformal predictor for each sample (x, y) ∈ D^P_cal. A conformal predictor calibrated on a source calibration set with the true importance weights for a target distribution is guaranteed to achieve the desired coverage on the target; see Tibshirani et al. (2019, Cor. 1). In practice, the importance weights are not known and are therefore estimated heuristically.
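A simplified sketch of weighted conformal calibration follows (our illustration; the actual WSCI and PS-W procedures differ in their details): `conf_scores_cal` are the conformity scores on the source calibration set, and `w_cal`, `w_test` are importance-weight estimates, e.g., obtained from a domain discriminator.

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Smallest value v such that the normalized weight of {values <= v} is at least q."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cum, q)]

def weighted_threshold(conf_scores_cal, w_cal, w_test, alpha):
    """Weighted split conformal threshold for one test point: the weighted
    (1 - alpha)-quantile of the calibration scores, with the test point's weight
    placed on a point mass at +infinity (a simplified sketch of the procedure
    of Tibshirani et al. (2019))."""
    scores = np.append(conf_scores_cal, np.inf)
    weights = np.append(w_cal, w_test)
    return weighted_quantile(scores, weights, 1 - alpha)

# importance weights from a domain discriminator g(x) = P[x belongs to the target]:
# w(x) ~ g(x) / (1 - g(x)), up to a constant that cancels after normalization
```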
|
|
|
Covariate shift is not well defined for complex tasks such as image classification. We therefore follow the experimental setup of Park et al. (2022) and consider a backbone ResNet-101 classifier trained using unsupervised domain adaptation on training sets sampled from both the source and target distributions, as well as an auxiliary classifier (discriminator) g that yields probability estimates of domain membership for a given sample. For the *weighted split conformal inference* (WSCI) method of Tibshirani et al. (2019), we estimate the importance weights using this discriminator g; for the PAC prediction sets method of Park et al. (2022) based on rejection sampling (PS-W), we use histogram density estimation over the probability estimates. We use TPS as the conformal predictor.
|
|
|
We consider the DomainNet distribution shift problem (Peng et al., 2019) and choose *DomainNet-Infograph* as the target distribution since the coverage gap is insignificant for the others (see Park et al. (2022, Table 1)). |
|
|
|
We consider two scenarios, for both of which all six DomainNet domains, i.e., *DomainNet-Sketch, DomainNet-Clipart, DomainNet-Painting, DomainNet-Quickdraw, DomainNet-Real, and DomainNet-Infograph*, are available during training. In the first scenario all domains are also available at inference, whereas in the second scenario, analogous to the ImageNet setup, we only have access to the examples from *DomainNet-Real* (source) and *DomainNet-Infograph* (target).
|
|
|
The results in Figure 4 show that when the source includes all the domains, WSCI outperforms other methods. |
|
|
|
However, when only DomainNet-Real is available for the source at calibration time, QTC slightly outperforms WSCI. In both settings, PS-W fails if α is chosen such that 1 − α > 0.9, by constructing uninformatively large confidence sets that tend to contain all possible labels. On the other hand, QTC and WSCI tend to construct similarly sized confidence sets consistently across the range of 1 − α. Note that while QTC considerably closes the coverage gap in both setups, QTC-S fails to improve the coverage gap. This might be due to the fact that |
|
|
|
![9_image_0.png](9_image_0.png) |
|
|
|
Figure 5: Example source and target distributions P and Q for the binary classification model, and a classifier with winv = wsp = 1. The decision boundary is shown with a faded dotted line. The correlation between the feature xsp and the label y is higher for the source than for the target (p^P > p^Q).
|
ResNet-101 trained with domain adaptation tends to yield very high confidence across all examples. While a separate discriminator that uses the representations of the ResNet-101 before the fully-connected linear layer is utilized for the covariate shift based methods, this is not the case for QTC and its variants. Therefore, the threshold found by QTC-S tends to be very close or even equal to 1.0, hindering the performance. |
|
|
|
## 6 Theoretical Results |
|
|
|
We consider a simple binary classification distribution shift model from Nagarajan et al. (2021); Garg et al. (2022), and adapt the analysis from Garg et al. (2022) to show that recalibration provably succeeds within this model. Specifically, we show that the conformal predictor TPS with QTC-T yields the desired coverage of 1 − α on the target distribution based on unlabeled examples.
|
|
|
The distribution shift model from Nagarajan et al. (2021) is as follows. Consider a binary classification problem with response y ∈ {−1, 1} and with two features x = [xinv, xsp] ∈ R^2, an invariant one and a spuriously correlated one. The source and target distributions P and Q over the feature vector and label are defined as follows. The label y is uniformly distributed over {−1, 1}. The invariant, fully-predictive feature xinv is uniformly distributed in an interval determined by the constants c > γ ≥ 0, with the interval being conditional on y:

$$x_{\mathrm{inv}}|y\sim\begin{cases}U\left[\gamma,c\right]&\text{if}\quad y=1\\ U\left[-c,-\gamma\right]&\text{if}\quad y=-1\end{cases}.\tag{11}$$
|
|
|
The spurious feature xsp is correlated with the response y such that P_{(x,y)∼P}[xsp · y > 0] = p^P, where p^P ∈ (0.5, 1.0), for some joint distribution P. A distribution shift is modeled by simulating target data with a different degree of spurious correlation such that P_{(x,y)∼Q}[xsp · y > 0] = p^Q, where p^Q ∈ [0, 1]. There is a distribution shift from source to target when p^P ≠ p^Q. Two example distributions P and Q are illustrated in Figure 5.
|
|
|
We consider a logistic regression classifier that predicts class probability estimates for the classes y = −1 and y = 1 as

$$f(\mathbf{x})=\left[\frac{1}{1+e^{\mathbf{w}^{T}\mathbf{x}}},\ \frac{e^{\mathbf{w}^{T}\mathbf{x}}}{1+e^{\mathbf{w}^{T}\mathbf{x}}}\right],$$

where w = [winv, wsp] ∈ R^2. The classifier with winv > 0 and wsp = 0 minimizes the misclassification error across all choices of distributions P and Q (i.e., across all choices of p). However, a classifier learned by minimizing the empirical logistic loss via gradient descent depends on both the invariant feature xinv and the spuriously-correlated feature xsp, i.e., wsp ≠ 0, due to geometric skews on the finite data and statistical skews of the optimization with finite gradient descent steps (Nagarajan et al., 2021).
|
|
|
For the logistic regression classifier, TPS recalibrated with QTC-T provably succeeds:

Theorem 6.1 (Informal). *Consider the logistic regression classifier for the binary classification problem described above with winv > 0, wsp ≠ 0, let n be the number of samples for the source and target datasets, and let α ∈ (0, ϵ) be a user-defined value, where ϵ is the error rate of the classifier on the source. The coverage achieved on the target by recalibrating TPS on the source with the QTC estimate obtained in (7), by finding the QTC threshold on the target as in (5), converges to 1 − α as n → ∞ with high probability.*

Regarding the assumption on α: A value of α that is larger than the error rate of the classifier does not make sense, as it would result in empty confidence sets for a portion of the examples in the dataset.
|
|
|
In order to understand the intuition behind Theorem 6.1, we first explain how the coverage is off under a distribution shift in this model. Consider a classifier that depends positively on the spurious feature (i.e., |
|
wsp > 0). When the spurious correlation is decreased from the source distribution to the target, the error rate of the classifier increases. TPS calibrated on the source samples finds a threshold τ such that the prediction sets yield 1 − α coverage on the source dataset as n → ∞. In other words, the fraction of misclassified points for which the model confidence is larger than the threshold τ is equal to α on the source. As the spurious correlation decreases and the error rate increases from source to target, the fraction of misclassified points for which the model confidence is larger than the threshold τ surpasses α, leading to a gap in targeted and actual coverage. |
|
|
|
Now, we remark on how QTC recalibrates and ensures the target coverage is met. Note that there exists an unknown coverage level 1 − β that can be used to calibrate TPS on the source distribution such that it yields 1 − α coverage on the target. Theorem 6.1 guarantees that QTC correctly estimates β and therefore recalibration of the conformal predictor using QTC yields the desired coverage level of 1 − α on the target. |
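The following self-contained simulation is our own illustration of this mechanism (the constants c, γ, w, and the correlation levels are arbitrary choices made so that α < ϵ on the source): vanilla TPS calibrated on the source undercovers the target, while recalibrating at the QTC-T estimate of β should, per Theorem 6.1, bring the target coverage close to 1 − α.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 0.1, 200_000
w = np.array([1.0, 1.0])                 # w_inv > 0, w_sp > 0 (illustrative choice)

def sample(n, p, c=2.0, gamma=0.1):
    """Draw (x, y) from the toy model (11): y ~ U{-1,+1}, x_inv | y uniform on the
    margin interval, and x_sp agreeing with y with probability p."""
    y = rng.choice([-1, 1], size=n)
    x_inv = y * rng.uniform(gamma, c, size=n)
    x_sp = np.where(rng.uniform(size=n) < p, y, -y).astype(float)
    return np.stack([x_inv, x_sp], axis=1), y

def softmax_pos(X):                      # classifier's probability estimate for y = +1
    return 1.0 / (1.0 + np.exp(-(X @ w)))

def tps_threshold(true_scores, level):   # calibration (2) with TPS conformity scores 1 - pi_y(x)
    k = int(np.ceil((1 - level) * (len(true_scores) + 1)))
    return np.sort(1.0 - true_scores)[min(k, len(true_scores)) - 1]

Xs, ys = sample(n, p=0.75)               # source: stronger spurious correlation
Xt, yt = sample(n, p=0.50)               # target: weaker spurious correlation
score_s, score_t = softmax_pos(Xs), softmax_pos(Xt)
true_s = np.where(ys == 1, score_s, 1 - score_s)   # pi_y(x) on the source
true_t = np.where(yt == 1, score_t, 1 - score_t)   # pi_y(x) on the target
max_s, max_t = np.maximum(score_s, 1 - score_s), np.maximum(score_t, 1 - score_t)

tau_src = tps_threshold(true_s, alpha)                  # vanilla calibration on the source
q = np.sort(max_t)[int(np.ceil(alpha * n)) - 1]         # QTC threshold on the target, as in (5)
beta = np.mean(max_s < q)                               # QTC-T estimate (7)
tau_qtc = tps_threshold(true_s, beta)                   # recalibration (9) at level beta
print("coverage on target, no recalibration:", np.mean(1 - true_t <= tau_src))
print("coverage on target, QTC-T recalibrated:", np.mean(1 - true_t <= tau_qtc))
```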
|
|
|
## References |
|
|
|
Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. *International Conference on Learning Representations (ICLR)*, |
|
2020. |
|
|
|
Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. Conformal prediction beyond exchangeability. *The Annals of Statistics*, 2023. |
|
|
|
Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael I. Jordan. Distribution-free, risk-controlling prediction sets. *Journal of the ACM*, 2021. |
|
|
|
Maxime Cauchois, Suyash Gupta, Alnur Ali, and John C. Duchi. Robust validation: Confident predictions even when distributions shift. *arXiv:2008.04267 [cs, stat]*, 2020. |
|
|
|
Mayee Chen, Karan Goel, and Nimit Sohoni. Mandoline: Model evaluation under distribution shift. |
|
|
|
International Conference on Machine Learning (ICML), 2021. |
|
|
|
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2009. |
|
|
|
Weijian Deng and Liang Zheng. Are labels always necessary for classifier accuracy evaluation? Conference on Computer Vision and Pattern Recognition (CVPR), 2021. |
|
|
|
Weijian Deng, Stephen Gould, and Liang Zheng. What does rotation prediction tell us about classifier accuracy under varying testing environments? *International Conference on Machine Learning (ICML)*, |
|
2021. |
|
|
|
Clara Fannjiang, Stephen Bates, Anastasios N. Angelopoulos, Jennifer Listgarten, and Michael I. Jordan. |
|
|
|
Conformal prediction under feedback covariate shift for biomolecular design. *Proceedings of the National* Academy of Sciences, 2022. |
|
|
|
Saurabh Garg, Sivaraman Balakrishnan, and Zachary C Lipton. Leveraging unlabeled data to predict out-of-distribution performance. *International Conference on Learning Representations (ICLR)*, 2022. |
|
|
|
Asaf Gendler, Tsui-Wei Weng, Luca Daniel, and Yaniv Romano. Adversarially robust conformal prediction. |
|
|
|
International Conference on Learning Representations (ICLR), 2022. |
|
|
|
Isaac Gibbs and Emmanuel Candes. Adaptive conformal inference under distribution shift. In Advances in Neural Information Processing Systems (NeurIPS), 2021. |
|
|
|
Isaac Gibbs and Emmanuel Candès. Conformal inference for online prediction with arbitrary distribution shifts. *arXiv:2208.08401 [cs, stat]*, 2023. |
|
|
|
Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, and Ludwig Schmidt. Predicting with confidence on unseen distributions. *IEEE International Conference on Computer Vision (ICCV)*, 2021. |
|
|
|
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On Calibration of Modern Neural Networks. In International Conference on Machine Learning. PMLR, 2017. |
|
|
|
Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *International Conference on Learning Representations (ICLR)*, 2019. |
|
|
|
Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. *IEEE International Conference on* Computer Vision (ICCV), 2021. |
|
|
|
Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, and J. Zico Kolter. Assessing generalization of sgd via disagreement. *International Conference on Learning Representations (ICLR)*, 2022. |
|
|
|
Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J. Tibshirani, and Larry Wasserman. Distribution-Free Predictive Inference For Regression. *Journal of the American Statistical Association (JASA)*, 2018. |
|
|
|
Vaishnavh Nagarajan, Anders Andreassen, and Behnam Neyshabur. Understanding the failure modes of out-of-distribution generalization. *International Conference on Learning Representations (ICLR)*, 2021. |
|
|
|
Sangdon Park, Edgar Dobriban, Insup Lee, and Osbert Bastani. Pac prediction sets under covariate shift. In International Conference on Learning Representations (ICLR), 2022. |
|
|
|
Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In *IEEE International Conference on Computer Vision (ICCV)*, 2019. |
|
|
|
Aleksandr Podkopaev and Aaditya Ramdas. Distribution-free uncertainty quantification for classification under label shift. *arXiv:2103.03323 [cs, stat]*, 2021. |
|
|
|
Drew Prinster, Anqi Liu, and Suchi Saria. Jaws: Auditing predictive uncertainty under covariate shift. |
|
|
|
Advances in Neural Information Processing Systems (NeurIPS), 2022. |
|
|
|
Drew Prinster, Suchi Saria, and Anqi Liu. Jaws-x: Addressing efficiency bottlenecks of conformal prediction under standard and feedback covariate shift. In *International Conference on Machine Learning (ICML)*, |
|
2023. |
|
|
|
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? *International Conference on Machine Learning (ICML)*, 2019. |
|
|
|
Yaniv Romano, Matteo Sesia, and Emmanuel J. Candès. Classification with valid and adaptive coverage. |
|
|
|
Advances in Neural Information Processing Systems (NeurIPS), 2020. |
|
|
|
Mauricio Sadinle, Jing Lei, and Larry Wasserman. Least ambiguous set-valued classifiers with bounded error levels. *Journal of the American Statistical Association (JASA)*, 2019. |
|
|
|
Shibani Santurkar, Dimitris Tsipras, and Aleksander Madry. Breeds: Benchmarks for subpopulation shift. |
|
|
|
International Conference on Learning Representations (ICLR), 2021. |
|
|
|
Ryan J. Tibshirani, Rina Foygel Barber, Emmanuel J. Candes, and Aaditya Ramdas. Conformal prediction under covariate shift. *Advances in Neural Information Processing Systems (NeurIPS)*, 2019. |
|
|
|
Vladimir Vovk, A. Gammerman, and Glenn Shafer. *Algorithmic learning in a random world*. Springer, 2005. |
|
|
|
Haohan Wang, Songwei Ge, Eric P Xing, and Zachary C Lipton. Learning robust global representations by penalizing local predictive power. *Advances in Neural Information Processing Systems (NeurIPS)*, 2019. |
|
|
|
## Appendix A Proof Of Theorem 6.1 |
|
|
|
In this section, we state and prove a formal version of Theorem 6.1. Our results rely on adapting the proof idea of Garg et al. (2022, Theorem 3) for predicting the classification accuracy of a model to our conformal prediction setup. |
|
|
|
Recall that we consider a distribution shift model for a binary classification problem with an invariant predictive feature and a spuriously correlated feature, where a distribution shift is induced by the spurious feature of the target distribution being more or less correlated with the label than the source distribution (Nagarajan et al., 2021; Garg et al., 2022). |
|
|
|
We consider a logistic regression classifier that outputs class probability estimates (softmax scores) for the two classes of y = −1 and y = +1 as |
|
|
|
$$f(\mathbf{x})=\left[{\frac{1}{1+e^{\mathbf{w}^{T}\mathbf{x}}}},\ {\frac{e^{\mathbf{w}^{T}\mathbf{x}}}{1+e^{\mathbf{w}^{T}\mathbf{x}}}}\right],$$

where w = [winv, wsp] ∈ R^2. The classifier with winv > 0 and wsp = 0 minimizes the misclassification error across all choices of distributions P and Q (i.e., across all choices of p). However, a classifier learned by minimizing the empirical logistic loss via gradient descent depends on both the invariant feature xinv and the spuriously-correlated feature xsp, i.e., wsp ≠ 0, due to geometric skews on the finite data and statistical skews of the optimization with finite gradient descent steps (Nagarajan et al., 2021).
|
|
|
In order to understand how geometric skews result in learning a classifier that depends on the spurious feature, suppose the probability that the spurious feature agrees with the label is high, i.e., p is close to 1.0. Note that in a finite-size training set drawn from this distribution, the fraction of samples for which the spurious feature disagrees with the label (i.e., xsp ̸= y) is small. Therefore, the margin on the invariant feature for these samples alone can be significantly larger than the actual margin γ of the underlying distribution. This implies that the max-margin classifier depends positively on the spurious feature, i.e., wsp > 0. Furthermore, we assume that winv > 0, which is required to obtain non-trivial performance (beating a random guess). |
|
|
|
Conformal prediction in the distribution shift model. We consider the conformal prediction method TPS (Sadinle et al., 2019) applied to the linear classifier described above. While other conformal prediction methods such as APS and RAPS also work for this model, the smoothing induced by the randomization of the model scores used in those conformal predictors would introduce additional complexity to the analysis. |
|
|
|
TPS also tends to be more efficient in that it yields smaller confidence sets compared to APS and RAPS at the same coverage level; see (Angelopoulos et al., 2020, Table 9). In the remaining part of this section, we establish Theorem 6.1, which states that TPS recalibrated on the source calibration set with QTC achieves the desired coverage of 1 − α on any target distribution that has a (potentially) different correlation probability p for the spurious feature. We show this in two steps:
|
|
|
First, consider the oracle conformal predictor that is calibrated to achieve α miscoverage on the target distribution, i.e., the conformal predictor with threshold τ^Q_α chosen so that

$$\alpha=\mathrm{P}_{({\bf x},y)\sim\mathcal{Q}}\left[y\notin\mathcal{C}({\bf x},\tau_{\alpha}^{\mathcal{Q}})\right].\tag{12}$$

Define the miscoverage on the source distribution as

$$\beta=\mathrm{P}_{({\bf x},y)\sim{\mathcal P}}\left[y\notin{\mathcal C}({\bf x},\tau_{\alpha}^{\mathcal Q})\right].$$

From those two equations, it follows that a conformal predictor calibrated to achieve miscoverage β on the source distribution P achieves the desired miscoverage of α on the target distribution, provided that the calibration sets are sufficiently large, which is assumed as we consider the case of n → ∞.

Second, we provide a bound on the deviation of the QTC estimate from the true value of β. We show that in the infinite sample size case, the QTC estimate converges to the true value of β. Those two steps prove Theorem 6.1.
|
|
|
Step 1: QTC relies on the fact that there exists an unknown β ∈ (0, 1) that can be used to calibrate TPS on the source distribution such that it yields 1 − α coverage on the target. |
|
|
|
Here, we show that calibrating to achieve 1 − β coverage on the source calibration set D^P via computing the threshold (2) achieves 1 − α coverage on the target distribution Q as n → ∞.
|
|
|
We utilize the following coverage guarantee of conformal predictors established by Vovk et al. (2005); Lei et al. (2018); Angelopoulos et al. (2020): |
|
Lemma A.1. *(Lei et al., 2018, Thm. 2.2), (Angelopoulos et al., 2020, Thm. 1, Prop. 1) Consider (xi, yi), i = 1, . . . , n drawn iid from some distribution P. Let C(x, τ) be the conformal set generating function that satisfies the nesting property in τ, i.e., C(x, τ′) ⊆ C(x, τ) if τ′ ≤ τ. Then, the conformal predictor calibrated by finding τ∗ that achieves 1 − α coverage on the finite set {(xi, yi)}^n_{i=1} as in (2) achieves 1 − α coverage on distribution P, i.e.,*

$$\operatorname{P}_{(\mathbf{x},y)\sim{\mathcal{P}}}\left[y\in{\mathcal{C}}(\mathbf{x},\tau^{*})\right]\geq1-\alpha.\tag{13}$$

*Furthermore, assume that the variables* si = s(xi, yi) = inf{τ : yi ∈ C(xi, τ)} *for* i = 1, . . . , n *are distinct almost surely. Then, the coverage achieved by the calibrated conformal predictor with the set generating function* C(x, τ) = {ℓ ∈ Y : s(x, ℓ) ≤ τ} *is also accurate, in that it satisfies*

$$\operatorname{P}_{(\mathbf{x},y)\sim{\mathcal{P}}}\left[y\in{\mathcal{C}}(\mathbf{x},\tau^{*})\right]\leq1-\alpha+{\frac{1}{n+1}}.\tag{14}$$

Both the lower bound (13) and the upper bound (14) of Lemma A.1 apply to TPS in the context of the binary classification problem that we consider. To see this, we verify that TPS calibrated with the set generating function (19) satisfies both assumptions of Lemma A.1. First, note that TPS satisfies the nesting property, since we have C^TPS(x, τ′) ⊆ C^TPS(x, τ) for τ′ ≤ τ. Next, note that for TPS we have s(x, y) = πy(x). Further note that the linear logistic regression model we consider assigns a distinct score to each data point, and since the invariant feature xinv is uniformly distributed in a continuous interval conditional on y, the variables si are distinct almost surely.
|
|
|
Now, consider the oracle TPS threshold τ^Q_α that achieves 1 − α coverage, or equivalently α miscoverage, on the target distribution, i.e.,

$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[y\not\in\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau_{\alpha}^{\mathcal{Q}})\right]=\alpha.\tag{15}$$

Next, note that y ∉ C^TPS(x, τ^Q_α) if and only if arg max_{j∈{0,1}} πj(x) ≠ y and max_{j∈{0,1}} πj(x) ≥ τ^Q_α. To see that, note that the confidence set returned by TPS is a singleton containing only the top prediction of the model when the confidence of this prediction is higher than the threshold τ^Q_α. Moreover, the confidence set returned by TPS for the binary classification problem above does not contain the true label only when the confidence set is the singleton set of the top prediction of the model and is different than the true label. Thus, equation (15) implies

$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[\arg\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\neq y\ \text{and}\ \max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\geq\tau_{\alpha}^{\mathcal{Q}}\right]=\alpha.\tag{16}$$

We define β as the miscoverage that the oracle TPS yields on the source distribution, i.e.,

$$\beta:=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\arg\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\neq y\ \text{and}\ \max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\geq\tau_{\alpha}^{\mathcal{Q}}\right].\tag{17}$$

We have β ≠ α if there is a distribution shift from target to source.
|
|
|
Consider the threshold τ̂^P_β found by calibrating TPS on the set D^P to achieve empirical coverage of 1 − β as in (2). TPS with the threshold τ̂^P_β achieves coverage on the source distribution P as a result of Lemma A.1. Moreover, combining (13) with (14) as n → ∞ yields exact coverage of 1 − β on the source distribution P. Thus, we have

$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\arg\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\neq y\ \text{and}\ \max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\geq\hat{\tau}_{\beta}^{\mathcal{P}}\right]=\beta.\tag{18}$$

Comparing equation (18) to the definition of β in equation (17) yields τ̂^P_β = τ^Q_α. Therefore, it follows that TPS calibrated to achieve 1 − β coverage on the source calibration set D^P as in (2) achieves exactly 1 − α coverage on the target distribution Q as n → ∞.
|
|
|
Step 2: In the second step, we show that QTC correctly estimates the value of β defined above. This is formalized by the lemma below. |
|
|
|
Recall that the calibration of TPS entails identifying a cutoff threshold τ computed by the formula (2). The set generating function of TPS for the linear classification problem described above simplifies to

$${\mathcal{C}}^{\mathrm{TPS}}(\mathbf{x},\tau)=\left\{j\in\{0,1\}\colon\pi_{j}(\mathbf{x})\geq1-\tau\right\},\tag{19}$$

where π0(x) and π1(x) are the first and second entries of f(x) as defined above.
|
|
|
We are only interested in the regime where the desired coverage level 1 − α is larger than the classifier's accuracy, or equivalently *α < ϵ* with ϵ being the error rate of the classifier. This is because a trivial method that constructs confidence sets with equal length of 1 for all samples (i.e., singleton sets of only the predicted label) already achieves coverage of 1 − ϵ. |
|
|
|
Lemma A.2. Given the logistic regression classifier for the binary classification problem described above with any winv > 0, wsp ̸= 0, assume that the threshold q for QTC is computed using a dataset DQ consisting of n samples, sampled from some target distribution Q*, such that* |
|
|
|
$$\frac{1}{|\mathcal{D}^{\mathcal{Q}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{Q}}}1_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}=\alpha.\tag{1}$$ |
|
$$(20)$$ |
|
|
|
Consider the oracle TPS conformal predictor with conformal threshold τ Q |
|
α *, i.e., the predictor that achieves* 1−α coverage on the target distribution Q. Denote with 1−β *the coverage achieved on the source distribution* P by this oracle TPS. Fix a δ > 0*. The QTC estimate of the* miscoverage β*, denoted by* |
|
|
|
$$\beta_{\mathrm{QTC}}=\frac{1}{|\mathcal{D}^{\mathcal{P}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}},\tag{21}$$

satisfies the following inequality with probability at least 1 − δ over a randomly drawn set of examples DQ:
|
$$|\beta_{\mathrm{QTC}}-\beta|\leq\sqrt{\frac{2\log(16/\delta)}{n\cdot c_{\mathrm{sp}}}},\tag{22}$$
|
|
|
where $c_{\mathrm{sp}}=(1-p^{\mathcal{Q}})\cdot(1-p^{\mathcal{P}})^{2}$ if wsp > 0 and $c_{\mathrm{sp}}=p^{\mathcal{Q}}\cdot(p^{\mathcal{P}})^{2}$ otherwise.
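Before turning to the proof, the following minimal sketch (Python, with hypothetical array names) illustrates the quantities appearing in the lemma: the QTC threshold q computed from unlabeled target confidences as in (20), the resulting estimate βQTC computed from source confidences as in (21), and a TPS recalibration of the source threshold at level 1 − βQTC, as suggested by Step 1. It is an illustrative sketch under these assumptions, not the authors' reference implementation.

```python
import numpy as np

def qtc_recalibrate(max_conf_target, max_conf_source, true_class_probs_source, alpha):
    """Estimate the source-side miscoverage beta_QTC and recalibrate TPS with it.

    max_conf_target: (m,) max softmax confidences on *unlabeled* target examples.
    max_conf_source: (n,) max softmax confidences on the source calibration set.
    true_class_probs_source: (n,) probability of the true class on the source set.
    """
    # Equation (20): choose q so that a fraction alpha of the target confidences falls below it.
    q = np.quantile(max_conf_target, alpha)
    # Equation (21): fraction of *source* examples whose confidence falls below q.
    beta_qtc = np.mean(max_conf_source < q)
    # Recalibrate TPS on the source at level 1 - beta_qtc: 1 - tau is the
    # beta_qtc-quantile of the true-class probabilities on the source set.
    one_minus_tau = np.quantile(true_class_probs_source, beta_qtc)
    return beta_qtc, 1.0 - one_minus_tau
```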
|
Proof. We adapt the proof idea of Garg et al. (2022, Theorem 3), which pertains to the problem of estimating the classification error of the classifier on the target, to estimating the source coverage of the oracle conformal predictor that achieves 1 − α coverage on the target distribution. |
|
|
|
For notational convenience, we define the event that a sample (x, y) is not in the prediction set of the oracle TPS with conformal threshold $\tau_{\alpha}^{\mathcal{Q}}$ (i.e., $y\notin\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau_{\alpha}^{\mathcal{Q}})$) as
|
|
|
$$\mathcal{E}_{mc}=\{y\notin\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau_{\alpha}^{\mathcal{Q}})\}=\{\arg\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\neq y\ \text{ and }\ \max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\geq\tau_{\alpha}^{\mathcal{Q}}\}.$$
|
The infinite sample size case (n → ∞). In this part we show that, as n → ∞, the QTC estimate βQTC found as in (21) converges to the source miscoverage β, to illustrate the proof idea. For n → ∞, the QTC estimate βQTC in (21) becomes
|
|
|
$$\begin{aligned}\beta_{\mathrm{QTC}}&=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}\right]\\&=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\\&=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathcal{E}_{mc}\right]\\&=\beta,\end{aligned}\tag{23}$$
|
|
|
where the last equality is the definition of β as given in equation (17). The critical step in (23) is the third equality, which we establish in the remainder of this part of the proof.
|
|
|
First, we condition on the label y. Using the law of total probability, we get |
|
|
|
$$\begin{aligned}\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]&=\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y=-1}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\cdot\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y=-1\right]\\&\quad+\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y=+1}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\cdot\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y=+1\right]\\&\overset{(i)}{=}\frac{1}{2}\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y=-1}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]+\frac{1}{2}\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y=+1}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\\&\overset{(ii)}{=}\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right].\end{aligned}\tag{24}$$
|
For equality (i), we used that y is uniformly distributed over {−1, +1}, and for equality (ii) that x is symmetrically distributed with respect to the label y. That is, we have xinv ∼ U[−c, −γ] and P[xsp = −1] = p if y = −1, while xinv ∼ U[γ, c] and P[xsp = +1] = p if y = +1, so the two conditional probabilities in (i) are equal.
|
|
|
We can further expand the probability $\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]$ by additionally conditioning on the spurious feature xsp, which yields
|
|
|
$$\begin{aligned}\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}=y\right]\\&\quad+\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}\neq y\right].\end{aligned}\tag{25}$$
|
|
|
In order to simplify the RHS of equation (25), we consider the cases wsp > 0 and wsp < 0 separately. If wsp > 0, we have maxj∈{0,1} πj (x) > q whenever xsp = y. Therefore, $\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]=0$ if wsp > 0, and equation (25) simplifies to
|
|
|
$$\begin{aligned}\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}\neq y\right]\\&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\cdot(1-p^{\mathcal{P}}).\end{aligned}\tag{26}$$
|
|
|
Similarly, if wsp < 0, we have maxj∈{0,1} πj (x) > q if xsp ̸= y, and equation (25) becomes |
|
|
|
$$\begin{aligned}\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}=y\right]\\&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\cdot p^{\mathcal{P}}.\end{aligned}\tag{27}$$
|
|
|
We next follow the same steps that we carried out above for $\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]$ to rewrite the probability P(x,y)∼P [Emc]. If wsp > 0, the classifier makes no errors if xsp = y and only misclassifies a fraction of the examples with xsp ̸= y. Therefore, we have
|
|
|
$$\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[\mathcal{E}_{mc}\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\mathcal{E}_{mc}\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}\neq y\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\mathcal{E}_{mc}\right]\cdot(1-p^{\mathcal{P}}).\tag{28}$$
|
|
|
Similarly, for wsp < 0, we have |
|
|
|
$$\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[\mathcal{E}_{mc}\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\mathcal{E}_{mc}\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}=y\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\mathcal{E}_{mc}\right]\cdot p^{\mathcal{P}}.\tag{29}$$
|
|
Therefore, in order to establish equation (23), it suffices to show |
|
|
|
$$\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\mathcal{E}_{mc}\right]\quad\text{for }w_{\mathrm{sp}}>0,\text{ and}\tag{30}$$

$$\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\mathcal{E}_{mc}\right]\quad\text{for }w_{\mathrm{sp}}<0.\tag{31}$$
|
The feature xinv is identically distributed conditioned on y, i.e., uniformly distributed in the same interval, regardless of the underlying source or target distributions P and Q. Therefore, equations (30) and (31) are equivalent to |
|
|
|
$$\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathcal{E}_{mc}\right]\quad\text{for }w_{\mathrm{sp}}>0,\text{ and}\tag{32}$$

$$\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}=y}\left[\mathcal{E}_{mc}\right]\quad\text{for }w_{\mathrm{sp}}<0.\tag{33}$$
|
Equations (32) and (33) in turn follow from |
|
|
|
$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[\mathcal{E}_{mc}\right],\tag{34}$$
|
|
|
by carrying out the same steps that we used to expand the probabilities $\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]$ and $\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]$, starting from equation (24), in order to establish equations (30) and (31). Equation (34) in turn is a consequence of combining (16) with (20) as n → ∞. This establishes equation (23), as desired.
|
|
|
The finite sample case: We next show that the desired results approximately hold with high probability over a randomly drawn finite-sized set of examples DQ. We bound the difference between the LHS and RHS of (32) and (33) with high probability.
|
|
|
First, consider the case of wsp > 0, where, as noted above, the classifier only makes errors on examples with xsp ̸= y. We denote the set of points in the target set DQ for which the spurious feature disagrees with the label as
|
|
|
$$\mathcal{X}_{D}=\{i=1,\ldots,n\colon x_{\mathrm{sp},i}\neq y_{i},\ (\mathbf{x}_{i},y_{i})\in\mathcal{D}^{\mathcal{Q}}\},$$
|
and denote the set of points for which the spurious feature agrees with the label as |
|
|
|
$$\mathcal{X}_{A}=\{i=1,\ldots,n\colon x_{\mathrm{sp},i}=y_{i},\ (\mathbf{x}_{i},y_{i})\in\mathcal{D}^{\mathcal{Q}}\}.$$
|
|
|
Note that the QTC threshold q found on the entire set DQ as in (20) satisfies |
|
|
|
$$\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x}_{i})<q\right\}}=\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\mathcal{E}_{mc}(\mathbf{x}_{i},y_{i})\right\}},\tag{35}$$
|
|
|
which follows from noting that the classifier only makes an error on the subset XD if wsp > 0, and therefore the only points for which the event Emc is observed lie in the set XD. Similarly, as established before in the infinite sample case, we have 1{maxj∈{0,1} πj (xi) < q} = 0 for all i ∈ XA (equivalently, for all i ∉ XD).
|
|
|
By the Dvoretzky-Kiefer-Wolfowitz-Massart (DKWM) inequality, for any q > 0 we have with probability at least 1 − δ/8 |
|
|
|
$$\left|\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x}_{i})<q\right\}}-\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]\right|\leq\sqrt{\frac{\log(16/\delta)}{2|\mathcal{X}_{D}|}}.\tag{36}$$
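For reference (this restatement is ours and not part of the original derivation), the DKWM inequality in the form used here states that for the empirical distribution function F̂m of m i.i.d. samples with population distribution function F,

$$\mathrm{P}\left[\sup_{t}\left|\hat{F}_{m}(t)-F(t)\right|>\varepsilon\right]\leq 2e^{-2m\varepsilon^{2}},$$

so setting the right-hand side to δ/8 and m = |XD| yields the deviation term $\sqrt{\log(16/\delta)/(2|\mathcal{X}_{D}|)}$ appearing in (36), uniformly over the choice of q.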
|
Plugging equation (35) into (36), we have with probability at least 1 − δ/8 |
|
|
|
$$\left|\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]-\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\mathcal{E}_{mc}\right\}}\right|\leq\sqrt{\frac{\log(16/\delta)}{2|\mathcal{X}_{D}|}}.\tag{37}$$
|
|
|
We next bound the deviation of the second term in the LHS of equation (37) from its expectation. Using Hoeffding's inequality, we have with probability at least 1 − δ/8
|
|
|
|
$$\left|\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\{\mathcal{E}_{mc}\}}-\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\{\mathcal{E}_{mc}\}}\right]\right|\leq\sqrt{\frac{\log(16/\delta)}{2|\mathcal{X}_{D}|}}.\tag{38}$$
|
|
|
Combining equations (37) and (38) using the triangle inequality and union bound, we have with probability at least 1 − δ/4 |
|
|
|
$$\left|\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]-\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\{\mathcal{E}_{mc}\}}\right]\right|\leq\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}}.\tag{39}$$
|
|
|
Recall that the invariant feature xinv is uniformly distributed in the same interval conditioned on y regardless of the source or target distributions P and Q, and that $\mathrm{P}_{x_{\mathrm{inv}}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]=\mathrm{P}_{x_{\mathrm{inv}}|y,x_{\mathrm{sp}}=y}\left[\arg\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})\neq y\right]=0$ for the case of wsp > 0, as shown before. Both indicators in (39) are therefore zero whenever xsp = y, so the corresponding expectations over P equal the conditional expectations in (39) scaled by Px∼P|y [xsp ̸= y]. Therefore, dividing both sides of (39) by Px∼P|y [xsp ̸= y], we have with probability at least 1 − δ/4
|
|
|
$$\begin{aligned}\left|\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]-\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\{\mathcal{E}_{mc}\}}\right]\right|&\leq\frac{1}{\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}\neq y\right]}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}}\\&=\frac{1}{1-p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}}.\end{aligned}\tag{40}$$
|
|
|
For the case of wsp < 0, we can show an analogous result by noting that the above results can be shown on the set XA, where xsp = y. Specifically, noting that $\frac{1}{|\mathcal{X}_{A}|}\sum_{i\in\mathcal{X}_{A}}\mathbb{1}_{\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x}_{i})<q\}}=\frac{1}{|\mathcal{X}_{A}|}\sum_{i\in\mathcal{X}_{A}}\mathbb{1}_{\{\mathcal{E}_{mc}\}}$ if wsp < 0 and following exactly the same steps from equation (35) onward that lead to equation (40), we have with probability at least 1 − δ/4
|
|
|
$$\left|\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]-\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\{\mathcal{E}_{mc}\}}\right]\right|\leq\frac{1}{p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{A}|}}.\tag{41}$$
|
|
|
Using Hoeffding's inequality we can further bound the RHS of (40) and (41). For the set XD, we have with probability at least 1 − δ/2 |
|
|
|
$$\left|\left|\mathcal{X}_{D}\right|-n\cdot(1-p^{\mathcal{Q}})\right|\leq\sqrt{\frac{\log(4/\delta)}{2n}},\tag{42}$$
|
|
|
![18_image_0.png](18_image_0.png) |
|
|
|
|
|
|
Figure 6: Coverage obtained by RAPS on the target distribution Q for various settings of (1 − α), with and without recalibration using QTC.
|
and for the set XA, we have with probability at least 1 − δ/2 |
|
|
|
$$\left|\left|\mathcal{X}_{A}\right|-n\cdot p^{\mathcal{Q}}\right|\leq\sqrt{\frac{\log(4/\delta)}{2n}}.\tag{43}$$
|
|
|
We next bound the difference between the finite-sample QTC estimate on the source and its expectation. By the DKWM inequality, for any q > 0 we have with probability at least 1 − δ/4
|
|
|
$$\left|\frac{1}{|\mathcal{D}^{\mathcal{P}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}-\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]\right|\leq\sqrt{\frac{\log(8/\delta)}{2n}}.\tag{44}$$
|
We first show the result for the case wsp > 0. Combining equations (40) and (44) using the triangle inequality and union bound, we have with probability at least 1 − δ/2 |
|
|
|
$$\left|\frac{1}{|\mathcal{D}^{\mathcal{P}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}-\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\{\mathcal{E}_{mc}\}}\right]\right|\leq\frac{1}{1-p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}}.\tag{45}$$
|
|
|
Plugging in the definitions of βQTC in (21) and of β in (17), we equivalently get
|
|
|
$$|\beta_{\mathrm{QTC}}-\beta|\leq\frac{1}{1-p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}},\tag{46}$$
|
|
|
which holds with probability at least 1 − δ/2. Combining (46) with (42) proves equation (22) for wsp > 0, as desired. |
|
|
|
Similarly, for the case wsp < 0, following the same steps by first combining equation (41) with (44), we have with probability at least 1 − δ/2 |
|
|
|
$$|\beta_{\mathrm{QTC}}-\beta|\leq\frac{1}{p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{A}|}}.\tag{47}$$
|
|
|
Combining (47) with (43) yields equation (22), as desired, for the case wsp < 0, which concludes the proof. |
|
|
|
## A.1 Details On The Baseline Regression Methods |
|
|
|
![19_image_0.png](19_image_0.png)

Figure 7: Coverage obtained by RAPS on the target distribution Q (ImageNetV2) for kreg = 2 and various settings of λ when the threshold τ is replaced with the predicted threshold τˆ of the respective prediction method. For the regression methods, only the best-performing method, CHR-, is shown.

In this section, we provide details on the baseline regression-based methods. Recall that we consider several regression-based methods as baselines: we fit a regression function fθ on top of a feature extractor ϕπ by minimizing the mean squared error between the output and the calibrated threshold τ across the distributions,

$$\hat{\theta}=\arg\min_{\theta}\sum_{j}\left(f_{\theta}(\phi_{\pi}(\mathcal{D}_{j}))-\tau^{\mathcal{P}_{j}}\right)^{2}.$$
|
|
|
We consider the following choices for the feature extractor ϕπ: |
|
- *Average confidence regression (ACR)*: The one-dimensional (d = 1) average confidence of the classifier across the entire dataset, i.e., ϕπ(D) = $\frac{1}{|\mathcal{D}|}\sum_{\mathbf{x}\in\mathcal{D}}\max_{\ell}\pi_{\ell}(\mathbf{x})$.
|
|
|
- *Difference of confidence regression (DCR)* (Guillory et al., 2021): The one-dimensional (d = 1) average confidence of the classifier across the entire dataset, offset by the average confidence on the source dataset, i.e., ϕπ(D) = $\frac{1}{|\mathcal{D}|}\sum_{\mathbf{x}\in\mathcal{D}}\max_{\ell}\pi_{\ell}(\mathbf{x})-\frac{1}{|\mathcal{D}^{\mathcal{P}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}}\max_{\ell}\pi_{\ell}(\mathbf{x})$, where DP is the source dataset. The regression target is offset accordingly, i.e., the method predicts τ − τ P .
|
|
|
We consider DCR in addition to ACR because DCR performs better for predicting the classifier accuracy (Guillory et al., 2021). Since the threshold τ found by conformal calibration depends on the distribution of the confidences beyond their average, we propose the following techniques for extracting more detailed information from the dataset; a code sketch of all four feature extractors is given after this list.
|
|
|
- *Confidence histogram-density regression (CHR)*: Variable-dimensional (d = p) features extracted as ϕπ(D) = $\left\{\frac{1}{|\mathcal{D}|}\sum_{\mathbf{x}\in\mathcal{D}}\mathbb{1}_{\left\{\max_{\ell}\pi_{\ell}(\mathbf{x})\in\left[\frac{j-1}{p},\frac{j}{p}\right]\right\}}\right\}_{j=1,\ldots,p}$. This corresponds to the normalized histogram density of the classifier confidence across the dataset, where p is a hyperparameter that determines the number of histogram bins in the probability range [0, 1]. Neural networks tend to be overconfident in their predictions, which heavily skews the histogram densities toward the last bin. We therefore also consider a variant of CHR, dubbed CHR-, where j = {1, . . . , p − 1} and hence d = p − 1, equivalent to dropping the last bin of the histogram as a feature.
|
|
|
- *Predicted class-wise average confidence regression (PCR)*: Features with dimensionality equal to the number of classes (d = L), extracted as ϕπ(D) = $\left\{\frac{\sum_{\mathbf{x}\in\mathcal{D}}\pi_{j}(\mathbf{x})\cdot\mathbb{1}_{\{j=\arg\max_{\ell}\pi_{\ell}(\mathbf{x})\}}}{\sum_{\mathbf{x}\in\mathcal{D}}\mathbb{1}_{\{j=\arg\max_{\ell}\pi_{\ell}(\mathbf{x})\}}}\right\}_{j=1,\ldots,L}$. This corresponds to the average confidence of the classifier across the samples for each predicted class.
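The following minimal sketch (Python/NumPy, with hypothetical array names such as `probs` for the softmax outputs over a dataset) computes the four feature extractors described above; it is a sketch of the described quantities, not the authors' reference implementation, and the choice of returning 0 for classes that are never predicted in PCR is our own assumption.

```python
import numpy as np

def acr_features(probs):
    """ACR: average max-confidence over the dataset (d = 1). probs has shape (n, L)."""
    return np.array([probs.max(axis=1).mean()])

def dcr_features(probs, probs_source):
    """DCR: average max-confidence offset by the source-dataset average (d = 1)."""
    return acr_features(probs) - acr_features(probs_source)

def chr_features(probs, p=20, drop_last_bin=False):
    """CHR / CHR-: normalized histogram of max-confidences with p bins on [0, 1]."""
    hist, _ = np.histogram(probs.max(axis=1), bins=p, range=(0.0, 1.0))
    hist = hist / len(probs)
    return hist[:-1] if drop_last_bin else hist

def pcr_features(probs):
    """PCR: average confidence per predicted class (d = L)."""
    n, L = probs.shape
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    feats = np.zeros(L)
    for j in range(L):
        mask = preds == j
        # Average confidence over the samples predicted as class j (0 if the class is never predicted).
        feats[j] = conf[mask].mean() if mask.any() else 0.0
    return feats
```

These feature vectors would then be regressed onto the calibrated thresholds τ across the distributions Dj, e.g., with ridge regression or a small fully connected network, following the objective above.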
|
|
|
## B Raps Recalibration Experiments |
|
|
|
APS is a powerful yet simple conformal predictor. However, other conformal predictors (Sadinle et al., 2019; Angelopoulos et al., 2020) are more efficient (in that they have on average smaller confidence sets for a given desired coverage 1 − α). |
|
|
|
![20_image_0.png](20_image_0.png) |
|
|
|
Figure 8: Coverage obtained by RAPS on the target distribution Q for λ = 0.1 and various settings of (1 − α) when the threshold τ is replaced with the predicted threshold τˆ of the respective prediction method. For the regression methods, only the best-performing method, CHR-, is shown.
|
In this section, we focus on the conformal predictor proposed by Angelopoulos et al. (2020), dubbed Regularized Adaptive Prediction Sets (RAPS). RAPS is an extension of APS obtained by adding a regularization term to the classifier's probability estimates of the higher-order predictions (i.e., the predictions following the top-kreg predictions). RAPS is more efficient and tends to produce smaller confidence sets than APS when calibrated on the same calibration set, as it penalizes large sets. While TPS tends to achieve slightly better efficiency than RAPS, see (Angelopoulos et al., 2020, Table 9), the coverage of RAPS tends to be more uniform across instances (in terms of difficult vs. easy instances), and therefore RAPS remains of practical relevance.

Recall that while efficiency can be improved by constructing confidence sets more aggressively, efficient models tend to be less robust, meaning the coverage gap is greater under a distribution shift at test time. For example, when calibrated to yield 1 − α = 0.9 coverage on ImageNet-Val and tested on ImageNet-Sketch, the coverage of RAPS drops to 0.38, in contrast to that of APS, which only drops to 0.64 (see Section 3). It is therefore of great interest to understand how QTC performs for the recalibration of RAPS under distribution shift.

RAPS is calibrated using exactly the same conformal calibration process as APS and only differs from APS in terms of the prediction set function C(x, u, τ). The prediction set function for RAPS is defined as
|
|
|
$$\mathcal{C}^{\mathrm{RAPS}}(\mathbf{x},u,\tau)=\left\{\ell\in\{1,\ldots,L\}\colon\sum_{j=1}^{\ell-1}\Big[\pi_{(j)}(\mathbf{x})+\underbrace{\mathbb{1}_{\{j-k_{\mathrm{reg}}>0\}}\cdot\lambda}_{\text{regularization}}\Big]+u\cdot\pi_{(\ell)}(\mathbf{x})\leq\tau\right\},\tag{48}$$
|
|
|
where u ∼ U(0, 1) as for APS, and λ, kreg are the hyperparameters of RAPS corresponding to the regularization amount and the number of top non-penalized predictions, respectively. Note that the cutoff threshold τ P obtained by calibrating RAPS on some calibration set DP can be larger than one due to the added regularization. Therefore, in order to apply QTC-ST, we map τ P back to the [0, 1] range by dividing by the total score after the added regularization. QTC and QTC-SC do not require such an additional step, as the coverage level α ∈ [0, 1] by definition.

We show the performance of RAPS under distribution shift with and without recalibration by QTC in Figure 6. The results show that while QTC is not able to completely close the coverage gap, it significantly reduces it. Recall that RAPS utilizes a hyperparameter λ, the penalty added to the scores of the predictions following the top-kreg predictions, which can significantly change the cutoff threshold τ P obtained on the calibration set DP . The regularization amount λ also implicitly controls the change in the cutoff threshold, |τ Q − τ P |, when the conformal predictor is calibrated on different distributions P and Q. That is, the value of |τ Q − τ P | increases with increasing λ as long as the distributions P and Q are meaningfully different, as is the case for all the distribution shifts that we consider.
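To make the set construction in (48) concrete, here is a minimal sketch (Python/NumPy, hypothetical function and variable names) of the RAPS prediction set function for a fixed threshold; the calibration of τ itself follows the same conformal recipe as APS and is omitted here.

```python
import numpy as np

def raps_prediction_set(probs, tau, lam=0.1, k_reg=2, u=None, rng=None):
    """Return the RAPS prediction set for one example, following the form of (48).

    probs : (L,) softmax probabilities for a single input x.
    tau   : cutoff threshold obtained by conformal calibration.
    lam, k_reg : regularization amount and number of non-penalized top predictions.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform() if u is None else u         # randomization term, u ~ U(0, 1)
    order = np.argsort(-probs)                     # classes sorted by decreasing probability
    sorted_probs = probs[order]
    prediction_set = []
    cum_score = 0.0                                # sum over j = 1, ..., ell - 1 of regularized scores
    for ell, cls in enumerate(order, start=1):
        if cum_score + u * sorted_probs[ell - 1] <= tau:
            prediction_set.append(int(cls))
        # Add the ell-th regularized score before moving on to rank ell + 1.
        cum_score += sorted_probs[ell - 1] + (lam if ell - k_reg > 0 else 0.0)
    return prediction_set
```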
|
|
|
Therefore, a good recalibration method should be relatively immune to the choice of λ in order to successfully predict the threshold τ Q based only on unlabeled examples. In Figure 7, we show the performance of RAPS under the ImageNetV2 distribution shift for various values of λ. While QTC is able to reduce the coverage gap for various choices of λ, the best-performing regression-based baseline method does not generalize well to natural distribution shifts when λ is relatively large. In contrast, as demonstrated in Figure 8, when the regularization amount λ is relatively small, the best-performing regression-based method, CHR-, does very well in reducing the coverage gap of RAPS under various distribution shifts.