ON THE SPLITTING PRINCIPLE OF BENIAMINO SEGRE

Camilla Felisetti
Claudio Fontanari

arXiv:2105.00892v3 [math.AG] 18 Oct 2021
We state and prove in modern terms a Splitting Principle first claimed by Beniamino Segre in 1938, which should be regarded as a strong form of the classical Principle of Connectedness.
Introduction
The memoir [S38], which appeared in the Annali di Matematica Pura e Applicata in 1938, is the last paper published in Italy by Beniamino Segre before the end of the Second World War, as a consequence of the odious racial laws promulgated by the Italian Fascist Regime in that same year.
The title Un teorema fondamentale della geometria sulle superficie algebriche ed il principio di spezzamento alludes to a Splitting Principle stated there for the first time (albeit with a sloppy formulation and an incomplete proof). Francesco Severi, who praised that work as the best one by his former student (see [BC98], p. 233), extensively reconsidered it in his treatise [S59], published in 1959.
Such a Splitting Principle should be regarded as a strong form of the classical Principle of Connectedness, attributed by Segre to Enriques and by Severi to Noether, stating that if the general member of a flat family {X_t} of closed subschemes of P^k parameterized by an irreducible curve T of finite type is connected, then X_t is connected for all t ∈ T (see for instance [H77], III, Exercise 11.4 on p. 281, or [S09], Proposition 6.5).
In modern terms, we state it as follows (see [S38], p. 111, and [S59], pp. 81-82):
Theorem 1.1 (Splitting Principle). Let {E} be a flat family over a normal base of nodal curves on a smooth surface F of geometric genus p_g. Suppose that the general element E of {E} is irreducible and that a special element E_0 of {E} splits as E_0 = C + D with C, D irreducible. Let Γ := C ∩ D = Γ_1 ⊔ Γ_2, where Γ_1 is the set of points which are limits of nodes of the general curve in {E} and Γ_2 is its complement in Γ. Assume that |D_D| is nonempty and C is sufficiently general with respect to D, in particular that C(−Γ_1) has no base points on D. If c_i is the cardinality of Γ_i, then we have c_2 ≥ p_g + 1, unless the points in Γ_2 are linearly dependent with respect to K_F.
The assumptions that all curves in {E} are nodal and that C is general with respect to D are both missing from Segre's statement in [S38] and are added by Severi in [S59], p. 81. We point out that the splitting of Γ into the disjoint union of Γ_1 and Γ_2 is well-defined only if E_0 is assumed to have double points as singularities (hence it is implicit in Segre's argument, see in particular [S38], §10, p. 122: "The points of Γ can then be divided into two categories, according to whether or not they arise as limits of double points of E, that is, respectively, according to whether they are not, or are, linking points between C and D; we denote by Γ_1, Γ_2, in order, the groups consisting of the points of the first and of the second type, (...) so that Γ = Γ_1 + Γ_2"). Furthermore, Severi's statement in [S59], p. 81, assumes that the curve D is ordinaria, which in particular implies that the characteristic series of {E} on D is complete. Severi in [S59], p. 197, comments: "(...) in n. 23 we established the remarkable splitting principle of B. Segre contained in the Memoir in the Annali di Matematica, 1938, but under the hypothesis, added here, that one component of the limit of the curve which tends to split is an irreducible ordinary curve (that is, one belonging entirely to an irreducible system cutting the complete characteristic series on it). Since this principle brings into play only the geometric genus p_g of the surface F and not the irregularity, it is reasonable to suppose (n. 23) that its topological foundation is related only to the two-dimensional cycles of the Riemannian manifold of F; and therefore, since it is true, whatever p_g may be, on regular surfaces, and for p_g = 0 on every irregular surface, it appears natural to suppose that the principle itself is always true." Finally, in [S59], p. 81, the curve D is assumed to be nonsingular ("priva di nodi"), but our modern proof shows that this assumption is also unnecessary. Indeed, the main ingredients are the Riemann-Roch theorem, Serre duality, and the adjunction formula, which hold also in the singular case up to replacing the canonical bundle K_D with the dualizing sheaf ω_D (see for instance [BHPV04], p. 62). We are going to apply these formulas to restrictions to D of divisors on the smooth surface F, hence to Cartier divisors on the nodal curve D.

This research project was partially supported by GNSAGA of INdAM and by PRIN 2017 "Moduli Theory and Birational Classification". 2020 Mathematics Subject Classification: 14-03, 14D05, 14D06. Keywords and phrases: Splitting principle, Connectedness principle, Algebraic system, Flat family, Nodal curve.
The proofs
Our proof of Theorem 1.1 relies on a couple of crucial remarks.
Lemma 2.1. We have Γ_2 ≠ ∅.
Proof. If Γ_2 = ∅ then all nodes of E_0 = C + D belong to Γ_1, i.e. they are limits of nodes of E. Hence by [T76], Théorème 1 on p. 73, locally around E_0 we may simultaneously resolve all singularities of E. In this way we would obtain a family of irreducible curves degenerating to a disconnected one, contradicting the classical Principle of Connectedness (see for instance [H77], III, Exercise 11.4 on p. 281).
Lemma 2.2. There is at least one point P ∈ Γ_2 which is not a base point of the complete linear series |(E_0)_D − Γ_1| on D.
Proof. Assume by contradiction that

(1) h^0(D, (E_0)_D − Γ_1) = h^0(D, (E_0)_D − Γ_1 − Γ_2) = h^0(D, (E_0)_D − Γ).

On the other hand, since E_0 = C + D we have |(E_0)_D − Γ_1| ⊇ |C_D − Γ_1| + |D_D|. Moreover, since C_D − Γ_1 = Γ_2 ≠ ∅ by Lemma 2.1 and C(−Γ_1) has no base points on D by assumption, we have h^0(D, C_D − Γ_1) ≥ 2. Hence we deduce

h^0(D, (E_0)_D − Γ_1) ≥ h^0(D, C_D − Γ_1) + h^0(D, D_D) − 1 ≥ h^0(D, D_D) + 1 = h^0(D, (E_0)_D − C_D) + 1 = h^0(D, (E_0)_D − Γ) + 1,

contradicting (1), so the claim is established.
Proof of Theorem 1.1. We follow Segre's approach in [S38]. Let d := h^1(D, D_D). We have two possibilities: (i) c_2 ≥ d + 1 or (ii) c_2 ≤ d.

(i) Suppose c_2 ≥ d + 1. Let i := h^2(F, D). We first prove that

(2) d ≥ p_g − i.
Indeed, by adjunction (K_F)_D = ω_D − D_D and by Serre duality

d = h^1(D, D_D) = h^0(D, ω_D − D_D) = h^0(D, (K_F)_D).
The short exact sequence on F

0 → K_F(−D) → K_F → (K_F)_D → 0

yields a long exact sequence

0 → H^0(K_F(−D)) → H^0(K_F) → H^0((K_F)_D) → . . . ,

hence p_g ≤ i + d.
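Explicitly, exactness of the sequence above, together with Serre duality on F (which gives h^0(K_F(−D)) = h^2(F, D) = i), yields the dimension count:

```latex
p_g \;=\; h^0(K_F) \;\le\; h^0\big(K_F(-D)\big) + h^0\big((K_F)_D\big) \;=\; i + d .
```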
If i = 0, we immediately get c_2 ≥ d + 1 ≥ p_g + 1. If instead i > 0, then the points in Γ_2 are dependent with respect to K_F, i.e. h^0(F, K_F(−Γ_2)) > p_g − c_2. Indeed, on the one hand by (2) we have

(3) p_g − c_2 ≤ p_g − d − 1 ≤ p_g − p_g + i − 1 = i − 1.

On the other hand, since any global section of K_F which vanishes on D vanishes in particular on Γ_2, we have

h^0(F, K_F(−Γ_2)) ≥ h^0(F, K_F(−D)) = h^2(F, D) = i.

By (3) we conclude that h^0(F, K_F(−Γ_2)) ≥ i > p_g − c_2.

(ii) Suppose c_2 ≤ d.
Let P ∈ Γ_2 and set Γ*_2 := Γ_2 \ P.¹ Observe first that the linear series |D_D + Γ*_2| on D is special. In fact,

h^1(D, D_D + Γ*_2) = h^0(D, ω_D − D_D − Γ*_2) ≥ h^0(D, ω_D − D_D) − c_2 + 1 = h^1(D, D_D) − c_2 + 1 = d − c_2 + 1 ≥ 1.

In particular, by adjunction we have that H^0(D, ω_D − D_D − Γ*_2) ≅ H^0(D, (K_F)_D − Γ*_2)
is nonzero. We are going to prove that the natural inclusion H^0(F, K_F − Γ_2) ⊆ H^0(F, K_F − Γ*_2) is an isomorphism for some choice of P ∈ Γ_2, i.e. that the points in Γ_2 are dependent with respect to K_F. Indeed, by Lemma 2.2, there exists at least one P ∈ Γ_2 such that the complete linear series

|(E_0)_D − Γ_1| = |C_D + D_D − Γ_1| = |D_D + Γ − Γ_1| = |D_D + Γ_2|

on D does not admit P as a base point.
On the other hand, by the Riemann-Roch theorem

h^0(D_D + Γ_2) = h^1(D_D + Γ_2) + deg(D_D + Γ_2) + 1 − p_a(D),
h^0(D_D + Γ*_2) = h^1(D_D + Γ*_2) + deg(D_D + Γ_2) − 1 + 1 − p_a(D).

Since h^0(D_D + Γ*_2) = h^0(D_D + Γ_2) − 1, then h^1(D_D + Γ_2) = h^1(D_D + Γ*_2) and by Serre duality

(4) h^0(D, ω_D − D_D − Γ*_2) = h^0(D, ω_D − D_D − Γ_2).
Suppose now by contradiction that the inclusion H^0(F, K_F − Γ_2) ⊆ H^0(F, K_F − Γ*_2) is strict, i.e. that there exists an effective divisor A in PH^0(F, K_F − Γ*_2) not passing through P. Note that A ∩ D ≠ D, since P ∈ D \ A. Now, if Γ*_2 is not empty, then A ∩ D ≠ ∅, since ∅ ≠ Γ*_2 ⊂ A ∩ D, and A_D is a nontrivial effective divisor on D lying in PH^0(D, (K_F)_D − Γ*_2) \ PH^0(D, (K_F)_D − Γ_2), contradicting (4). The same conclusion holds if Γ*_2 is empty but A ∩ D ≠ ∅. On the other hand, if Γ*_2 is empty and A_D = 0, then (K_F)_D ≅ O_D and by adjunction we have ω_D = D_D. Hence (4) implies 1 = h^0(D, O_D) = h^0(D, O_D(−P)) = 0 and this contradiction ends the proof.

¹ Note that Γ_2 is nonempty by Lemma 2.1, but Γ*_2 might be.
References

[BHPV04] W. Barth, K. Hulek, C. Peters, A. Van de Ven, Compact complex surfaces, Springer-Verlag, Berlin Heidelberg (2004).
[BC98] A. Brigaglia, C. Ciliberto, Geometria algebrica, in: La matematica italiana dopo l'unità, Marcos Y Marcos, Milano (1998), 185-320.
[H77] R. Hartshorne, Algebraic Geometry, Springer-Verlag, New York (1977).
[S38] B. Segre, Un teorema fondamentale della geometria sulle superficie algebriche ed il principio di spezzamento, Ann. Mat. Pura Appl. 17 (1938), no. 1, 107-126.
[S09] E. Sernesi, A smoothing criterion for families of curves, preprint, February 2009, available online at http://www.mat.uniroma3.it/users/sernesi/smoothcrit.pdf.
[S59] F. Severi, Geometria dei sistemi algebrici sopra una superficie e sopra una varietà algebrica, Edizioni Cremonese, Roma (1959).
[T76] B. Teissier, Résolution simultanée. I, II, in: Séminaire sur les singularités des surfaces (Cent. Math. Éc. Polytech., Palaiseau 1976-77), Lecture Notes in Mathematics 777, Springer-Verlag, Berlin (1980), 71-146.
Camilla Felisetti, Trento, Italy. Email: camilla.felisetti@unitn.it
Claudio Fontanari, Dipartimento di Matematica, Università di Trento, Via Sommarive 14, Trento, Italy. Email: claudio.fontanari@unitn.it
Calibrating AI Models for Wireless Communications via Conformal Prediction

15 Dec 2022

Kfir M. Cohen, Student Member, IEEE (kfir.cohen@kcl.ac.uk), Sangwoo Park, Member, IEEE (sangwoo.park@kcl.ac.uk), Osvaldo Simeone, Fellow, IEEE (osvaldo.simeone@kcl.ac.uk), and Shlomo Shamai (Shitz), Life Fellow, IEEE

Kfir M. Cohen, Sangwoo Park, and Osvaldo Simeone are with King's Communication, Learning, & Information Processing (KCLIP) lab, Department of Engineering, King's College London, London WC2R 2LS, U.K. Shlomo Shamai (Shitz) is with the Viterbi Faculty of Electrical and Computing Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel.

The authors acknowledge the use of King's Computational Research, Engineering and Technology Environment (CREATE). Retrieved November 21, 2022, from https://doi.org/10.18742/rnvfm076.

Index Terms: Calibration, set prediction, reliability, conformal prediction, cross-validation, Bayesian learning, wireless communications.
When used in complex engineered systems, such as communication networks, artificial intelligence (AI) models should be not only as accurate as possible, but also well calibrated. A well-calibrated AI model is one that can reliably quantify the uncertainty of its decisions, assigning high confidence levels to decisions that are likely to be correct and low confidence levels to decisions that are likely to be erroneous. This paper investigates the application of conformal prediction as a general framework to obtain AI models that produce decisions with formal calibration guarantees. Conformal prediction transforms probabilistic predictors into set predictors that are guaranteed to contain the correct answer with a probability chosen by the designer. Such formal calibration guarantees hold irrespective of the true, unknown, distribution underlying the generation of the variables of interest, and can be defined in terms of ensemble or time-averaged probabilities. In this paper, conformal prediction is applied for the first time to the design of AI for communication systems in conjunction with both frequentist and Bayesian learning, focusing on demodulation, modulation classification, and channel prediction.

I. INTRODUCTION

A. Motivation

How reliable is your artificial intelligence (AI)-based model? The most common metric used to design an AI model and to gauge its performance is the average accuracy. However, in applications in which AI decisions are used within a larger system, AI models should not only be as accurate as possible, but they should also be able to reliably quantify the uncertainty of their decisions. As an example, consider an unlicensed link that uses AI tools to predict the best channel to access out of four possible channels. A predictor that assigns the probability vector [90%, 2%, 5%, 3%] to the possible channels predicts the same best channel as a predictor that outputs the probability vector [30%, 20%, 25%, 25%]. However, the latter predictor is less certain of its decision, and it may be preferable for the unlicensed link to refrain from accessing the channel when acting on less confident predictions, e.g., to avoid excessive interference to licensed links [1], [2].

As in the example above, AI models typically report a confidence measure associated with each prediction, which reflects the model's self-evaluation of the accuracy of a decision. Notably, neural network models implement probabilistic predictors that produce a probability distribution across all possible values of the output variable. The self-reported model confidence, however, may not be a reliable measure of the true, unknown, accuracy of a prediction. In such situations, the AI model is said to be poorly calibrated. As illustrated in the example in Fig. 1, accuracy and calibration are distinct criteria, with neither criterion implying the other. It is, for instance, possible to have an accurate predictor that consistently underestimates the accuracy of its decisions, and/or that is overconfident when making incorrect decisions (see the fourth column in Fig. 1). Conversely, one can have inaccurate predictions that correctly estimate their uncertainty (see the fifth column in Fig. 1). Deep learning models tend to produce either overconfident decisions [3], or calibration levels that rely on strong assumptions about the ground-truth, unknown, data generation mechanism [4]-[9]. This paper investigates the use of conformal prediction (CP) [10]-[12] as a framework to design provably well-calibrated AI predictors, with distribution-free calibration guarantees that do not require making any assumption about the ground-truth data generation mechanism.

B. Conformal Prediction for AI-Based Wireless Systems

CP leverages probabilistic predictors to construct well-calibrated set predictors. Instead of producing a probability vector, as in the examples in Fig. 1, a set predictor outputs a subset of the output space, as exemplified in Fig. 2. A set predictor is well calibrated if it contains the correct output with a predefined coverage probability selected by the system designer. For a well-calibrated set predictor, the size of the prediction set for a given input provides a measure of the uncertainty of the decision. Set predictors with a smaller average prediction size are said to be more efficient [10]. This paper investigates CP as a general mechanism to obtain AI models with formal calibration guarantees for communication systems. The calibration guarantees of CP hold irrespective of the true, unknown, distribution underlying the generation of the variables of interest, and are defined either in terms of ensemble averages [10] or in terms of long-term averages [14]. CP is applied in conjunction with both frequentist and Bayesian learning, and specific applications are discussed for demodulation, modulation classification, and channel prediction.

Fig. 1. (a) Examples of probabilistic predictors for two inputs x_1 and x_2: As compared to the ground-truth distribution in the second column, the first predictor (third column) is accurate, assigning the largest probability to the optimal decision (indicated as "opt" in the second column), and also well calibrated, reproducing the true accuracy of the decision; the second predictor (fourth column) is still accurate, but it is underconfident on the correct decision (for input x_1) and overconfident on the correct decision (for input x_2); the third predictor (fifth column) is not accurate, producing a uniform distribution across all output values, but is well calibrated if the data set is balanced [13]; and the last predictor (sixth column) is both inaccurate and poorly calibrated, providing overconfident decisions. (b) Confidence versus accuracy for the decisions made by the corresponding predictors.

Fig. 2 (partial caption). A well-calibrated set predictor can be inefficient if it returns excessively large set predictions (fourth column). In contrast, a poorly calibrated set predictor (fifth column) returns set predictions that include the true value of the label with a probability smaller than 1 − α.
C. Related Work
Most work on AI for communications relies on conventional frequentist learning tools (see, e.g., the review papers [15]-[18]). Frequentist learning is based on the minimization of the (regularized) training loss, which is interpreted as an estimate of the ground-truth population loss. When data is scarce, this estimate is unreliable, and hence the focus on a single, optimized, model parameter vector often yields probabilistic predictors that are poorly calibrated, producing overconfident decisions [3], [19]-[21].
Bayesian learning offers a principled way to address this problem [22], [23]. This is done by producing as the output of the learning process not a single model parameter vector, but rather a distribution in the model parameter space, which quantifies the model's epistemic uncertainty caused by limited access to data. A model trained via Bayesian learning produces probabilistic predictions that are averaged over the trained model parameter distribution. This ensembling approach to prediction ensures that disagreements among models that fit the training data (almost) equally well are accounted for, substantially improving model calibration [24], [25].

In practice, Bayesian learning is implemented via approximations such as variational inference (VI) or Monte Carlo (MC) sampling, yielding scalable learning solutions [23]. VI methods approximate the exact Bayesian posterior distribution with a tractable variational density [26]-[29], while MC techniques obtain approximate samples from the Bayesian posterior distribution [30]-[32]. Among other applications to communication systems, Bayesian learning was studied for network allocation in [33]-[35], for massive MIMO detection in [36]-[38], for channel estimation in [39]-[41], for user identification in [42], and for multiuser detection in [43], [44]. Extensions to Bayesian meta-learning have been investigated in [20].
Exact Bayesian learning offers formal guarantees of calibration only under the assumption that the assumed model is well specified [4], [5]. In practice, this means that the assumed neural network models should have sufficient capacity to represent the ground-truth data generation mechanism, and that the predictive uncertainty should be unimodal for continuous outputs (since conventional likelihoods are unimodal, e.g., Gaussian) [5], [23], [24]. These assumptions are easily violated in practice, especially in communication systems in which lower-complexity models must be implemented on edge devices, and access to data for specific network configurations is limited. Specific examples are provided in [21] for applications including modulation classification [45], [46] and localization [47], [48].

Robustified versions of Bayesian learning that are based on the optimization of a modified free energy criterion were shown empirically to partly address the problem of model misspecification [4], [5], with implications for communication systems presented in [21]. However, robust Bayesian learning solutions do not have formal guarantees of calibration in the presence of misspecified models.

Another family of methods that aim at enhancing the calibration of probabilistic models implements a validation-based post-processing phase. Platt scaling [49] and temperature scaling [3] find a fixed parametric mapping of the trained model output that minimizes the validation loss, while isotonic regression [50] applies a nonparametric binning approach. These recalibration-based approaches cannot guarantee calibration, as they may overfit the validation data set [51], and they are sensitive to the inaccuracy of the starting model [52].
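As an illustration of the recalibration approaches above, temperature scaling fits a single scalar T that softens (or sharpens) the softmax probabilities so as to minimize the validation negative log-likelihood. The grid search and function name below are our own illustrative choices, a minimal sketch rather than the procedure of [3]:

```python
import numpy as np

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature T minimizing the validation NLL of
    softmax(logits / T); T > 1 softens overconfident predictions."""
    labels = np.asarray(labels)

    def nll(T):
        z = logits / T
        # Numerically stable log-softmax.
        z = z - z.max(axis=1, keepdims=True)
        logp = z - np.log(np.sum(np.exp(z), axis=1, keepdims=True))
        return -np.mean(logp[np.arange(len(labels)), labels])

    return float(min(grid, key=nll))
```

For a predictor that is systematically overconfident on validation data, the fitted T exceeds 1, pulling the reported confidence toward the observed accuracy.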
Conformal prediction is a general framework for the design of set predictors that satisfy formal, distribution-free, guarantees of calibration [10], [11]. Given a desired miscoverage probability α, CP returns set predictions that include the correct output value with probability at least 1 − α under the only assumption that the data distribution is exchangeable. This condition is weaker than the standard assumption of "i.i.d." data made in the design of most machine learning systems.

The original work on CP, [10], introduced validation-based CP and full CP. Since then, progress has been made on reducing computational complexity, minimizing the size of the prediction sets, and further alleviating the assumption of exchangeability. Cross-validation-based CP was proposed in [53] to reduce the computational complexity as compared to full CP, while improving the efficiency of validation-based CP. The authors of [54], [55] proposed the optimization of a CP-aware loss to improve the efficiency of validation-based CP, while avoiding the larger computational cost of cross-validation. The work [56] proposed reweighting as a means to handle distribution shifts between the examples in the data set and the test point. Other research directions include improvements in the training algorithms [57], [58], and the introduction of novel calibration metrics [59], [60]. Finally, online CP, presented in [14], [61], was shown to achieve long-term calibration over time without requiring statistical assumptions on the data generation.
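As a self-contained illustration of validation-based CP for classification, the sketch below computes the finite-sample-corrected quantile of nonconformity scores on a held-out calibration set and forms prediction sets for test inputs. The score s(x, y) = 1 − p(y | x) and the function name are our own illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def split_conformal_sets(probs_cal, y_cal, probs_test, alpha):
    """Validation-based (split) conformal prediction for classification.

    probs_cal:  (n, K) predictive probabilities on the calibration set
    y_cal:      (n,)   true calibration labels
    probs_test: (m, K) predictive probabilities on test inputs
    alpha:      target miscoverage level
    Returns one prediction set (array of label indices) per test input.
    """
    probs_cal = np.asarray(probs_cal)
    y_cal = np.asarray(y_cal)
    n = len(y_cal)
    # Nonconformity score: one minus the probability of the true label.
    scores = 1.0 - probs_cal[np.arange(n), y_cal]
    # k-th smallest score with k = ceil((n + 1)(1 - alpha)):
    # the finite-sample-corrected conformal quantile.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    q_hat = np.sort(scores)[k - 1]
    # Keep every candidate label whose score does not exceed the quantile.
    return [np.flatnonzero(1.0 - np.asarray(p) <= q_hat) for p in probs_test]
```

Under exchangeability, the returned sets contain the true label with probability at least 1 − α, regardless of how poorly calibrated the underlying probabilities are.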
D. Main Contributions
To the best of our knowledge, with the exception of the conference version [62] of this paper, this is the first work to investigate the application of CP to the design of AI models for communication systems. The main contributions of this paper are as follows.
• We provide a self-contained introduction to CP by focusing on validation-based CP [10], cross-validation-based CP [53], and online conformal prediction [61]. The presentation details connections to conventional probabilistic predictors, as well as the performance metrics used to assess calibration and efficiency.
• We propose the application of offline CP to the problems of symbol demodulation and modulation classification. The experimental results validate the theoretical property of CP methods of providing well-calibrated decisions. Furthermore, they demonstrate that naïve predictors that only rely on the output of either frequentist or Bayesian learning tools often result in poor calibration.
• Finally, we study the application of online CP to the problem of predicting received signal strength for over-the-air measured signals [63]. We demonstrate that online CP can obtain the predefined target long-term coverage rate at the cost of a negligible increase in the prediction interval as compared to naïve predictors.

The conference version [62] of this work presented results only for symbol demodulation, while not providing background material on CP and not considering online CP. In contrast, this work is self-contained, presenting CP from first principles and also including online CP. Furthermore, this work investigates applications of CP to modulation classification and to channel prediction by leveraging real-world data sets [63], [64]. For reproducibility purposes, we have made our code publicly available¹.
The rest of this paper is organized as follows. In Sec. II, we define set predictors and introduce the relevant performance metrics. Then, in Sec. III, naïve set predictors are introduced that do not provide guarantees in terms of calibration. Sec. IV describes conformal prediction, a general methodology to obtain well-calibrated set predictors. Sec. V details online conformal prediction, which is well suited for time-varying data. Applications to wireless communications are investigated in the following sections: symbol demodulation is studied in Sec. VI; modulation classification in Sec. VII; and channel prediction in Sec. VIII. Sec. IX concludes the paper.
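The online CP of Sec. V can be previewed with the basic update rule used in online/adaptive conformal methods such as [14], [61]: after each round, the working miscoverage level is nudged according to whether the last prediction set covered the true label. The sketch below shows only this update; the construction of the set itself (e.g., from a quantile of past scores) is omitted:

```python
def online_alpha_update(alpha_t, err_t, alpha, gamma):
    """One step of the online miscoverage-level update.

    alpha_t: current working miscoverage level
    err_t:   1 if the latest prediction set missed the true label, else 0
    alpha:   long-term target miscoverage level
    gamma:   step size
    """
    # A miss (err_t = 1) lowers alpha_t, enlarging future sets;
    # a hit lets alpha_t drift back up, shrinking them.
    return alpha_t + gamma * (alpha - err_t)
```

Averaged over time, the recursion drives the empirical error rate toward the target α without any distributional assumptions on the data stream.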
II. PROBLEM DEFINITION
This section introduces set predictors, along with the key performance metrics of coverage and inefficiency. To this end, we start by describing the data-generation model and reviewing probabilistic predictors.
A. Data-Generation Model
We consider the standard supervised learning setting in which the learner is given a data set D = {z[i]}_{i=1}^N of N examples of input-output pairs z[i] = (x[i], y[i]) for i = 1, . . . , N, and is tasked with producing a prediction on a test input x with unknown output y. Writing z = (x, y) for the test pair, the data set D and the test point z follow the unknown ground-truth, or population, distribution p_0(D, z). Apart from Sec. V, we further assume throughout that the population distribution p_0(D, z) is exchangeable, a condition that includes as a special case the traditional independent and identically distributed (i.i.d.) data-generation setting. Note that we will not make explicit the distinction between random variables and their realizations, which will be clear from the context.
Writing

p_0(D, z | c) = p_0(z | c) ∏_{i=1}^N p_0(z[i] | c)   (1)

for some ground-truth sampling distribution p_0(z | c) given the variable c, under the exchangeability assumption the joint distribution can be expressed as

p_0(D, z) = E_{p_0(c)}[ p_0(D, z | c) ],   (2)

where E_{p(x)}[·] denotes the expectation with respect to distribution p(x).
The vector c in (2) can be interpreted as including context variables that determine the specific learning task. For instance, in a wireless communication setting, the vector c may encode information about channel conditions. In Sec. V, we will consider a more general setting in which no assumptions are made on the distribution of the data.
B. Probabilistic Predictors
Before introducing set predictors, we briefly review conventional probabilistic predictors. Probabilistic predictors implement a parametric conditional distribution model p(y | x, φ) on the output y ∈ Y given the input x ∈ X, where φ ∈ Φ is a vector of model parameters. Given the training data set D, frequentist learning produces an optimized single vector φ*_D, while Bayesian learning returns a distribution q*(φ | D) on the model parameter space Φ [23], [24]. In either case, we will denote as p(y | x, D) the resulting optimized predictive distribution

p(y | x, D) = p(y | x, φ*_D) for frequentist learning, and p(y | x, D) = E_{q*(φ|D)}[ p(y | x, φ) ] for Bayesian learning.   (3)

Note that the predictive distribution for Bayesian learning is obtained by averaging, or ensembling, over the optimized distribution q*(φ | D). We refer to Appendix A for basic background on frequentist and Bayesian learning.
From (3), one can obtain a point prediction ŷ for output y given input x as the probability-maximizing output

ŷ(x | D) = argmax_{y′ ∈ Y} p(y′ | x, D).   (4)

In the case of a discrete set Y, the hard predictor (4) minimizes the probability of detection error under the model p(y | x, D). The probabilistic prediction p(y | x, D) also provides a measure of predictive uncertainty for all possible outputs y ∈ Y. In particular, for the point prediction ŷ(x | D) in (4), we have the predictive, self-reported, confidence level

conf(x | D) = max_{y′ ∈ Y} p(y′ | x, D) = p(ŷ(x | D) | x, D).   (5)
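In code, the hard prediction (4) and the self-reported confidence (5) are simply an argmax and a max over the predictive distribution; a minimal numpy sketch:

```python
import numpy as np

def hard_prediction_and_confidence(probs):
    """Return the probability-maximizing label, as in (4),
    and its self-reported confidence, as in (5)."""
    y_hat = int(np.argmax(probs))
    return y_hat, float(probs[y_hat])
```

Applied to the two channel predictors from Sec. I-A, both return label 0, but with confidences 0.9 and 0.3 respectively: same hard decision, very different uncertainty quantification.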
As illustrated in Fig. 1, the performance of a probabilistic predictor can be evaluated in terms of both accuracy and calibration, with the latter quantifying the quality of uncertainty quantification via the confidence level (5) [3]. Specifically, a probabilistic predictor p(y | x, D) is said to be well calibrated [3] if the probability that the hard prediction ŷ = ŷ(x | D) equals the true label matches its confidence level π for all possible values of the probability π ∈ [0, 1]. Mathematically, calibration is defined by the condition

P[ y = ŷ | p(ŷ | x, D) = π ] = π, for all π ∈ [0, 1],   (6)

where the probability P(·) follows the ground-truth distribution p_0(x, y). Stronger definitions, like that introduced in [66], require the predictive distribution to match the ground-truth distribution also for values of y that are distinct from (4).
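Condition (6) can be checked empirically by binning test decisions by reported confidence and comparing each bin's accuracy with its average confidence, which is the computation behind reliability diagrams such as Fig. 1(b). A sketch, where the bin count and function name are our own choices:

```python
import numpy as np

def reliability_bins(confidences, correct, num_bins=10):
    """Per-confidence-bin (average confidence, accuracy, count) triples,
    used to assess the calibration condition (6) on test data."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            rows.append((float(confidences[mask].mean()),
                         float(correct[mask].mean()),
                         int(mask.sum())))
    return rows
```

A well-calibrated predictor shows average confidence approximately equal to accuracy in every bin; the count-weighted average of the per-bin gaps is the familiar expected calibration error.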
C. Set Predictors
A set predictor is defined as a set-valued function Γ(· | D) : X → 2^Y that maps an input x to a subset of the output domain Y based on the data set D. We denote the size of the prediction set for input x as |Γ(x | D)|. As illustrated in the example of Fig. 2, the set size |Γ(x | D)| generally depends on the input x, and it can be taken as a measure of the uncertainty of the set predictor.
The performance of a set predictor is evaluated in terms of calibration, or coverage, as well as of inefficiency.
Coverage refers to the probability that the true label is included in the predicted set, while inefficiency refers to the average size |Γ(x|D)| of the predicted set. There is clearly a trade-off between the two metrics. A conservative set predictor that always produces the entire output space, i.e., Γ(x|D) = Y, would trivially yield a coverage probability equal to 1, but at the cost of exhibiting the worst possible inefficiency of |Y|. Conversely, a set predictor that always produces an empty set, i.e., Γ(x|D) = ∅, would achieve the best possible inefficiency, equal to zero, while also presenting the worst possible coverage probability equal to zero.
Let us denote a set predictor Γ(·|·) for short as Γ. Formally, the coverage level of set predictor Γ is the probability that the true output y is included in the prediction set Γ(x|D) for a test pair z = (x, y). This can be expressed as coverage(Γ) = P(y ∈ Γ(x|D)), where the probability P(·) is taken over the ground-truth joint distribution p_0(D, (x, y)) in (2). The set predictor Γ is said to be (1 − α)-valid if it satisfies the inequality

coverage(Γ) = P( y ∈ Γ(x|D) ) ≥ 1 − α.  (7)
When the desired coverage level 1 − α is fixed by the predetermined target miscoverage level α ∈ [0, 1], we will also refer to set predictors satisfying (7) as being well calibrated.
Following the discussion in the previous paragraph, it is straightforward to design a valid, or well-calibrated, set predictor, even for the restrictive case of miscoverage level α = 0. This can, in fact, be achieved by producing the full set Γ(x|D) = Y for all inputs x. One should, therefore, also consider the inefficiency of predictor Γ. The inefficiency of set predictor Γ is defined as the average prediction set size
inefficiency(Γ) = E[ |Γ(x|D)| ],  (8)
where the average is taken over the data set D and the test pair (x, y) following their exchangeable joint distribution p 0 (D, (x, y)).
In practice, the coverage condition (7) is relevant if the learner produces multiple predictions using independent data sets D, and is tested on multiple pairs (x, y). In fact, in this case, the probability in (7) can be interpreted as the fraction of predictions for which the set predictor Γ(x|D) includes the correct output. This situation, illustrated in Fig. 3(a), is quite common in communication systems, particularly at the lower layers of the protocol stack. For instance, the data D may correspond to pilots received in a frame, and the test point z to a symbol within the payload part of the frame (see Sec. VI). While the coverage condition (7) is defined under the assumption of a fixed ground-truth distribution p_0(D, z), in Sec. V we will allow for temporal distributional shifts and we will focus on validity metrics defined as long-term time averages (see Fig. 3(b)).

Fig. 4. A naïve probabilistic-based (NPB) set predictor uses a pre-trained probabilistic predictor to include all output values to which the probabilistic predictor assigns the largest probabilities, until the coverage target 1 − α is reached. This naïve scheme has no formal guarantee of calibration, i.e., it does not guarantee the coverage condition (7), unless the original probabilistic predictor is well calibrated.
III. NAÏVE SET PREDICTORS
Before describing CP in the next section, in this section we review two naïve, but natural and commonly used, approaches for producing set predictors, both of which fail to satisfy the coverage condition (7).
A. Naïve Set Predictors from Probabilistic Predictors
Given a probabilistic predictor p(y|x, D) as in (3), one could construct a set predictor by relying on the confidence levels reported by the model. Specifically, aiming at satisfying the coverage condition (7), given an input x, one could construct the smallest subset of the output domain Y that covers a fraction 1 − α of the probability assigned by the model p(y|x, D). Mathematically, the resulting naïve probabilistic-based (NPB) set predictor is defined as
Γ_NPB(x|D) = argmin_{Γ∈2^Y} |Γ|  (9)
s.t. Σ_{y′∈Γ} p(y′|x, D) ≥ 1 − α
for the case of a discrete set, and an analogous definition applies in the case of a continuous domain Y. Fig. 4 illustrates the NPB for a prediction problem with output domain size |Y| = 4. Given that, as mentioned in Sec. I, probabilistic predictors are typically poorly calibrated, the naïve set predictor (9) does not satisfy condition (7) for the given desired miscoverage level α, and hence it is not well calibrated. For example, in the typical case in which the probabilistic predictor is overconfident [3], the predicted sets (9) tend to be too small to satisfy the coverage condition (7).
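The construction in (9) amounts to sorting the model probabilities in decreasing order and keeping the smallest prefix whose total mass reaches 1 − α. A minimal sketch for a discrete output domain (the function name is illustrative, not from the original text):

```python
import numpy as np

def npb_set(probs, alpha):
    """NPB predictor (9): smallest label set with model probability >= 1 - alpha.

    probs: array of model probabilities p(y|x, D), one entry per label index
    Returns the set of label indices included in the prediction.
    """
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(-probs)          # labels sorted by decreasing probability
    cum = np.cumsum(probs[order])
    # smallest prefix length whose cumulative mass reaches the target 1 - alpha
    k = int(np.searchsorted(cum, 1.0 - alpha)) + 1
    return set(order[:k].tolist())
```

For instance, with probabilities (0.5, 0.3, 0.15, 0.05) and α = 0.1, the set {0, 1, 2} is returned, since 0.5 + 0.3 = 0.8 < 0.9 but 0.95 ≥ 0.9.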
B. Naïve Set Predictors from Quantile Predictors
While the naïve probabilistic-based set predictor (9) applies to both discrete and continuous target variables, we now focus on the important special case in which the target y is a real number, i.e., Y = R. This corresponds to scalar regression problems, such as channel prediction (see Sec. VIII). Under this assumption, one can construct a naïve set predictor based on estimates of the α/2- and (1 − α/2)-quantiles y_{α/2}(x) and y_{1−α/2}(x) of the ground-truth distribution p_0(y|x) (obtained from the joint distribution p_0(D, z)). In fact, writing as

y_q(x) = inf{ y ∈ R : ∫_{−∞}^{y} p_0(y′|x) dy′ ≥ q }  (10)

the q-quantile, with q ∈ [0, 1], of the ground-truth distribution p_0(y|x), the interval [y_{α/2}(x), y_{1−α/2}(x)] contains the true value y with probability 1 − α.
Defining the pinball loss as [67]

ℓ_q(y, ŷ) = max{ −(1 − q)(y − ŷ), q(y − ŷ) }  (11)

for q ∈ [0, 1], the quantile y_q(x) in (10) can be obtained as [68]

y_q(x) = argmin_{ŷ∈R} E_{p_0(y|x)}[ ℓ_q(y, ŷ) ].  (12)

Therefore, given a parametrized predictive model ŷ(x|φ), the quantile y_q(x) can be estimated as ŷ(x|φ_{D,q}) with optimized parameter vector

φ_{D,q} = argmin_φ (1/N) Σ_{(x,y)∈D} ℓ_q(y, ŷ(x|φ)).  (13)

With the estimate ŷ(x|φ_{D,α/2}) of quantile y_{α/2}(x) and the estimate ŷ(x|φ_{D,1−α/2}) of quantile y_{1−α/2}(x), we finally obtain the naïve quantile-based (NQB) predictor

Γ_NQB(x|D) = [ ŷ(x|φ_{D,α/2}), ŷ(x|φ_{D,1−α/2}) ].  (14)
The naïve set prediction (14) fails to satisfy condition (7), since the estimated quantiles ŷ(x|φ_{D,q}) generally differ from the ground-truth quantiles y_q(x).
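As an illustration of (11)-(13), the following sketch minimizes the empirical pinball loss by searching over candidate predictions; for a finite sample, a minimizer coincides with an empirical quantile. The helper names are ours, and the parametric model of (13) is replaced here by a single scalar for simplicity:

```python
import numpy as np

def pinball(y, y_hat, q):
    """Pinball loss (11): max{-(1 - q)(y - y_hat), q(y - y_hat)}."""
    d = np.asarray(y, dtype=float) - y_hat
    return np.maximum(-(1.0 - q) * d, q * d)

def fit_quantile(samples, q):
    """Estimate the q-quantile by minimizing the average pinball loss, cf. (12)-(13).

    A minimizer always exists among the sample points themselves, so a search
    over the samples suffices for this illustration.
    """
    samples = np.asarray(samples, dtype=float)
    losses = [pinball(samples, c, q).mean() for c in samples]
    return float(samples[int(np.argmin(losses))])
```

Running `fit_quantile` on the values 1, . . . , 100 with q = 0.9 returns a value near the 90th percentile, illustrating the equivalence between pinball-loss minimization and quantile estimation.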
IV. CONFORMAL PREDICTION
In this section, we review CP-based set predictors, which have the key property of guaranteeing the (1 − α)-validity condition (7) for any predetermined miscoverage level α, irrespective of the ground-truth distribution p_0(D, z) of the data. We specifically focus on validation-based CP [10] and cross-validation-based CP [53], which are more practical variants of full CP [10], [69]. In Sec. V, we cover online CP [14], [61].
A. ValidationBased CP (VBCP)
In this subsection, we describe validation-based CP (VBCP), which partitions the available data set D = D^tr ∪ D^val into a training set D^tr with N^tr samples and a validation set D^val with N^val = N − N^tr samples (Fig. 5(a)). This class of methods is also known as inductive CP [10] or split CP [53].

VBCP operates on any pre-trained probabilistic model p(y|x, D^tr) obtained using the training set D^tr as per (3).

At test time, given an input x, VBCP relies on the validation set to determine which labels y′ ∈ Y should be included in the predicted set. Specifically, for any given test input x, a label y′ ∈ Y is included in the set Γ_VB(x|D) depending on the extent to which the candidate pair (x, y′) "conforms" with the examples in the validation set.

This "conformity" test for a candidate pair is based on a nonconformity (NC) score. An NC score for VBCP can be obtained as the log-loss

NC(z = (x, y)|D^tr) = − log p(y|x, D^tr)  (15)
or as any other score function that measures the loss of the probabilistic predictor p(y|x, D^tr) on example (x, y). It is also possible to define NC scores for quantile-based predictors as in (14), and we refer to [61] for details. VBCP consists of a training phase (Fig. 5(a)-(d)) and of a test phase (Fig. 5(e)). During training, the data set D^tr is used to obtain a probabilistic predictor p(y|x, D^tr) as in (3) (Fig. 5(b)). Then, NC scores NC(z^val[i]|D^tr), as in (15), are evaluated on all points z^val[i], i = 1, . . . , N^val, in the validation set D^val (Fig. 5(c)). Finally, the real line of NC scores is partitioned into a "keep" region and a "discard" region (Fig. 5(d)), choosing as the threshold the (1 − α)-empirical quantile of the N^val NC scores {NC(z^val[i]|D^tr)}_{i=1}^{N^val}. Accordingly, we "keep" the labels y′ with NC scores that are smaller than the (1 − α)-empirical quantile of the validation NC scores, and "discard" the labels with larger NC scores.
During testing (Fig. 5(e)), given a test input x, |Y| NC scores are evaluated, one for each of the candidate labels y′ ∈ Y, using the same trained model p(y|x, D^tr). All candidate labels y′ for which the NC score NC((x, y′)|D^tr) falls within the "keep" region are included in the predicted set of VBCP.
Mathematically, the VBCP set predictor is obtained as
Γ_VB(x|D) = { y′ ∈ Y : NC((x, y′)|D^tr) ≤ Q_α({NC(z^val[i]|D^tr)}_{i=1}^{N^val}) },  (16)

where the (1 − α)-empirical quantile Q_α(·) of a set of N real values {r[i]}_{i=1}^{N} is defined as

Q_α({r[i]}_{i=1}^{N}) = ⌈(1 − α)(N + 1)⌉-th smallest value of the set {r[i]}_{i=1}^{N} ∪ {+∞}.  (17)
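The threshold rule (16)-(17) can be sketched as follows for precomputed NC scores; the function names are illustrative, and any NC score as in (15) may be plugged in:

```python
import numpy as np

def vb_quantile(scores, alpha):
    """Q_alpha in (17): the ceil((1-alpha)(N+1))-th smallest of scores plus {+inf}."""
    s = np.sort(np.append(np.asarray(scores, dtype=float), np.inf))
    rank = int(np.ceil((1.0 - alpha) * (len(scores) + 1)))  # 1-based rank
    return s[rank - 1]

def vbcp_set(nc_candidate, val_scores, alpha):
    """VBCP predictor (16): keep labels whose NC score is below the validation quantile.

    nc_candidate: dict mapping label -> NC score NC((x, y')|D_tr)
    val_scores: NC scores evaluated on the validation set
    """
    thr = vb_quantile(val_scores, alpha)
    return {y for y, s in nc_candidate.items() if s <= thr}
```

Note that appending +∞ in (17) means the threshold becomes infinite, and hence all labels are kept, when the validation set is too small for the requested level α.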
B. CrossValidationBased CP (CVCP)
VBCP has the computational advantage of requiring the training of a single model, but the split into training and validation data causes the available data to be used inefficiently. This data inefficiency generally yields set predictors with a large average size (8). Unlike VBCP, cross-validation-based CP (CVCP) [53] trains multiple models, each using a subset of the available data set D, as detailed next and summarized in Fig. 6.

Specifically, as illustrated in Fig. 6, K-fold CVCP [53], referred to here as K-CVCP, first partitions the data set D into K disjoint folds {S_k}_{k=1}^{K}, each with N/K points, i.e., ∪_{k=1}^{K} S_k = D (Fig. 6(a)), for a predefined integer K ∈ {2, . . . , N} such that the ratio N/K is an integer.
During training, the K subsets D \ S_k are used to train K probabilistic predictors p(y|x, D \ S_k), defined as in (3) (Fig. 6(b)). Each trained model p(y|x, D \ S_k) is used to evaluate the |S_k| = N/K NC scores NC(z_k|D \ S_k) for all validation data points z_k ∈ S_k that were not used for training the model (Fig. 6(c)). Unlike VBCP, K-CVCP requires keeping in memory all N validation scores for testing. These points are illustrated as crosses in Fig. 6(c).
During testing, for a given test input x and for any candidate label y′ ∈ Y, K-CVCP evaluates K NC scores, one for each of the K trained models. Each such NC score NC((x, y′)|D \ S_k) is compared with the N/K validation scores obtained on fold S_k. We then count how many of the N/K validation scores are larger than NC((x, y′)|D \ S_k). If the sum of all such counts, across the K folds {S_k}_{k=1}^{K}, is larger than a fraction α of all N data points, then the candidate label y′ is included in the prediction set (Fig. 6(d)). This criterion follows the same principle as VBCP of including all candidate labels y′ that "conform" well with a sufficiently large fraction of validation points.
Mathematically, K-CVCP is defined as

Γ_KCV(x|D) = { y′ ∈ Y : Σ_{k=1}^{K} Σ_{z_k∈S_k} 1( NC((x, y′)|D \ S_k) ≤ NC(z_k|D \ S_k) ) ≥ α(N + 1) },  (18)

where 1(·) is the indicator function (1(true) = 1 and 1(false) = 0). The left-hand side of the inequality in (18) implements the sums, shown in Fig. 6(d), over counts of validation NC scores that are larger than the corresponding NC score for the candidate pair (x, y′).
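A sketch of the counting rule (18), assuming the NC scores of the candidate pair and of the validation points have already been computed for each fold (helper names are illustrative):

```python
import numpy as np

def kcv_set(cand_scores, val_scores, alpha):
    """K-CVCP predictor (18) over precomputed NC scores.

    cand_scores: dict mapping label -> list of K scores NC((x, y')|D \ S_k)
    val_scores: list of K arrays; entry k holds {NC(z_k|D \ S_k)} for z_k in S_k
    A label is kept if, summed over folds, the number of validation scores that
    are at least its fold score reaches alpha * (N + 1).
    """
    n_total = sum(len(v) for v in val_scores)  # N validation scores overall
    kept = set()
    for label, scores in cand_scores.items():
        count = sum(
            int(np.sum(np.asarray(vk, dtype=float) >= sk))
            for sk, vk in zip(scores, val_scores)
        )
        if count >= alpha * (n_total + 1):
            kept.add(label)
    return kept
```

A label with low NC scores (i.e., one that "conforms" with the data) accumulates many validation scores above it and is therefore included.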
K-CVCP increases the computational complexity K-fold as compared to VBCP, while generally reducing the inefficiency [53]. The special case of K = N, known as jackknife+ [53], is referred to here as CVCP. In this case, each of the N folds S_k, k = 1, . . . , N, contains a single cross-validation point. In general, CVCP is the most efficient form of K-CVCP, but it may be impractical for large data set sizes due to the need to train N models. The number of folds K should thus strike a balance between computational complexity, as K models are trained, and inefficiency.

Specifically, for frequentist learning, the optimization algorithm producing the parameter vector φ*_D in (3) must be permutation-invariant. This is the case for standard methods such as full-batch gradient descent (GD), or for nonparametric techniques such as Gaussian processes. For Bayesian learning, the distribution q*(φ|D′) in (3) must also be permutation-invariant, which is true for the exact posterior distribution [23], as well as for approximations obtained via MC methods such as Langevin MC [23], [31].
The requirement of permutation-invariance can be relaxed by allowing for probabilistic training algorithms such as stochastic gradient descent (SGD) [70]. With probabilistic training algorithms, the only requirement is that the distribution of the (random) output models is permutation-invariant. This is, for instance, the case if SGD is implemented by taking mini-batches uniformly at random within the training set D [70]-[72]. With probabilistic training algorithms, however, the validity condition (7) of CVCP is only guaranteed on average with respect to the random outputs of the algorithms.
Specifically, under the discussed assumption of permutation-invariance of the NC scores, by [53, Theorems 1 and 4], CVCP satisfies the inequality

P( y ∈ Γ_CV(x|D) ) ≥ 1 − 2α,  (19)
while K-CVCP satisfies the inequality

P( y ∈ Γ_KCV(x|D) ) ≥ 1 − 2α − min{ 2(1 − 1/K)/(N/K + 1), (1 − K/N)/(K + 1) } ≥ 1 − 2α − 2/N.  (20)
Therefore, validity for both cross-validation schemes is only guaranteed for the larger miscoverage level 2α. Accordingly, one can achieve a miscoverage level of α, satisfying (7), by running the CVCP set predictor Γ_CV(x|D) with α/2 in lieu of α in (18). That said, in the experiments, we will follow the recommendation of [53] and [71] to use α in (18).
V. ONLINE CONFORMAL PREDICTION
In this section, we turn to online CP. Unlike the CP schemes presented in the previous section, online CP makes no assumptions about the probabilistic model underlying data generation [14], [61]. In the offline version of CP reviewed in the previous section, all N samples of the data set D are assumed to be available upfront (see Fig. 3(a)). In contrast, in online CP, a set predictor Γ_i for time index i is produced for each new input x[i] over time i = 1, 2, . . .. Specifically, given the past observations {z[j]}_{j=1}^{i−1}, the set predictor Γ_i(x[i] | {z[j]}_{j=1}^{i−1}) outputs a subset of the output space Y. Given a target miscoverage level α ∈ [0, 1], an online set predictor is said to be (1 − α)-long-term valid if the following limit holds
lim_{I→∞} (1/I) Σ_{i=1}^{I} 1( y[i] ∈ Γ_i(x[i] | {z[j]}_{j=1}^{i−1}) ) = 1 − α  (21)
for all possible sequences z[i] with i = 1, 2, . . . Note that the condition (21), unlike (7), does not involve any ensemble averaging with respect to the data distribution. We will take (21) as the relevant definition of calibration for online learning.
Rolling conformal inference (RCI) [61] adapts, in an online fashion, a calibration parameter θ[i] across the time index i as a function of the instantaneous error variable

err[i] = 1( y[i] ∉ Γ_i(x[i]) ),  (22)
which equals 1 if the correct output value is not included in the prediction set Γ i (x[i]), and 0 otherwise. This is done using the update rule
θ[i + 1] ← θ[i] + γ (err[i] − α),  (23)
where γ > 0 is a learning rate. Accordingly, the parameter θ is increased by γ(1 − α) if an error occurs at time i, and is decreased by γα otherwise. Intuitively, a large positive parameter θ[i] indicates that the set predictor should be more inclusive in order to meet the validity constraint (21); vice versa, a large negative value of θ[i] suggests that the set predictor can reduce the size of the prediction sets without affecting the long-term validity constraint (21).
Following [61], we elaborate on the use of the calibration parameter θ[i] in order to ensure condition (21) for an online version of the naïve quantile-based predictor (14) for scalar regression. A similar approach applies more broadly (see [14], [73], and [74]). Writing D[i] = {z[j]}_{j=1}^{i−1} for the previously observed pairs, the RCI set predictor is given as

Γ_i^RCI(x[i] | D[i]) = [ ŷ(x|φ_{D[i],α/2}) − ϕ(θ[i]), ŷ(x|φ_{D[i],1−α/2}) + ϕ(θ[i]) ],  (24)

where

ϕ(θ) = sign(θ)( exp(|θ|) − 1 )  (25)

is the so-called stretching function, a fixed monotonically increasing mapping.
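One RCI round, combining the error variable (22), the update (23), and the stretched interval (24)-(25), can be sketched as follows; the interval endpoints are assumed to come from pretrained quantile predictors, and the helper names are ours:

```python
import math

def stretch(theta):
    """Stretching function (25): phi(theta) = sign(theta)(exp(|theta|) - 1)."""
    return math.copysign(math.exp(abs(theta)) - 1.0, theta)

def rci_step(theta, lo, hi, y, alpha, gamma):
    """One RCI round: widen the NQB interval [lo, hi] by phi(theta), record the
    error variable (22), and update theta with rule (23).

    Returns (covered, new_theta), where covered indicates whether the true
    value y fell inside the stretched prediction interval (24).
    """
    phi = stretch(theta)
    covered = (lo - phi) <= y <= (hi + phi)
    err = 0.0 if covered else 1.0
    return covered, theta + gamma * (err - alpha)
```

On a miss, θ grows by γ(1 − α) and the next interval is wider; on a hit, θ shrinks by γα, so the long-run miss rate is driven toward α.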
The RCI set predictor (24) "corrects" the NQB set predictor (14) through the additive term ϕ(θ[i]).
VI. SYMBOL DEMODULATION
In this section, we focus on the application of offline CP, as described in Sec. IV, to the problem of symbol demodulation in the presence of transmitter hardware imperfections. This problem was also considered in [20], [75] by focusing on frequentist and Bayesian learning. Unlike [20], [75], we investigate the use of CP as a means to obtain set predictors satisfying the validity condition (7).
A. Problem Formulation
The problem of interest consists of the demodulation of symbols from a discrete constellation based on received baseband signals subject to hardware imperfections, noise, and fading. The goal is to design set demodulators that output a subset of all possible constellation points with the guarantee that the subset includes the true transmitted signal with the desired target probability 1 − α. In the context of channel decoding, this type of receiver is referred to as a list decoder [76].
To keep the notation consistent with the previous sections, we write as y[i] the i-th transmitted symbol, and as x[i] the corresponding received signal. Each transmitted symbol y[i] is drawn uniformly at random from a given constellation Y. We model I/Q imbalance at the transmitter and phase fading as in [62]. Accordingly, the ground-truth channel law mapping symbols y[i] to received samples x[i] is described by the equality
x[i] = e^{jψ} f_IQ(y[i]) + v[i],  (26)

for a random phase ψ ∼ U[0, 2π), where the additive noise is v[i] ∼ CN(0, SNR^{−1}) for a signal-to-noise ratio level SNR. Furthermore, the I/Q imbalance function [77] is defined as

f_IQ(y[i]) = ȳ_I[i] + jȳ_Q[i],  (27)

where

[ȳ_I[i] ; ȳ_Q[i]] = [[1 + ε, 0], [0, 1 − ε]] [[cos δ, −sin δ], [−sin δ, cos δ]] [y_I[i] ; y_Q[i]],  (28)
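Assuming the reconstructed forms of (26)-(28), with amplitude imbalance ε and phase imbalance δ (in radians), a simulation of the channel law can be sketched as follows (function names are illustrative):

```python
import numpy as np

def iq_imbalance(y, eps, delta):
    """I/Q imbalance (27)-(28) applied to an array of complex symbols y.

    The I/Q branches are scaled by (1 + eps) and (1 - eps), and the phase
    imbalance delta couples them as in the matrix product of (28).
    """
    yi, yq = np.real(y), np.imag(y)
    yi_bar = (1 + eps) * (np.cos(delta) * yi - np.sin(delta) * yq)
    yq_bar = (1 - eps) * (-np.sin(delta) * yi + np.cos(delta) * yq)
    return yi_bar + 1j * yq_bar

def channel(y, eps, delta, snr, rng):
    """Ground-truth channel (26): random phase rotation plus complex AWGN
    with total noise power 1/snr (split equally between I and Q)."""
    psi = rng.uniform(0.0, 2.0 * np.pi)
    noise = (rng.normal(size=y.shape) + 1j * rng.normal(size=y.shape)) \
        * np.sqrt(0.5 / snr)
    return np.exp(1j * psi) * iq_imbalance(y, eps, delta) + noise
```

With ε = δ = 0 the imbalance reduces to the identity, so the channel is a pure phase rotation plus noise, as expected from (26).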
B. Implementation
As in [20], [75], demodulation is implemented via a neural network probabilistic model p(y|x, φ) consisting of a fully connected network with real inputs x[i] of dimension 2, as per (26), followed by three hidden layers with 10, 30, and 30 neurons having ReLU activations in each layer. The last layer implements a softmax classification over the |Y| possible constellation points.
We adopt the standard NC score (15), where the trained model φ_D for frequentist learning is obtained via I = 120 GD update steps for the minimization of the cross-entropy training loss with learning rate η = 0.2; for Bayesian learning, we implement a gradient-based MC method, namely Langevin MC, with burn-in period R_min = 100, ensemble size R = 20, learning rate η = 0.2, and temperature parameter T = 20. We assume a standard Gaussian prior distribution [31]. Details on Langevin MC can be found in Appendix A.
We compare the naïve set predictor (9), also studied in [20], [75], which provides no formal coverage guarantees, with the CP set prediction methods reviewed in Sec. IV. VBCP uses sets of equal size for training and validation. We target the miscoverage level α = 0.1, i.e., the coverage level 1 − α = 0.9, and each numerical evaluation is averaged over 50 independent trials (each with a new channel state c) with N^te = 100 test points.
C. Results
We consider Amplitude-Phase-Shift-Keying (APSK) modulation with |Y| = 8. The SNR level is set to SNR = 5 dB. The amplitude and phase imbalance parameters are independent and distributed as ε/0.15 ∼ Beta(5, 2) and δ/15° ∼ Beta(5, 2), respectively [75].
Fig. 7 shows the empirical coverage

coverage = (1/N^te) Σ_{j=1}^{N^te} 1( y^te[j] ∈ Γ(x^te[j]|D) ),  (29)

and Fig. 8 shows the empirical inefficiency

inefficiency = (1/N^te) Σ_{j=1}^{N^te} |Γ(x^te[j]|D)|,  (30)

both evaluated on a test set D^te = {(x^te[j], y^te[j])}_{j=1}^{N^te} with N^te = 100, as a function of the size of the available data set D. We average the results over 50 independent trials, each corresponding to independent draws of the variables {D, D^te} from the ground-truth distribution. This way, the metrics (29)-(30) provide estimates of the coverage (7) and of the inefficiency (8), respectively [53].
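The empirical metrics (29)-(30) amount to simple averages over the test set; a minimal sketch with an illustrative name:

```python
def empirical_metrics(pred_sets, labels):
    """Empirical coverage (29) and inefficiency (30) over a test set.

    pred_sets: list of predicted label sets, one per test point
    labels: list of true labels, aligned with pred_sets
    """
    n = len(labels)
    coverage = sum(y in s for s, y in zip(pred_sets, labels)) / n
    inefficiency = sum(len(s) for s in pred_sets) / n
    return coverage, inefficiency
```

Averaging these two quantities over independent draws of {D, D^te}, as described above, yields the Monte Carlo estimates reported in the figures.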
From Fig. 7, we first observe that the naïve set predictor, with both frequentist and Bayesian learning, does not meet the desired coverage level in the regime of a small number N of available samples. In contrast, confirming the theoretical calibration guarantees presented in Sec. IV, all CP methods provide coverage guarantees, achieving coverage rates above 1 − α. Furthermore, as seen in Fig. 8, coverage guarantees are achieved by suitably increasing the size of the prediction sets, which is reflected in the larger inefficiency. The size of the prediction sets, and hence the inefficiency, decreases as the data set size N increases. In this regard, due to their more efficient use of the available data, CVCP and K-CVCP predictors have a lower inefficiency as compared to VBCP, with CVCP offering the best performance. Finally, Bayesian NC scores are generally seen to yield set predictors with lower inefficiency, confirming the merits of Bayesian learning in terms of calibration.
VII. MODULATION CLASSIFICATION
In this section, we propose and evaluate the application of offline CP to the problem of modulation classification [45], [46].
A. Problem Formulation
Due to the scarcity of frequency bands, electromagnetic spectrum sharing among licensed and unlicensed users is of special interest to improve the efficiency of spectrum utilization. In sensing-based spectrum sharing, a transmitter scans the prospective frequency bands to identify, for each band, whether the spectrum is occupied and, if so, whether the signal is from a licensed user. A key enabler for this operation is the ability to classify the modulation of the received signal [78]. The modulation classification task is made challenging by the dimensionality of the baseband input signal and by the distortions caused by the propagation channel. Data-driven solutions [79] have been shown to be effective for this problem in terms of accuracy, while the focus here is on calibration performance. Accordingly, we aim at designing set modulation classifiers that output a subset of the set of all possible modulation schemes with the property that the true modulation scheme is contained in the subset with a desired probability level 1 − α. To this end, we adopt the data set provided by [80], which has approximately 2.5 × 10^6 baseband signals of 1024 I/Q samples, each produced using one out of 24 possible digital and analog modulations across different SNR values and channel models. We focus only on the high-SNR regime (≥ 6 dB); the resulting data set D consists of approximately 1.28 × 10^6 (x, y) pairs, where x is the channel output signal of dimension 2048 and y is the index of one of the |Y| = 24 possible modulations. The SNR value itself is not available to the classifier.
B. Implementation
We use a neural network architecture similar to the one used in [80], which has 7 one-dimensional convolutional layers with kernel size 3 and 64 channels for all layers, except for the first layer, which has 2 channels. The convolutional layers are followed by 3 fully connected linear layers. A scaled exponential linear unit (SELU) is used for all inner layers, and a softmax is used at the last, fully connected, layer. We assume availability of N = 4800 pairs (x, y) for the data set D, while gauging the empirical inefficiency and coverage level with N^te = 1000 held-out pairs. A total of I = 4000 GD steps with a fixed learning rate of 0.02 is carried out, and the target miscoverage rate is set to α = 0.1. VBCP partitions its available data into equal sets for training and validation.
C. Results
In this problem, due to computational cost, we exclude CVCP and focus on K-CVCP with a moderate number of folds, namely K = 6 and K = 12. In Fig. 9, box plots show the quartiles of the empirical coverage (29) and of the empirical inefficiency (30). As also noted in the previous section, VBCP suffers from a larger predicted set size as compared to K-CVCP, due to its poorer sample efficiency. A small number of folds, as low as K = 6, is sufficient for K-CVCP to outperform VBCP. This improvement in efficiency comes at the computational cost of training six models, as compared to the single model trained by VBCP.
VIII. ONLINE CHANNEL PREDICTION
In this section, we investigate the use of online CP, as described in Sec. V, for the problem of channel prediction.
We specifically focus on the prediction of the received signal strength (RSS), which is a key primitive at the physical layer, supporting important functionalities such as resource allocation [81], [82].
A. Problem Formulation
Consider a receiver that has access to a sequence of RSS samples from a given device. We aim at designing a predictor that, given a sequence of past samples from the RSS sequence, produces an interval of values for the next RSS sample. To meet calibration requirements, the interval must contain the correct future RSS value with the desired rate level 1 − α. Unlike in the previous applications, here the rate of coverage is evaluated based on the time average

(1/t) Σ_{i=1}^{t} 1( y[i] ∈ Γ_i(x[i] | {z[j]}_{j=1}^{i−1}) ).  (31)
B. Implementation
We build the CP set predictor by leveraging the probabilistic neural network used in [61] as the model class for the quantile predictors in (13)-(14). Each quantile predictor consists of a multilayer neural network that pre-processes the most recent K pairs {z[i − K], . . . , z[i − 1]}; of a stacked long short-term memory (LSTM) [83] network with two layers; and of a post-processing neural network, which maps the last LSTM hidden vector into a scalar that estimates the quantile used in (14). For details of the implementation, we refer to Appendix C.

C. Results

Fig. 10 and Fig. 11 report the time-average coverage

coverage = (1/I) Σ_{i=1}^{I} 1( y[i] ∈ Γ_i(x[i] | {z[j]}_{j=1}^{i−1}) )  (32)

and the time-average inefficiency

inefficiency = (1/I) Σ_{i=1}^{I} |Γ_i(x[i] | {z[j]}_{j=1}^{i−1})|  (33)

for online CP (24), compared with the baseline naïve quantile-based predictor (14), as a function of the time window size I for the two data sets considered in [63]. We have discarded 1000 samples as a warm-up period for both metrics (32) and (33).
In both cases, the naïve predictor is seen to fail to satisfy the coverage condition (21).
APPENDIX A
BACKGROUND ON FREQUENTIST AND BAYESIAN LEARNING

Frequentist learning minimizes the training log-loss

min_φ L_D(φ) = −(1/N) Σ_{(x,y)∈D} log p(y|x, φ)  (34)
= E_{p_D(x,y)}[ − log p(y|x, φ) ],

with the empirical distribution p_D(x, y) defined by the data set D.
Bayesian learning addresses epistemic uncertainty by treating the model parameter vector as a random vector φ with prior distribution φ ∼ p(φ). Ideally, Bayesian learning updates the prior p(φ) to produce the posterior distribution p(φ|D) as

p(φ|D) ∝ p(φ) Π_{i=1}^{N} p(y[i]|x[i], φ),  (35)
and obtains the ensemble predictor for the test point (x, y) by averaging over multiple models, i.e.,
p(y|x, D) = E_{p(φ|D)}[ p(y|x, φ) ].  (36)
In practice, as the true posterior distribution is generally intractable due to the normalizing factor in (35), approximate Bayesian approaches are considered via VI or MC techniques (see, e.g., [23]).
In the experiments, we adopted Langevin MC to approximate the Bayesian posterior [23], [31]. Langevin MC adds Gaussian noise to each standard GD update used for frequentist learning (see, e.g., [23, Sec. 4.10]). The noise has power 2η/T, where η is the GD learning rate and T > 0 is a temperature parameter. Langevin MC produces R model parameter vectors {φ[r]}_{r=1}^{R} across R consecutive iterations, retained after discarding an initial burn-in period of R_min iterations. The temperature parameter T is typically chosen to be larger than 1 [86], [87]. With the R samples, the expectation in (36) is approximated by the empirical average (1/R) Σ_{r=1}^{R} p(y|x, φ[r]).
We observe that Langevin MC is a probabilistic training algorithm, and that it satisfies the permutationinvariance property in terms of the distribution of the random output models discussed in Sec. IVC.
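A minimal sketch of the Langevin MC procedure described above, with noisy GD updates of power 2η/T and burn-in discarding; the quadratic-loss usage in the test is only an illustration, and the function name is ours:

```python
import numpy as np

def langevin_mc(grad_loss, phi0, eta, temp, r_min, r_keep, rng):
    """Langevin MC sketch: GD updates plus Gaussian noise of power 2*eta/temp.

    grad_loss: function returning the gradient of the training loss at phi
    Keeps the last r_keep parameter samples after a burn-in of r_min iterations.
    """
    phi = np.asarray(phi0, dtype=float)
    samples = []
    for r in range(r_min + r_keep):
        noise = rng.normal(size=phi.shape) * np.sqrt(2.0 * eta / temp)
        phi = phi - eta * grad_loss(phi) + noise  # noisy GD update
        if r >= r_min:
            samples.append(phi.copy())            # retain post-burn-in samples
    return samples
```

The returned samples {φ[r]} are then used to form the empirical average that approximates the ensemble predictor (36).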
APPENDIX B ALGORITHMIC DETAILS FOR ROLLING CONFORMAL INFERENCE
The RCI algorithm is reproduced from [61] in Algorithm 1. The architecture of the set predictor is inspired by [61], and is made out of three artificial neural networks. The first pre-processes the most recent observed pairs into a sequence of vectors w_k[i]; the second is a two-layer stacked LSTM with updates

[c^1_k[i], h^1_k[i]] = f_LSTM( w_k[i], c^1_{k−1}[i], h^1_{k−1}[i] | φ^1_LSTM[i] )  (37)
[c^2_k[i], h^2_k[i]] = f_LSTM( h^1_k[i], c^2_{k−1}[i], h^2_{k−1}[i] | φ^2_LSTM[i] );  (38)

the third post-processes the last LSTM hidden vector into the quantile estimates. The miscoverage rate was set to α = 0.1, the learning rate to η = 0.01, and we chose γ = 0.03 for the calibration parameter θ in (23).
Fig. 1. (a) Examples of probabilistic predictors for two inputs x_1 and x_2, compared with the ground-truth distribution in the second column.
Fig. 2. Set predictors produce subsets of the range of the output variable for each input. Calibration is measured with respect to a desired coverage level 1 − α: a set predictor is well calibrated if the true label is included in the prediction set with probability at least 1 − α.
Mathematically, exchangeability requires that the joint distribution p_0(D, z) does not depend on the ordering of the N + 1 variables {z[1], . . . , z[N], z}. Equivalently, by de Finetti's theorem [65], there exists a latent random vector c with distribution p_0(c) such that, conditioned on c, the variables {z[1], . . . , z[N], z} are i.i.d.
Fig. 3. (a) The validity condition (7) assumed in offline CP is relevant if one is interested in the average performance with respect to realizations (D, z) ∼ p_0(D, z) of the training set D and the test variable z = (x, y). The input variable x is not explicitly shown in the figure, and the horizontal axis runs over the training examples in D and the test example z. (b) In online CP, the set predictor Γ_i uses its input x[i] and all previously observed pairs z[1], . . . , z[i − 1], with z[i] = (x[i], y[i]), to produce a prediction set. The long-term validity (21) targeted by online CP is defined as the empirical time-average rate at which the predictor Γ_i includes the true target variable y[i].
Fig. 5. Validation-based conformal prediction (VBCP): (a) the data set is split into training and validation sets; (b) a single model is trained on the training data set; (c)-(d) post-hoc calibration is performed by evaluating the NC scores on the validation set (c) and by identifying the (1 − α)-quantile of the validation NC scores, which divides the axis of NC scores into a "keep" region of NC scores smaller than the threshold and a complementary "discard" region (d); (e) for each test input x, VBCP includes in the prediction set all labels y′ ∈ Y for which the NC score of the pair (x, y′) is within the "keep" region.
During the training phase of CVCP, each data point z[i] in the validation set is assigned an NC score based on a model trained using a subset of the data set D that excludes z[i], with i ∈ {1, . . . , N}. Then, for testing, the inclusion of a label y′ in the prediction set for an input x is based on a comparison of the NC scores evaluated for the pair (x, y′) with all N validation NC scores.
Fig. 6. K-fold cross-validation-based conformal prediction (K-CVCP): (a) the N data pairs of data set D are split into K folds, each with |S_k| = N/K samples; (b) K models are trained, each using a leave-fold-out data set D \ S_k of N − N/K pairs; (c) NC scores are computed on the N/K held-out data points of each fold S_k; (d) for each test input x, all labels y′ ∈ Y for which the number of "higher-NC" validation points exceeds a fraction α of the total N points are included in the prediction set. CVCP is the special case with K = N.

C. Calibration Guarantees

VBCP (16) satisfies the coverage condition (7) [10] under the only assumption of exchangeability (see Sec. II-A). The validity of CVCP requires a technical assumption on the NC score. While in VBCP the NC score is an arbitrary score function evaluated based on any pre-trained probabilistic model, for CVCP the NC score NC(z|D′) must satisfy the additional property of being invariant to permutations of the data set D′ used to train the underlying probabilistic model. Consider the log-loss NC(z = (x, y)|D′) = − log p(y|x, D′) in (15), or any other score function based on the trained model p(y|x, D′), as the NC score. CVCP requires that the training algorithm used to produce the model p(y|x, D′) provides outputs that are invariant to permutations of the training set D′.
Rather, it models the observations as a deterministic stream of input-output pairs z[i] = (x[i], y[i]) over the time index i = 1, 2, . . .; and it targets a coverage condition defined in terms of the empirical rate at which the prediction set Γ_i at time i covers the correct output y[i].
, via the additive stretching function ϕ(θ[i]) based on the calibration parameter θ[i]. As the time index i advances, the calibration parameter θ[i] adaptively inflates and deflates according to (23). Upon each observation of a new label y[i], the quantile-predictor model parameters φ_{D[i],α/2} and φ_{D[i],1−α/2} can also be updated without affecting the long-term validity condition (21) [61, Theorem 1]. We refer to Appendix B for further details on online CP.
with y_I[i] and y_Q[i] being the real and imaginary parts of the modulated symbol y[i], and ȳ_I[i] and ȳ_Q[i] standing for the real and imaginary parts of the transmitted symbol f_IQ(y[i]). In (28), the channel state c consists of the tuple c = (ψ, ϵ, δ), encompassing the complex phase ψ and the I/Q imbalance parameters (ϵ, δ).
Fig. 7. Coverage for the naïve set predictor (9), VBCP (16), CV-CP, and K-CV-CP (18) with K = 4, for the symbol demodulation problem (Section VI). For every set predictor, the NC scores are evaluated using either frequentist learning (dashed lines) or Bayesian learning (solid lines).
Fig. 8. Average set prediction size (inefficiency) for the same setting as Fig. 7.
Fig. 7 reports the empirical coverage, i.e., the fraction of test examples j for which y_te[j] ∈ Γ(x_te[j] | D).
Fig. 9. Coverage and inefficiency for NPB (9), VBCP (16), and K-CV-CP (18) with K = 6 and K = 12, for the modulation classification problem (implementation details in Section VII-B). The boxes represent the 25% (lower edge), 50% (solid line within the box), and 75% (upper edge) percentiles of the empirical performance metrics evaluated over 32 different experiments, with the average value shown by the dashed line.
from 32 independent runs, with different realizations of the data set and test examples. The lower edge of each box represents the 0.25-quantile; the solid line within the box the median; the dashed line within the box the average; and the upper edge of the box the 0.75-quantile. As can be seen in the figure, the naïve set predictor is invalid (see the average shown as a dashed line), and it exhibits a wide spread of coverage rates across the trials. In contrast, all CP set predictors are valid, meeting the predetermined coverage level 1 − α = 0.9, and have less spread-out coverage rates.
Fig. 10. CP for the time series Outdoor of [64]: (top) time-averaged coverage (32) of naïve set prediction and online CP; (bottom) time-averaged inefficiency (33) of naïve set prediction and online CP.
Fig. 11. CP for the time series NLOS_Head_Indoor_1khz of [63]: (top) time-averaged coverage (32) of naïve set prediction and online CP; (bottom) time-averaged inefficiency (33) of naïve set prediction and online CP.
f_pre(·), is a multi-layer perceptron (MLP) with hidden layers of 16 and 32 neurons, respectively, parameterized by the vector φ_pre[i]. It preprocesses the most recent K = 20 observed pairs {z[i − K], . . . , z[i − 1]}, transforming them element-wise into a length-K vector w[i] = (w_1[i], . . . , w_K[i]), whose k-th element (k = 1, . . . , K) is w_k[i] = f_pre(z[i − K + k − 1] | φ_pre[i]). Effectively, this serves as a temporal sliding window of length K with a time-evolving preprocessing function. The second neural network, f_LSTM(·), has two layers with model parameter vectors φ^1_LSTM[i] (first layer) and φ^2_LSTM[i] (second layer), and retains memory via the hidden and cell state vectors h and c, initialized at every time index i as c^1_0[i] = c^2_0[i] = h^1_0[i] = h^2_0[i] = 0. By accessing the previous K pairs through the vector w[i], this recurrent neural network extracts temporal patterns by sequentially passing information through the K LSTM cells (with shared parameter vectors), updating the hidden and cell state vectors h_k[i] and c_k[i]. These vectors flow along the chain of k = 1, . . . , K cells, each forming a vector of length 32.
The third and last network is a post-processing MLP f_post(·) with one hidden layer of 32 neurons and parameter vector φ_post[i], which maps the last LSTM hidden vector of length 64, h_K[i] = [h^1_K[i], h^2_K[i]], into a scalar f_post(x[i], h_K[i] | φ_post[i]) ∈ R that estimates the quantile of the output y[i]. Accordingly, the time-evolving model parameter is the tuple

φ[i] = (φ_pre[i], φ^1_LSTM[i], φ^2_LSTM[i], φ_post[i]).  (39)

This model is instantiated twice for the regression problem: once for the α/2 lower quantile and once for the 1 − α/2 upper quantile. At every time instant i, after the new output y[i] is observed, continual learning takes place by training the models with the corresponding pinball losses (11) on the new pair (x[i], y[i]), initializing each model at its state from time instant i − 1.
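The pinball loss (11) used to train the two quantile models admits a one-line implementation; the sketch below is a generic version, not tied to the paper's code:

```python
def pinball_loss(y, y_hat, q):
    """Pinball (quantile) loss: penalizes under-estimates with weight q and
    over-estimates with weight (1 - q), so its minimizer is the q-quantile."""
    e = y - y_hat
    return max(q * e, (q - 1.0) * e)
```

Training one model with q = α/2 and the other with q = 1 − α/2 pushes their outputs toward the lower and upper quantiles, respectively, of the conditional distribution of y[i].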
). Denote by D[i] = {z[j]}_{j=1}^{i−1} the data set of all previously observed labeled data up to time i − 1. The key idea behind RCI is to extend the naïve prediction interval (14), depending on the calibration parameter θ[i], as
We consider two data sets of RSS sequences. The first data set records RSS samples y[i] in logarithmic scale for an IEEE 802.15.4 radio over time index i [64]. We further use the available side information on the time-variant channel ID, which determines the carrier frequency used at time i out of the 16 possible bands, as the input x[i]. At time i, we observe a sequence of RSS samples z[1], . . . , z[i − 1] with z[i] = (x[i], y[i]), and the goal is to predict the next RSS sample y[i] via the online set predictor Γ^RCI_i(x[i] | {z[j]}_{j=1}^{i−1}) (24). The second data set [63] reports samples y[i], measured in dBm, on a 5.8 GHz device-to-device link without additional input. Hence, in this case, we predict the next RSS sample y[i] using the previous RSS samples y[1], . . . , y[i − 1]. Note that the prior works [63], [64] adopted standard probabilistic predictors, while here we focus on set predictors that produce a prediction interval Γ^RCI_i(x[i] | {z[j]}_{j=1}^{i−1}). The time-averaged coverage is computed as the fraction of previous time instants i ∈ {1, ..., t} at which the set predictor Γ_i includes the true RSS value y[i].
for both data sets, while online CP converges to the target level 1 − α = 0.9. This result is obtained by online CP at a modest cost of around 8% in terms of inefficiency for both data sets.

IX. CONCLUSIONS

AI in communication engineering should not only target accuracy, but also calibration, so as to ensure a reliable and safe adoption of machine learning within the overall telecommunication ecosystem. In this paper, we have proposed the adoption of a general framework, known as conformal prediction (CP), to transform any existing AI model for communication engineering into a well-calibrated model via post-hoc calibration. Depending on the situation of interest, post-hoc calibration leverages either a held-out (cross-)validation set or previous samples. Unlike calibration approaches that do not formally guarantee reliability, such as Bayesian learning or temperature scaling, CP provides formal guarantees of calibration, defined either in terms of ensemble averages or of long-term time averages. Calibration is retained irrespective of the accuracy of the trained models, with more accurate models producing smaller prediction sets. To validate the reliability of CP-based set predictors, we have provided extensive comparisons with conventional methods based on Bayesian or frequentist learning.
Focusing on demodulation, modulation classification, and channel prediction, we have demonstrated that AI models calibrated by CP provide formal guarantees of reliability, which are practically essential to ensure calibration in the regime of limited data availability. Future work may consider applications of CP to other use cases in wireless communication systems, as well as extensions involving training-based calibration [55], [84] and/or meta-learning [85].

APPENDIX A
FREQUENTIST AND BAYESIAN LEARNING

Given access to a training data set D = {z[i] = (x[i], y[i])}_{i=1}^N with N examples, frequentist learning finds a model parameter vector φ*_D by tackling the following empirical risk minimization (ERM) problem
Algorithm 1: Rolling Conformal Inference (for Regression) [61]

Inputs: α = long-term target miscoverage level; θ[1] = initial calibration parameter; φ_lo[1], φ_hi[1] = initial models
Parameters: I = number of online iterations; γ = learning rate for the calibration parameter; η = learning rate for model updates
Output: predicted sets {Γ^RCI_i(x[i] | {z[j]}_{j=1}^{i−1})}_{i=1}^I for the inputs {x[i]}_{i=1}^I

1  for i = 1, . . . , I time instants do
2      Retrieve a new data sample (x[i], y[i])
       // Set prediction for the new input
3      Calculate the set Γ^RCI_i(x[i] | {z[j]}_{j=1}^{i−1}) as
           [ŷ(x[i] | φ_lo[i]) − ϕ(θ[i]), ŷ(x[i] | φ_hi[i]) + ϕ(θ[i])]
       // Check whether the prediction is unsuccessful
4      err[i] ← 1{y[i] ∉ Γ^RCI_i(x[i] | {z[j]}_{j=1}^{i−1})}
       // Update the calibration parameter
5      θ[i + 1] ← θ[i] + γ(err[i] − α)
       // Update the models using the new sample
6      φ_lo[i + 1] ← φ_lo[i] − η∇_φ ℓ_{α/2}(y[i], ŷ(x[i] | φ_lo[i]))
7      φ_hi[i + 1] ← φ_hi[i] − η∇_φ ℓ_{1−α/2}(y[i], ŷ(x[i] | φ_hi[i]))
8  return predicted sets {Γ^RCI_i(x[i] | {z[j]}_{j=1}^{i−1})}_{i=1}^I
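A minimal Python sketch of this online loop, keeping only the calibration-parameter update (the quantile-model updates are omitted, and the stretching function ϕ(θ) = max(θ, 0) is an illustrative choice, not the paper's):

```python
def rolling_conformal(stream, y_lo, y_hi, alpha=0.1, gamma=0.05, theta0=0.0):
    """Rolling conformal inference (sketch): implements only the update
    theta[i+1] = theta[i] + gamma * (err[i] - alpha); the quantile models
    y_lo, y_hi are kept fixed for simplicity."""
    theta, sets, errs = theta0, [], []
    phi = lambda t: max(t, 0.0)  # nonnegative stretching function (assumption)
    for x, y in stream:
        lo, hi = y_lo(x) - phi(theta), y_hi(x) + phi(theta)
        sets.append((lo, hi))
        err = 0.0 if lo <= y <= hi else 1.0  # err[i] = 1{y[i] not in set}
        errs.append(err)
        theta += gamma * (err - alpha)  # inflate on miss, deflate on cover
    return sets, errs, theta
```

On a stationary stream, θ oscillates around the value at which the empirical miss rate matches α, which is the mechanism behind the long-term coverage guarantee.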
APPENDIX C
IMPLEMENTATION OF ONLINE CHANNEL PREDICTION
https://github.com/kclip/cp4wireless
[1] B. Wang and K. R. Liu, "Advances in Cognitive Radio Networks: A Survey," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 1, pp. 5–23, 2010.
[2] O. Simeone, I. Stanojev, S. Savazzi, Y. Bar-Ness, U. Spagnolini, and R. Pickholtz, "Spectrum Leasing to Cooperating Secondary Ad Hoc Networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 1, pp. 203–213, 2008.
[3] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, "On Calibration of Modern Neural Networks," in International Conference on Machine Learning. PMLR, 2017, pp. 1321–1330.
[4] A. Masegosa, "Learning under Model Misspecification: Applications to Variational and Ensemble Methods," Proc. Advances in Neural Information Processing Systems (NIPS), vol. 33, pp. 5479–5491, 2020.
[5] W. R. Morningstar, A. Alemi, and J. V. Dillon, "PACm-Bayes: Narrowing the Empirical Risk Gap in the Misspecified Bayesian Regime," in Proc. International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 8270–8298.
[6] M. Zecchin, S. Park, O. Simeone, M. Kountouris, and D. Gesbert, "Robust PACm: Training Ensemble Models Under Model Misspecification and Outliers," arXiv preprint arXiv:2203.01859, 2022.
[7] P. Cannon, D. Ward, and S. M. Schmon, "Investigating the Impact of Model Misspecification in Neural Simulation-based Inference," arXiv preprint arXiv:2209.01845, 2022.
[8] D. T. Frazier, C. P. Robert, and J. Rousseau, "Model Misspecification in Approximate Bayesian Computation: Consequences and Diagnostics," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 82, no. 2, pp. 421–444, 2020.
[9] J. Ridgway, "Probably Approximate Bayesian Computation: Nonasymptotic Convergence of ABC under Misspecification," arXiv preprint arXiv:1707.05987, 2017.
[10] V. Vovk, A. Gammerman, and G. Shafer, Algorithmic Learning in a Random World. Springer, New York, 2005.
[11] G. Zeni, M. Fontana, and S. Vantini, "Conformal Prediction: a Unified Review of Theory and New Challenges," arXiv preprint arXiv:2005.07972, 2020.
[12] A. N. Angelopoulos and S. Bates, "A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification," 2021.
[13] J. Vaicenavicius, D. Widmann, C. Andersson, F. Lindsten, J. Roll, and T. Schön, "Evaluating Model Calibration in Classification," Proc. 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 3459–3467, 2019.
[14] I. Gibbs and E. Candès, "Adaptive Conformal Inference Under Distribution Shift," 2021. [Online]. Available: https://arxiv.org/abs/2106.00170
[15] O. Simeone, "A Very Brief Introduction to Machine Learning with Applications to Communication Systems," IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 648–664, 2018.
[16] L. Dai, R. Jiao, F. Adachi, H. V. Poor, and L. Hanzo, "Deep Learning for Wireless Communications: An Emerging Interdisciplinary Paradigm," IEEE Wireless Communications, vol. 27, no. 4, pp. 133–139, 2020.
[17] N. Shlezinger, N. Farsad, Y. C. Eldar, and A. J. Goldsmith, "Model-Based Machine Learning for Communications," arXiv preprint arXiv:2101.04726, 2021.
[18] T. Erpek, T. J. O'Shea, Y. E. Sagduyu, Y. Shi, and T. C. Clancy, "Deep Learning for Wireless Communications," in Development and Analysis of Deep Learning Architectures. Springer, 2020, pp. 223–266.
[19] Z. Sun, J. Wu, X. Li, W. Yang, and J.-H. Xue, "Amortized Bayesian Prototype Meta-learning: A New Probabilistic Meta-learning Approach to Few-shot Image Classification," in Proc. International Conference on Artificial Intelligence and Statistics. PMLR, 2021, pp. 1414–1422.
[20] K. M. Cohen, S. Park, O. Simeone, and S. Shamai, "Bayesian Active Meta-Learning for Reliable and Efficient AI-Based Demodulation," IEEE Transactions on Signal Processing, vol. 70, pp. 5366–5380, 2022.
[21] M. Zecchin, S. Park, O. Simeone, M. Kountouris, and D. Gesbert, "Robust Bayesian Learning for Reliable Wireless AI: Framework and Applications," arXiv preprint arXiv:2207.00300, 2022.
[22] E. Angelino, M. J. Johnson, R. P. Adams et al., "Patterns of Scalable Bayesian Inference," Foundations and Trends in Machine Learning, vol. 9, no. 2-3, pp. 119–247, 2016.
[23] O. Simeone, Machine Learning for Engineers. Cambridge University Press, 2022.
[24] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, "Weight Uncertainty in Neural Network," in International Conference on Machine Learning. PMLR, 2015, pp. 1613–1622.
[25] S. Ravi and A. Beatson, "Amortized Bayesian Meta-Learning," in Proc. International Conference on Learning Representations (ICLR), Vancouver, Canada, 2018, pp. 1–14.
[26] A. Graves, "Practical Variational Inference for Neural Networks," Proc. Advances in Neural Information Processing Systems (NIPS), Granada, Spain, vol. 24, 2011.
[27] M. Dusenberry, G. Jerfel, Y. Wen, Y. Ma, J. Snoek, K. Heller, B. Lakshminarayanan, and D. Tran, "Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors," in Proc. International Conference on Machine Learning, Baltimore, USA. PMLR, 2020, pp. 2782–2792.
[28] E. Daxberger, A. Kristiadi, A. Immer, R. Eschenhagen, M. Bauer, and P. Hennig, "Laplace Redux - Effortless Bayesian Deep Learning," Proc. Advances in Neural Information Processing Systems (NIPS), vol. 34, pp. 20089–20103, 2021.
[29] S. Farquhar, L. Smith, and Y. Gal, "Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations," Proc. Advances in Neural Information Processing Systems (NIPS), vol. 33, pp. 4346–4357, 2020.
[30] R. M. Neal et al., "MCMC using Hamiltonian Dynamics," Handbook of Markov Chain Monte Carlo, vol. 2, no. 11, 2011.
[31] M. Welling and Y. W. Teh, "Bayesian Learning via Stochastic Gradient Langevin Dynamics," in Proceedings of the 28th International Conference on Machine Learning (ICML-11), Bellevue, Washington, USA, 2011, pp. 681–688.
[32] R. Zhang, C. Li, J. Zhang, C. Chen, and A. G. Wilson, "Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning," Eighth International Conference on Learning Representations, Virtual Conference, pp. 1–27, 2020.
[33] O. Narmanlioglu, E. Zeydan, M. Kandemir, and T. Kranda, "Prediction of Active UE Number with Bayesian Neural Networks for Self-Organizing LTE Networks," in 2017 8th International Conference on the Network of the Future (NOF). IEEE, 2017, pp. 73–78.
[34] N. K. Jha and V. K. Lau, "Transformer-Based Online Bayesian Neural Networks for Grant-Free Uplink Access in CRAN With Streaming Variational Inference," IEEE Internet of Things Journal, vol. 9, no. 9, pp. 7051–7064, 2021.
[35] L. Maggi, A. Valcarce, and J. Hoydis, "Bayesian Optimization for Radio Resource Management: Open Loop Power Control," IEEE Journal on Selected Areas in Communications, vol. 39, no. 7, pp. 1858–1871, Jul. 2021. [Online]. Available: http://dx.doi.org/10.1109/JSAC.2021.3078490
[36] Z. Wu and H. Li, "Stochastic Gradient Langevin Dynamics for Massive MIMO Detection," IEEE Communications Letters, vol. 26, no. 5, pp. 1062–1065, 2022.
[37] N. Zilberstein, C. Dick, R. Doost-Mohammady, A. Sabharwal, and S. Segarra, "Annealed Langevin Dynamics for Massive MIMO Detection," arXiv preprint arXiv:2205.05776, 2022.
[38] Z. Tao and S. Wang, "Improved Downlink Rates for FDD Massive MIMO Systems through Bayesian Neural Networks-Based Channel Prediction," IEEE Transactions on Wireless Communications, vol. 21, no. 3, pp. 2122–2134, 2021.
[39] N. K. Jha and V. K. Lau, "Online Downlink Multi-User Channel Estimation for mmWave Systems using Bayesian Neural Network," IEEE Journal on Selected Areas in Communications, vol. 39, no. 8, pp. 2374–2387, 2021.
[40] R. Prasad, C. R. Murthy, and B. D. Rao, "Joint Channel Estimation and Data Detection in MIMO-OFDM Systems: A Sparse Bayesian Learning Approach," IEEE Transactions on Signal Processing, vol. 63, no. 20, pp. 5369–5382, 2015.
[41] X. Lv, Y. Li, Y. Wu, X. Wang, and H. Liang, "Joint Channel Estimation and Impulsive Noise Mitigation Method for OFDM Systems Using Sparse Bayesian Learning," IEEE Access, vol. 7, pp. 74500–74510, 2019.
[42] J. Xu, Y. Shen, E. Chen, and V. Chen, "Bayesian Neural Networks for Identification and Classification of Radio Frequency Transmitters using Power Amplifiers' Nonlinearity Signatures," IEEE Open Journal of Circuits and Systems, vol. 2, pp. 457–471, 2021.
[43] X. Zhang, Y.-C. Liang, and J. Fang, "Bayesian Learning Based Multiuser Detection for M2M Communications with Time-Varying User Activities," in 2017 IEEE International Conference on Communications (ICC), Paris, France. IEEE, 2017, pp. 1–6.
[44] X. Zhang, Y.-C. Liang, and J. Fang, "Novel Bayesian Inference Algorithms for Multiuser Detection in M2M Communications," IEEE Transactions on Vehicular Technology, vol. 66, no. 9, pp. 7833–7848, 2017.
[45] T. J. O'Shea, J. Corgan, and T. C. Clancy, "Convolutional Radio Modulation Recognition Networks," in International Conference on Engineering Applications of Neural Networks. Springer, 2016, pp. 213–226.
[46] T. J. O'Shea, T. Roy, and T. C. Clancy, "Over-The-Air Deep Learning Based Radio Signal Classification," IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 168–179, 2018.
[47] X. Song, X. Fan, C. Xiang, Q. Ye, L. Liu, Z. Wang, X. He, N. Yang, and G. Fang, "A Novel Convolutional Neural Network Based Indoor Localization Framework with WiFi Fingerprinting," IEEE Access, vol. 7, pp. 110698–110709, 2019.
[48] N. Dvorecki, O. Bar-Shalom, L. Banin, and Y. Amizur, "A Machine Learning Approach for WiFi RTT Ranging," in Proceedings of the 2019 International Technical Meeting of the Institute of Navigation, 2019, pp. 435–444.
[49] J. Platt et al., "Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods," Advances in Large Margin Classifiers, vol. 10, no. 3, pp. 61–74, 1999.
[50] B. Zadrozny and C. Elkan, "Transforming Classifier Scores into Accurate Multiclass Probability Estimates," in Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002, pp. 694–699.
[51] A. Kumar, P. S. Liang, and T. Ma, "Verified Uncertainty Calibration," Advances in Neural Information Processing Systems, vol. 32, 2019.
[52] X. Ma and M. B. Blaschko, "Meta-cal: Well-Controlled Post-hoc Calibration by Ranking," in International Conference on Machine Learning. PMLR, 2021, pp. 7235–7245.
[53] R. F. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani, "Predictive Inference with the Jackknife+," The Annals of Statistics, vol. 49, no. 1, pp. 486–507, 2021.
[54] D. Stutz, K. D. Dvijotham, A. T. Cemgil, and A. Doucet, "Learning Optimal Conformal Classifiers," in International Conference on Learning Representations, 2021.
[55] B.-S. Einbinder, Y. Romano, M. Sesia, and Y. Zhou, "Training Uncertainty-Aware Classifiers with Conformalized Deep Learning," arXiv preprint arXiv:2205.05878, 2022.
[56] R. J. Tibshirani, R. Foygel Barber, E. Candes, and A. Ramdas, "Conformal Prediction under Covariate Shift," Advances in Neural Information Processing Systems, vol. 32, 2019.
[57] Y. Yang and A. K. Kuchibhotla, "Finite-Sample Efficient Conformal Prediction," arXiv preprint arXiv:2104.13871, 2021.
[58] A. Kumar, S. Sarawagi, and U. Jain, "Trainable Calibration Measures for Neural Networks from Kernel Mean Embeddings," in Proceedings of the 35th International Conference on Machine Learning, vol. 80. PMLR, 2018, pp. 2805–2814.
[59] M. J. Holland, "Making Learning More Transparent using Conformalized Performance Prediction," arXiv preprint arXiv:2007.04486, 2020.
[60] A. Perez-Lebel, M. L. Morvan, and G. Varoquaux, "Beyond Calibration: Estimating the Grouping Loss of Modern Neural Networks," arXiv preprint arXiv:2210.16315, 2022.
[61] S. Feldman, S. Bates, and Y. Romano, "Conformalized Online Learning: Online Calibration Without a Holdout Set," 2022. [Online].
[62] K. M. Cohen, S. Park, O. Simeone, and S. Shamai, "Calibrating AI Models for Few-Shot Demodulation via Conformal Prediction," submitted to 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023.
[63] N. Simmons, S. B. F. Gomes, M. D. Yacoub, O. Simeone, S. L. Cotton, and D. E. Simmons, "AI-Based Channel Prediction in D2D Links: An Empirical Validation," IEEE Access, vol. 10, pp. 65459–65472, 2022.
[64] A. Zanella and A. Bardella, "RSS-based Ranging by Multichannel RSS Averaging," IEEE Wireless Communications Letters, vol. 3, no. 1, pp. 10–13, 2013.
[65] B. de Finetti, Theory of Probability, vol. 1, 1974.
[66] S. Park, O. Bastani, J. Weimer, and I. Lee, "Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation," Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
[67] R. Koenker and G. Bassett Jr, "Regression Quantiles," Econometrica: Journal of the Econometric Society, pp. 33–50, 1978.
[68] I. Steinwart and A. Christmann, "Estimating Conditional Quantiles with the Help of the Pinball Loss," Bernoulli, vol. 17, no. 1, pp. 211–225, 2011.
[69] J. Lei, M. G'Sell, A. Rinaldo, R. J. Tibshirani, and L. Wasserman, "Distribution-Free Predictive Inference for Regression," Journal of the American Statistical Association, vol. 113, no. 523, pp. 1094–1111, 2018.
R F Barber, E J Candes, A Ramdas, R J Tibshirani, arXiv:2202.13415Conformal prediction beyond exchangeability. arXiv preprintR. F. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani, "Conformal prediction beyond exchangeability," arXiv preprint arXiv:2202.13415, 2022.
Classification with Valid and Adaptive Coverage. Y Romano, M Sesia, E Candes, Advances in Neural Information Processing Systems. 33Y. Romano, M. Sesia, and E. Candes, "Classification with Valid and Adaptive Coverage," Advances in Neural Information Processing Systems, vol. 33, pp. 35813591, 2020.
Probabilistic Conformal Prediction Using Conditional Random Samples. Z Wang, R Gao, M Yin, M Zhou, D M Blei, arXiv:2206.06584arXiv preprintZ. Wang, R. Gao, M. Yin, M. Zhou, and D. M. Blei, "Probabilistic Conformal Prediction Using Conditional Random Samples," arXiv preprint arXiv:2206.06584, 2022.
Adaptive Conformal Predictions for Time Series. M Zaffran, O Féron, Y Goude, J Josse, A Dieuleveut, International Conference on Machine Learning. PMLR, 2022. M. Zaffran, O. Féron, Y. Goude, J. Josse, and A. Dieuleveut, "Adaptive Conformal Predictions for Time Series," in International Conference on Machine Learning. PMLR, 2022, pp. 25 83425 866.
Z Lin, S Trivedi, J Sun, arXiv:2205.09940Conformal Prediction with Temporal Quantile Adjustments. arXiv preprintZ. Lin, S. Trivedi, and J. Sun, "Conformal Prediction with Temporal Quantile Adjustments," arXiv preprint arXiv:2205.09940, 2022.
Learning to Demodulate From Few Pilots via Offline and Online MetaLearning. S Park, H Jang, O Simeone, J Kang, IEEE Transactions on Signal Processing. 69S. Park, H. Jang, O. Simeone, and J. Kang, "Learning to Demodulate From Few Pilots via Offline and Online MetaLearning," IEEE Transactions on Signal Processing, vol. 69, pp. 226239, 2021.
List decoding of polar codes. I Tal, A Vardy, IEEE Transactions on Information Theory. 615I. Tal and A. Vardy, "List decoding of polar codes," IEEE Transactions on Information Theory, vol. 61, no. 5, pp. 22132226, 2015.
Joint Adaptive Compensation of Transmitter and Receiver IQ Imbalance under Carrier Frequency Offset in OFDMBased Systems. D Tandur, M Moonen, IEEE Transactions on Signal Processing. 5511D. Tandur and M. Moonen, "Joint Adaptive Compensation of Transmitter and Receiver IQ Imbalance under Carrier Frequency Offset in OFDMBased Systems," IEEE Transactions on Signal Processing, vol. 55, no. 11, pp. 52465252, 2007.
Automatic Modulation Classification: Principles, Algorithms and Applications. Z Zhu, A K Nandi, John Wiley & SonsZ. Zhu and A. K. Nandi, Automatic Modulation Classification: Principles, Algorithms and Applications. John Wiley & Sons, 2015.
Deep Learning for Modulation Recognition: A Survey with a Demonstration. R Zhou, F Liu, C W Gravelle, IEEE Access. 8R. Zhou, F. Liu, and C. W. Gravelle, "Deep Learning for Modulation Recognition: A Survey with a Demonstration," IEEE Access, vol. 8, pp. 67 36667 376, 2020.
OverTheAir Deep Learning Based Radio Signal Classification. T J Shea, T Roy, T C Clancy, IEEE Journal of Selected Topics in Signal Processing. 121T. J. O'Shea, T. Roy, and T. C. Clancy, "OverTheAir Deep Learning Based Radio Signal Classification," IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 168179, 2018.
Optimal Resource Allocation in the OFDMA Downlink with Imperfect Channel Knowledge. I C Wong, B L Evans, IEEE Transactions on Communications. 571I. C. Wong and B. L. Evans, "Optimal Resource Allocation in the OFDMA Downlink with Imperfect Channel Knowledge," IEEE Transactions on Communications, vol. 57, no. 1, pp. 232241, 2009.
Modular MetaLearning for Power Control via Random Edge Graph Neural Networks. I Nikoloska, O Simeone, IEEE Transactions on Wireless Communications. I. Nikoloska and O. Simeone, "Modular MetaLearning for Power Control via Random Edge Graph Neural Networks," IEEE Transactions on Wireless Communications, 2022.
Long ShortTerm Memory. S Hochreiter, J Schmidhuber, Neural computation. 98S. Hochreiter and J. Schmidhuber, "Long ShortTerm Memory," Neural computation, vol. 9, no. 8, pp. 17351780, 1997.
Trainable calibration measures for neural networks from kernel mean embeddings. A Kumar, S Sarawagi, U Jain, International Conference on Machine Learning. PMLRA. Kumar, S. Sarawagi, and U. Jain, "Trainable calibration measures for neural networks from kernel mean embeddings," in International Conference on Machine Learning. PMLR, 2018, pp. 28052814.
FewShot Calibration of Set Predictors via MetaLearned CrossValidationBased Conformal Prediction. S Park, K M Cohen, O Simeone, arXiv:2210.03067arXiv preprintS. Park, K. M. Cohen, and O. Simeone, "FewShot Calibration of Set Predictors via MetaLearned CrossValidationBased Conformal Prediction," arXiv preprint arXiv:2210.03067, 2022.
How Good is the Bayes Posterior in. F Wenzel, K Roth, B S Veeling, J Światkowski, L Tran, S Mandt, J Snoek, T Salimans, R Jenatton, S Nowozin, arXiv:2002.02405Deep Neural Networks Really?" arXiv preprint. F. Wenzel, K. Roth, B. S. Veeling, J.Światkowski, L. Tran, S. Mandt, J. Snoek, T. Salimans, R. Jenatton, and S. Nowozin, "How Good is the Bayes Posterior in Deep Neural Networks Really?" arXiv preprint arXiv:2002.02405, 2020.
N Ye, Z Zhu, R K Mantiuk, arXiv:1703.04379Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks. arXiv preprintN. Ye, Z. Zhu, and R. K. Mantiuk, "Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks," arXiv preprint arXiv:1703.04379, 2017.
 {'fraction_non_alphanumeric': 0.06803734711160184, 'fraction_numerical': 0.026959812192481267, 'mean_word_length': 4.422144725370532, 'pattern_counts': {'":': 0, '<': 0, '<?xml version=': 0, '>': 2, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 3, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 9, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}  {'abstract': "When used in complex engineered systems, such as communication networks, artificial intelligence (AI) models should be not only as accurate as possible, but also well calibrated. A wellcalibrated AI model is one that can reliably quantify the uncertainty of its decisions, assigning high confidence levels to decisions that are likely to be correct and low confidence levels to decisions that are likely to be erroneous. This paper investigates the application of conformal prediction as a general framework to obtain AI models that produce decisions with formal calibration guarantees. Conformal prediction transforms probabilistic predictors into set predictors that are guaranteed to contain the correct answer with a probability chosen by the designer. Such formal calibration guarantees hold irrespective of the true, unknown, distribution underlying the generation of the variables of interest, and can be defined in terms of ensemble or timeaveraged probabilities. In this paper, conformal prediction is applied for the first time to the design of AI for communication systems in conjunction to both frequentist and Bayesian learning, focusing on demodulation, modulation classification, and channel prediction.A. MotivationHow reliable is your artificial intelligence (AI)based model? The most common metric to design an AI model and to gauge its performance is the average accuracy. 
However, in applications in which AI decisions are used within a larger system, AI models should not only be as accurate as possible, but they should also be able to reliably quantify the uncertainty of their decisions. As an example, consider an unlicensed link that uses AI tools to predict the best channel to access out of four possible channels. A predictor that assigns the probability vector of [90%, 2%, 5%, 3%] to the possible channels predicts the same best channel the first as a predictor that outputs the probability vector [30%, 20%, 25%, 25%]. However, the latter predictor is less certain of its decision, and it may be preferable for the unlicensed link to refrain from accessing the channel when acting on less confident predictions, e.g., to avoid excessive interference to licensed links [1], [2].As in the example above, AI models typically report a confidence measure associated with each prediction, which reflects the model's selfevaluation of the accuracy of a decision. Notably, neural network models implement probabilistic predictors that produce a probability distribution across all possible values of the output variable.The selfreported model confidence, however, may not be a reliable measure of the true, unknown, accuracy of a prediction. In such situations, the AI model is said to be poorly calibrated.As illustrated in the example inFig. 1, accuracy and calibration are distinct criteria, with neither criterion implying the other. It is, for instance, possible to have an accurate predictor that consistently underestimates the accuracy of its decisions, and/or that is overconfident where making incorrect decisions (see fourth column inFig. 1). Conversely, one can have inaccurate predictions that estimate correctly their uncertainty (see fifth column inFig. 1).Deep learning models tend to produce either overconfident decisions [3], or calibration levels that rely on strong assumptions about the groundtruth, unknown, data generation mechanism [4][9]. 
This paper investigates the use of conformal prediction (CP) [10][12] as a framework to design provably wellcalibrated AI predictors, with distributionfree calibration guarantees that do not require making any assumption about the groundtruth data generation mechanism.B. Conformal Prediction for AIBased Wireless SystemsCP leverages probabilistic predictors to construct wellcalibrated set predictors. Instead of producing a probability vector, as in the examples inFig. 1, a set predictor outputs a subset of the output space, as exemplified inFig. 2. A set predictor is well calibrated if it contains the correct output with a predefined coverage probability selected by the system designer. For a wellcalibrated set predictor, the size of the prediction set for a given input provides a measure of the uncertainty of the decision. Set predictors with smaller average prediction size are said to be more efficient [10]. This paper investigates CP as a general mechanism to obtain AI models with formal calibration guarantees for communication systems. The calibration guarantees of CP hold irrespective of the true, unknown, distribution underlying the generation of the variables of interest, and are defined either in terms of ensemble averages[10]or", 'arxivid': '2212.07775', 'author': ['Student Member, IEEEKfir M Cohen kfir.cohen@kcl.ac.uk \nViterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel\n', 'Member, IEEESangwoo Park sangwoo.park@kcl.ac.uk \nViterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel\n', 'Fellow, IEEEOsvaldo Simeone osvaldo.simeone@kcl.ac.uk. 
\nViterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel\n', 'Shlomo Shamai \nViterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel\n', 'Life Fellow, IEEEShitz \nViterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel\n', 'Shitz)Shlomo Shamai \nViterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel\n'], 'authoraffiliation': ['Viterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel', 'Viterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel', 'Viterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel', 'Viterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel', 'Viterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel', 'Viterbi Faculty of Electrical and Computing Engineering\nTechnionIsrael Institute of Technology\nWC2R 2LS, 3200003London, HaifaU.K., Israel'], 'corpusid': 254685775, 'doi': '10.48550/arxiv.2212.07775', 'github_urls': ['https://github.com/kclip/cp4wireless'], 'n_tokens_mistral': 28619, 'n_tokens_neox': 24679, 'n_words': 14567, 'pdfsha': 'e1b9537f31657163227d033d1f602e2b61dde429', 'pdfurls': ['https://export.arxiv.org/pdf/2212.07775v1.pdf'], 'title': ['Calibrating AI Models for Wireless Communications via Conformal Prediction', 'Calibrating AI Models for Wireless Communications via Conformal Prediction'], 
'venue': []} 
arxiv  "\nBayesian Forecasting in the 21st Century: A Modern Review\n7 Dec 2022 December 8, 2022\n\nGael M (...TRUNCATED)  "{'fraction_non_alphanumeric': 0.06291440319133004, 'fraction_numerical': 0.034078067774918654, 'mea(...TRUNCATED)  "{'abstract': 'The Bayesian statistical paradigm provides a principled and coherent approach to prob(...TRUNCATED) 
arxiv  "\nNumerical Methods for Detecting Symmetries and Commutant Algebras\n\n\nSanjay Moudgalya \nDepartm(...TRUNCATED)  "{'fraction_non_alphanumeric': 0.0725471287517388, 'fraction_numerical': 0.035194019406207595, 'mean(...TRUNCATED)  "{'abstract': 'For families of Hamiltonians defined by parts that are local, the most general defini(...TRUNCATED) 
arxiv  "\nOGNIAN KASSABOV\n1982\n\nOGNIAN KASSABOV\n\nTHE AXIOM OF COHOLOMORPHIC (2n + 1)SPHERES IN THE AL(...TRUNCATED)  "{'fraction_non_alphanumeric': 0.09292295628308175, 'fraction_numerical': 0.02577925896882964, 'mean(...TRUNCATED)  "{'abstract': 'In his book on Riemannian geometry [1] E. C a r t a n proved a characterization of a (...TRUNCATED) 
arxiv  "\n* )( * * ), A. Cerica( 1 ), C. DAuria( 1 )\n\n\nV Agostini \n; \nF Casaburo \n; \nG De Bonis \n; (...TRUNCATED)  "{'fraction_non_alphanumeric': 0.05532328633932912, 'fraction_numerical': 0.02975206611570248, 'mean(...TRUNCATED)  "{'abstract': 'Measurement of the cosmic ray flux by an ArduSiPMbased muon telescope in the framewo(...TRUNCATED) 
arxiv  "\nQball candidates for selfinteracting dark matter\nJun 2001 (April, 2001)\n\nAlexander Kusenko \(...TRUNCATED)  "{'fraction_non_alphanumeric': 0.06489413644635803, 'fraction_numerical': 0.04465355763682365, 'mean(...TRUNCATED)  "{'abstract': 'We show that nontopological solitons, known as Qballs, are promising candidates for(...TRUNCATED) 
arxiv  "\nBeam Dynamics in Independent Phased Cavities\n2022\n\nY K Batygin \nLos Alamos National Laborator(...TRUNCATED)  "{'fraction_non_alphanumeric': 0.06233668104398199, 'fraction_numerical': 0.05786607058527217, 'mean(...TRUNCATED)  "{'abstract': 'Linear accelerators containing the sequence of independently phased cavities with con(...TRUNCATED) 
arxiv  "\nAN EFFICIENT THRESHOLD DYNAMICS METHOD FOR TOPOLOGY OPTIMIZATION FOR FLUIDS\n\n\nHuangxin Chen \n(...TRUNCATED)  "{'fraction_non_alphanumeric': 0.0740163744240777, 'fraction_numerical': 0.048609499817693656, 'mean(...TRUNCATED)  "{'abstract': 'We propose an efficient threshold dynamics method for topology optimization for fluid(...TRUNCATED) 
arxiv  "\nOn the MISO Channel with Feedback: Can Infinitely Massive Antennas Achieve Infinite Capacity?\n\n(...TRUNCATED)  "{'fraction_non_alphanumeric': 0.09851615651152885, 'fraction_numerical': 0.048060873268945196, 'mea(...TRUNCATED)  "{'abstract': 'We consider communication over a multipleinput singleoutput (MISO) block fading cha(...TRUNCATED) 
Zyda
Zyda is a 1.3T-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations: it is at least comparable to, and potentially better than, the best openly available datasets, thanks to our meticulous post-processing pipeline. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T-token scale, or in combination with Fineweb or Dolma for multi-trillion-token training.
An early version of Zyda was used as the primary dataset for phase 1 pretraining of Zamba, a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a pretraining dataset.
Models trained on Zyda significantly outperform identical models of the Pythia suite trained on the Pile for 300B tokens.
Zyda also outperforms Dolma, RefinedWeb, and Fineweb on 1.4B models trained on 50B tokens of each dataset.
According to our evaluations, the non-starcoder variant of Zyda is the most performant per-token open dataset available on language tasks, and the Zyda starcoder variant ties with Fineweb.
These results are aggregate scores on classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) over training time for a 1.4B model trained on 50B tokens of each dataset.
How to download
Full dataset:
```python
import datasets

ds = datasets.load_dataset("Zyphra/Zyda", split="train")
```
Full dataset without StarCoder:
```python
import datasets

ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")
```
To download an individual component, pass its name in the `name` argument of `load_dataset()`:
- zyda_arxiv_only
- zyda_c4en_only
- zyda_peS2o_only
- zyda_pileuncopyrighted_only
- zyda_refinedweb_only
- zyda_slimpajama_only
- zyda_starcoder_only
Breakdown by component
| Component | Download size (parquet, GBs) | Documents (millions) | gpt-neox tokens (billions) |
| --- | --- | --- | --- |
| zyda_refinedweb_only | 1,712.4 | 920.5 | 564.8 |
| zyda_c4en_only | 366.7 | 254.5 | 117.5 |
| zyda_slimpajama_only | 594.7 | 142.3 | 242.3 |
| zyda_pileuncopyrighted_only | 189.4 | 64.9 | 82.9 |
| zyda_peS2o_only | 133.7 | 35.7 | 53.4 |
| zyda_arxiv_only | 8.3 | 0.3 | 4.7 |
| zyda_starcoder_only | 299.5 | 176.1 | 231.3 |
| Total | 3,304.7 | 1,594.2 | 1,296.7 |
Dataset Description
 Curated by: Zyphra
 Language(s) (NLP): Primarily English
 License: Open Data Commons License
Dataset Structure
Dataset fields:
- `text`: contains the actual text used for training
- `source`: the component the text is coming from
- `filtering_features`: precomputed values of different features that were used for filtering (converted to a JSON string)
- `source_other`: metadata from the source dataset (converted to a JSON string)
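Since `filtering_features` and `source_other` are stored as JSON strings, they need to be decoded before use. A minimal sketch of doing so — the record below is illustrative, with made-up values, not an actual row of the dataset:

```python
import json

# Illustrative record in the shape of the fields described above; values are made up.
record = {
    "text": "Example document text.",
    "source": "arxiv",
    "filtering_features": '{"fraction_non_alphanumeric": 0.07, "mean_word_length": 4.4}',
    "source_other": '{"arxivid": "0000.00000"}',
}

# Decode the JSON-string fields into dictionaries before using them.
features = json.loads(record["filtering_features"])
metadata = json.loads(record["source_other"])
print(features["mean_word_length"])  # 4.4
```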
Source Data
Zyda was drawn from seven component open datasets which are wellregarded in the community. These are:
Pile Uncopyrighted: https://huggingface.co/datasets/monology/pile-uncopyrighted
C4-en: https://huggingface.co/datasets/allenai/c4
peS2o: https://huggingface.co/datasets/allenai/peS2o
RefinedWeb: https://huggingface.co/datasets/tiiuae/falcon-refinedweb
SlimPajama: https://huggingface.co/datasets/cerebras/SlimPajama-627B
arxiv_s2orc_parsed: https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
Data Collection and Processing
Zyda was created using a two-stage post-processing pipeline consisting of filtering and deduplication.
For the filtering stage, we utilized a set of hand-crafted and tuned filters derived from a number of sources, such as C4, RedPajama, and Gopher, in addition to our own filters.
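The exact filters and thresholds are documented in the Zyda technical report; as a rough illustration of what such hand-crafted heuristics look like, here is a sketch in the style of the C4/Gopher rules, with made-up thresholds:

```python
def passes_heuristic_filters(text: str) -> bool:
    """Illustrative C4/Gopher-style quality checks; thresholds here are made up."""
    words = text.split()
    if not words:
        return False
    # Gopher-style check: reject documents with implausible average word lengths.
    mean_word_length = sum(len(w) for w in words) / len(words)
    if not (3.0 <= mean_word_length <= 10.0):
        return False
    # Symbol-heavy documents are usually markup or encoding noise.
    fraction_non_alnum = sum(
        not c.isalnum() and not c.isspace() for c in text
    ) / len(text)
    if fraction_non_alnum > 0.25:
        return False
    return True

print(passes_heuristic_filters("A short but ordinary English sentence."))  # True
print(passes_heuristic_filters("@@@ ### $$$ %%%"))  # False
```

A real pipeline combines many such checks (repetition ratios, stop-word counts, language identification) and tunes each threshold against held-out samples.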
For the deduplication stage, we used MinHash approximate deduplication: we deduplicated on 13-grams, used a MinHash signature size of 128, and filtered out documents above a Jaccard similarity of 0.4.
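The production pipeline approximates Jaccard similarity with MinHash signatures for scalability; the sketch below instead computes exact Jaccard similarity over word-level 13-grams, just to illustrate how the 0.4 threshold flags near-duplicates:

```python
def shingles(text: str, n: int = 13) -> set:
    """Word-level n-grams; Zyda deduplicates on 13-grams."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Exact Jaccard similarity (the pipeline approximates this with MinHash)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Two near-duplicate documents share most of their 13-grams.
doc1 = "the quick brown fox " * 20
doc2 = "the quick brown fox " * 20 + "with a different ending"
sim = jaccard(shingles(doc1), shingles(doc2))
print(sim > 0.4)  # True: above the 0.4 threshold, so flagged as a duplicate
```

Exact pairwise Jaccard is quadratic in the number of documents, which is why trillion-token pipelines rely on MinHash signatures and locality-sensitive hashing instead.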
For full details on our data processing, see the Zyda technical report and our dataset processing code.
Personal and Sensitive Information
As a language modelling dataset, it likely contains PII which has not been filtered out of the component datasets and which may have been missed by our own filters.
Bias, Risks, and Limitations
As a dataset comprised of open web scrapes, it is likely that it contains biased and toxic content.
Licensing Information
We are releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
Citation
If you use our dataset to train a model, please cite us at:
```
@misc{tokpanov2024zyda,
      title={Zyda: A 1.3T Dataset for Open Language Modeling},
      author={Yury Tokpanov and Beren Millidge and Paolo Glorioso and Jonathan Pilault and Adam Ibrahim and James Whittington and Quentin Anthony},
      year={2024},
      eprint={2406.01981},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```