{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:09.707916Z"
},
"title": "On Isotropy Calibration of Transformers",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Z\u00fcrich",
"location": {}
},
"email": "yue.ding@uzh.ch"
},
{
"first": "Karolis",
"middle": [],
"last": "Martinkus",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Z\u00fcrich",
"location": {}
},
"email": "martinkus@ethz.ch"
},
{
"first": "Dami\u00e1n",
"middle": [],
"last": "Pascual",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Z\u00fcrich",
"location": {}
},
"email": "damianp@ethz.ch"
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Z\u00fcrich",
"location": {}
},
"email": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wattenhofer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Z\u00fcrich",
"location": {}
},
"email": "wattenhofer@ethz.ch"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic-the embeddings are distributed in a narrow cone. Meanwhile, static word representations (e.g., Word2Vec or GloVe) have been shown to benefit from isotropic spaces. Therefore, previous work has developed methods to calibrate the embedding space of transformers in order to ensure isotropy. However, a recent study (Cai et al., 2021) shows that the embedding space of transformers is locally isotropic, which suggests that these models are already capable of exploiting the expressive capacity of their embedding space. In this work, we conduct an empirical evaluation of state-of-the-art methods for isotropy calibration on transformers and find that they do not provide consistent improvements across models and tasks. These results support the thesis that, given the local isotropy, transformers do not benefit from additional isotropy calibration.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic-the embeddings are distributed in a narrow cone. Meanwhile, static word representations (e.g., Word2Vec or GloVe) have been shown to benefit from isotropic spaces. Therefore, previous work has developed methods to calibrate the embedding space of transformers in order to ensure isotropy. However, a recent study (Cai et al., 2021) shows that the embedding space of transformers is locally isotropic, which suggests that these models are already capable of exploiting the expressive capacity of their embedding space. In this work, we conduct an empirical evaluation of state-of-the-art methods for isotropy calibration on transformers and find that they do not provide consistent improvements across models and tasks. These results support the thesis that, given the local isotropy, transformers do not benefit from additional isotropy calibration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The impressive performance of transformer models (Vaswani et al., 2017) across almost all areas of Natural Language Processing (NLP) has sparked indepth investigations of these models. A remarkable finding is that the contextual representations computed by transformers are strongly anistropic (Ethayarajh, 2019), i.e., they are unevenly distributed and localized in a narrow cone of the embedding space. This discovery, labeled as the representation degeneration problem by Gao et al. (2019) is surprising since it suggests that most of the expressive capacity of these high-dimensional spaces is neglected by transformers.",
"cite_spans": [
{
"start": 49,
"end": 71,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 475,
"end": 492,
"text": "Gao et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, previous work on static word representations, e.g., GloVE (Pennington et al., 2014) or Word2Vec (Mikolov et al., 2013) , established that * First three authors in alphabetic order isotropy is a desirable property in non-contextual embedding spaces (Mu and Viswanath, 2018) . Indeed, Mu and Viswanath (2018) and Liu et al. (2019a) showed that post-processing static word embeddings in order to increase isotropy improves their performance in downstream tasks. Based on these results, recent work has developed methods to correct the anisotropy of the contextual representations generated by transformers (Gao et al., 2019; Wang et al., 2019b; . These isotropy calibration methods have been reported to produce small gains in performance on some NLP tasks.",
"cite_spans": [
{
"start": 71,
"end": 96,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 109,
"end": 131,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 261,
"end": 285,
"text": "(Mu and Viswanath, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 296,
"end": 319,
"text": "Mu and Viswanath (2018)",
"ref_id": "BIBREF19"
},
{
"start": 324,
"end": 342,
"text": "Liu et al. (2019a)",
"ref_id": "BIBREF14"
},
{
"start": 616,
"end": 634,
"text": "(Gao et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 635,
"end": 654,
"text": "Wang et al., 2019b;",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, in a recent study, Cai et al. (2021) show that the space of contextual embeddings of transformers is locally isotropic. By analyzing low dimensional sub-spaces the authors identify isolated clusters and manifolds and argue that isotropy does exist in these manifolds. In the same line, Luo et al. (2021) and Kovaleva et al. (2021) find that in BERT (Devlin et al., 2019) almost all of the embeddings present large values in the same two components of the embedding vector. These large components distort our understanding of the embedding spaces by making all the representations have high cosine similarity. In this work, we perform an extensive empirical evaluation of isotropy calibration methods across different tasks and models to determine if they provide consistent improvements. Our results question the utility of isotropy calibration in transformers, implicitly supporting the argument that transformers do already benefit from local isotropy (Cai et al., 2021) .",
"cite_spans": [
{
"start": 28,
"end": 45,
"text": "Cai et al. (2021)",
"ref_id": "BIBREF3"
},
{
"start": 295,
"end": 312,
"text": "Luo et al. (2021)",
"ref_id": "BIBREF17"
},
{
"start": 317,
"end": 339,
"text": "Kovaleva et al. (2021)",
"ref_id": "BIBREF10"
},
{
"start": 358,
"end": 379,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 963,
"end": 981,
"text": "(Cai et al., 2021)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the appearance of the transformer architecture and its multiple variants, of which BERT (Devlin et al., 2019) stands out as the most researched model, a lot of effort has been devoted to understanding their inner workings (Rogers et al., 2020) . Unlike static word embeddings such as GloVE or Word2Vec, transformers build contextual embed-dings, i.e., dynamic representations that aggregate information from other context words. These representations have sparked a lot of research interest. Wu et al. (2020) showed that different transformer architectures produce similar contextual representations. Chronis and Erk (2020) Following the discovery of anisotropy in transformers (Gao et al., 2019; Ethayarajh, 2019) , different isotropy calibration methods have been developed to correct this phenomenon. Gao et al. (2019) and Zhang et al. (2020) introduced regularization objectives that affect the embedding distances. Zhou et al. (2021) presented a module inspired by batch-norm that regularizes the embeddings towards isotropic representations. Wang et al. (2019b) proposed to control the singular value decay of the output layer of transformers and used normalizing flows to map transformer embeddings to an isotropic space. However, Cai et al. (2021) show that contextual representations are locally isotropic and suggest that this property allows transformers to exploit their full expressive capacity, questioning the utility of isotropy calibration.",
"cite_spans": [
{
"start": 94,
"end": 115,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 228,
"end": 249,
"text": "(Rogers et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 498,
"end": 514,
"text": "Wu et al. (2020)",
"ref_id": "BIBREF27"
},
{
"start": 607,
"end": 629,
"text": "Chronis and Erk (2020)",
"ref_id": "BIBREF4"
},
{
"start": 684,
"end": 702,
"text": "(Gao et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 703,
"end": 720,
"text": "Ethayarajh, 2019)",
"ref_id": "BIBREF6"
},
{
"start": 810,
"end": 827,
"text": "Gao et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 832,
"end": 851,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF28"
},
{
"start": 926,
"end": 944,
"text": "Zhou et al. (2021)",
"ref_id": "BIBREF31"
},
{
"start": 1054,
"end": 1073,
"text": "Wang et al. (2019b)",
"ref_id": "BIBREF25"
},
{
"start": 1244,
"end": 1261,
"text": "Cai et al. (2021)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The output distribution of transformers is typically parameterized as a softmax function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "P (Y i = y i |h i ) = exp(h T i W I(y i ) ) N j=1 exp(h T i W j ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "where W \u2208 R N \u00d7d is the output weight matrix, d is the embedding dimension, N is the output size, y i is the i-th output, I(y i ) is the index of y i and h is the contextual embedding produced by the model. Since this constitutes a shared space between model embeddings h \u2208 H and output embeddings, isotropy at the output distribution can be enforced by calibrating either H or W . We experiment with three prominent methods for isotropy calibration on transformers: Gao et al. (2019) introduce a simple regularization term that minimizes the cosine similarity between any two output embeddings in order to increase the aperture of the cone that contains the embeddings. This regularization term is given by:",
"cite_spans": [
{
"start": 467,
"end": 484,
"text": "Gao et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "Cosine Regularization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "R cos = \u03bb c 1 |V| 2 n i n j =i\u0175 T i\u0175 j ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "where w i is the embedding of the i-th token in the vocabulary V,\u0175 = w ||w|| and \u03bb c is the regularization constant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
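To make the cosine regularization concrete, the following is a minimal PyTorch sketch of the term R_cos described above, written against the shared output embedding matrix. It is an illustrative re-implementation, not the authors' code: the tensor name `W_out`, the default weight `lambda_c`, and the usage line at the end are hypothetical, and the double sum over token pairs is computed in matrix form.

```python
import torch

def cosine_regularizer(W_out: torch.Tensor, lambda_c: float = 1.0) -> torch.Tensor:
    """Sketch of R_cos (Gao et al., 2019): average pairwise cosine similarity
    between output embeddings, added to the task loss during fine-tuning.

    W_out: (N, d) output embedding matrix, one row per vocabulary token.
    """
    w_hat = torch.nn.functional.normalize(W_out, dim=-1)  # \hat{w}_i = w_i / ||w_i||
    sim = w_hat @ w_hat.T                                  # all pairwise cosine similarities
    n = W_out.shape[0]
    off_diag = sim.sum() - sim.diagonal().sum()            # drop the i == j terms
    return lambda_c * off_diag / (n ** 2)

# Hypothetical usage with a Hugging Face model:
# loss = task_loss + cosine_regularizer(model.get_output_embeddings().weight, lambda_c=0.1)
```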
{
"text": "Spectrum Control. Wang et al. (2019b) increase isotropy by mitigating the fast decay of the singular value distribution of the output matrix W . They decompose W using Singular Value Decomposition (SVD), such that W = U \u03a3V T , where \u03a3 \u2208 R d\u00d7d is the diagonal matrix of singular values. Then, they add a regularization term to guide the singular value distribution towards a prespecified slow-decaying prior distribution. This term spreads the variance away from the first few dominating singular values, increasing the isotropy of the space. They propose the following two regularization terms:",
"cite_spans": [
{
"start": 18,
"end": 37,
"text": "Wang et al. (2019b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "R pol (\u03a3) = \u03bb p d k=1 (\u03c3 k \u2212 c 1 k \u03b3 ) 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "for polynomial singular value decay; and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "R exp (\u03a3) = \u03bb e d k=1 (\u03c3 k \u2212 c 1 exp(\u2212c 2 k \u03b3 )) 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "for exponential decay, where \u03bb e , \u03bb p , c 1 and c 2 are regularization constants, \u03c3 k is the k-th largest singular value and \u03b3 is a parameter which controls the rate of singular value decay.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
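As an illustration of spectrum control, the sketch below computes both regularization terms from the singular values of the output matrix. It is not the authors' implementation: the names `W_out`, `c1`, `c2`, `gamma`, `lam_p` and `lam_e` are hypothetical placeholders, and the exponents follow the formulas exactly as written above, so the constants must be chosen so that the prior actually decays.

```python
import torch

def spectrum_regularizers(W_out: torch.Tensor,
                          c1: float = 1.0, c2: float = 0.1, gamma: float = 1.0,
                          lam_p: float = 1.0, lam_e: float = 1.0):
    """Sketch of the polynomial (R_pol) and exponential (R_exp) terms of
    Wang et al. (2019b): pull the singular values of W toward a prescribed
    slow-decaying prior instead of letting the first few values dominate."""
    sigma = torch.linalg.svdvals(W_out)   # singular values of W, in descending order
    k = torch.arange(1, sigma.numel() + 1, dtype=sigma.dtype, device=sigma.device)
    r_pol = lam_p * ((sigma - c1 * k ** gamma) ** 2).sum()
    r_exp = lam_e * ((sigma - c1 * torch.exp(-c2 * k ** gamma)) ** 2).sum()
    return r_pol, r_exp

# Hypothetical usage: add r_pol (or r_exp) to the fine-tuning loss.
```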
{
"text": "Flow Model. propose a method that leverages normalizing flows to learn an invertible mapping f \u22121 \u03c6 between the embedding space of the transformer model and an isotropic (Gaussian) space Z. First, an invertible flow model (Kingma and Dhariwal, 2018) f \u03c6 is trained to generate transformer embedding vectors h from Gaussian noise z:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "z \u223c p Z (z), h = f \u03c6 (z) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
{
"text": "Then, the model f \u03c6 is inverted to map transformer embeddings h to the new (and isotropic) output embedding space Z. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isotropy Calibration Methods",
"sec_num": "3"
},
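The sketch below illustrates the flow idea with the standard change-of-variables objective: an invertible map f_phi is fitted so that embeddings pushed through its inverse look standard Gaussian. It uses a single affine transformation purely for illustration, not the Glow architecture used in practice; the toy dimension `d`, the random stand-in batch `h`, and all hyperparameters are assumptions.

```python
import math
import torch

d = 16                                         # toy embedding dimension (hypothetical)
A = torch.nn.Parameter(torch.eye(d))           # invertible weight of the affine flow f_phi
b = torch.nn.Parameter(torch.zeros(d))
opt = torch.optim.Adam([A, b], lr=1e-3)

def neg_log_likelihood(h: torch.Tensor) -> torch.Tensor:
    """-log p_H(h) under h = f_phi(z) = A z + b with z ~ N(0, I),
    via the change-of-variables formula."""
    z = torch.linalg.solve(A, (h - b).T).T          # z = f_phi^{-1}(h) = A^{-1}(h - b)
    log_det = torch.linalg.slogdet(A).logabsdet     # log |det A| (Jacobian of f_phi)
    log_pz = -0.5 * (z ** 2).sum(dim=-1) - 0.5 * d * math.log(2 * math.pi)
    return -(log_pz - log_det).mean()

h = torch.randn(256, d) @ torch.randn(d, d)    # stand-in for anisotropic embeddings
for _ in range(200):
    opt.zero_grad()
    neg_log_likelihood(h).backward()
    opt.step()
# After training, z = f_phi^{-1}(h) lives in an (approximately) isotropic Gaussian space.
```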
{
"text": "We evaluate the impact of each of these calibration methods on state-of-the-art transformer models in three prominent areas of Natural Language Processing: language understanding, machine translation, and summarization. For all of the models, we use the implementation and fine-tuning parameters from HuggingFace (Wolf et al., 2020) (cf. Appendix B). We run each experiment three times and report the mean and standard deviation. Finetuning time is reported on a Nvidia Titan RTX GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "To characterize the isotropy of the output embedding space we adopt the I 1 and I 2 isotropy measures from (Wang et al., 2019b) , with I 1 (W ) \u2208 [0, 1] and I 2 (W ) \u2265 0. Larger I 1 (W ) and smaller I 2 (W ) indicate more isotropic embeddings (cf. App. A for details).",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We consider three representative transformer models with different sizes, BERT-base (Devlin et al., 2019) , RoBERTa (Liu et al., 2019b) , and Distil-BERT . We evaluate these models on the development set of GLUE (Wang et al., 2019a) , a well-known benchmark for language understanding that consists of nine different tasks. Due to the high computational cost of flow calibration and the large number of tasks, we apply this method only on BERT to save resources.",
"cite_spans": [
{
"start": 84,
"end": 105,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 116,
"end": 135,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF16"
},
{
"start": 212,
"end": 232,
"text": "(Wang et al., 2019a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Understanding",
"sec_num": "4.1"
},
{
"text": "In Table 1 we report the performance per task of the calibrated and uncalibrated models. We observe the same pattern for all three models. In the overwhelming majority of cases, the calibrated models perform comparably to or worse than the uncalibrated ones, with calibration improving performance with statistical significance (p < 0.05, two-sample t-test) only in RoBERTa for WNLI with exponential decay and MNLI mismatched with cosine regularization. More specifically, cosine regularization and flow calibration (in BERT) do not affect performance much, while spectrum control in some cases produces severe performance degradation or even prevents learning, e.g., CoLA and STS-B. Furthermore, flow calibration adds a large training overhead, requiring on average 4.2 times more time per training epoch.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Language Understanding",
"sec_num": "4.1"
},
{
"text": "These results reveal that no isotropy calibration method yields consistently better performance than the uncalibrated models in language understanding tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Understanding",
"sec_num": "4.1"
},
{
"text": "We test multilingual BART (M-BART) on English-Romanian and German-English WMT16 (Bojar et al., 2016) translation datasets. In Table 2 we report BLUE scores, compute time, and the isotropy metrics, for the uncalibrated and calibrated models. To reduce the high computational cost of flow calibration, we apply this method only on a reduced version of 50 000 samples for both tasks, English-Romanian and German-English translation. As a reference, we also provide the scores of the uncalibrated model on the small datasets. We find, that while cosine regularization does not significantly affect either BLEU scores or isotropy metrics, both variants of spectrum control improve isotropy but produce a performance degradation of over 3 and 5 BLEU points in the English-Romanian and German-English tasks respectively, while requiring 25% to 50% more computation time. On the other hand, flow calibration yields comparable BLEU score to the uncalibrated model but requires on average 10.5 times more computation per epoch. These results suggest a negative and counter-intuitive relation between isotropy and downstream performance: when isotropy increases, performance decreases. We observe a similar trend for language understanding in Appendix C. Overall, and in line with the results in the previous section, isotropy calibration in machine translation tends to degrade performance and increase the computational budget.",
"cite_spans": [
{
"start": 80,
"end": 100,
"text": "(Bojar et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.2"
},
{
"text": "3 EN-RO DE-EN Model BLEU (\u2191) I 1 (\u2191) I 2 (\u2193) Time (min) BLEU (\u2191) I 1 (\u2191) I 2 (\u2193) Time (min) M-BART",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.2"
},
{
"text": "We evaluate BART on the CNN/DM summarization task (Hermann et al., 2015); again we use a reduced dataset (20 000 articles) for flow calibration. The results in Table 3 show that none of the calibrated models performs significantly better than their uncalibrated counterparts in terms of ROUGE score (Lin, 2004) (cf. Appendix D) . Cosine regularization does not affect performance nor isotropy, while spectrum control improves isotropy (I 1 and I 2 ) at the cost of a small performance drop. The flow model performs comparably to uncalibrated BART but requires 5.5 times more computation. Overall, we find no evidence that isotropy calibration provides gains in summarization.",
"cite_spans": [
{
"start": 299,
"end": 327,
"text": "(Lin, 2004) (cf. Appendix D)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summarization",
"sec_num": "4.3"
},
{
"text": "Our extensive evaluation shows that none of the considered isotropy calibration methods produce consistent improvements over the uncalibrated models across tasks, domains and architectures. In fact, we observe a negative relation between isotropy calibration and downstream performance. The most aggressive method, i.e., spectrum control, produces the largest improvement in isotropy Table 3 : ROUGE-1 score, isotropy (I 1 and I 2 ), and fine-tuning time per epoch with different calibration methods on BART for summarization. Due to computational cost, the flow calibration method was tested on a smaller version of the dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 391,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "metrics as well as the most significant performance drop. On the other hand, the effect of cosine regularization and flow calibration is small in both, isotropy and performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "According to Cai et al. (2021) , the local isotropy of the embedding space of transformers may enable them to exploit their full expressive capacity. Furthermore, concurrent findings by Luo et al. (2021) and Kovaleva et al. (2021) reveal that certain components of the contextual embeddings consistently present very large magnitudes, which distort the cosine distances in the embedding space and questions their anisotropy. This could explain why additional isotropy calibration does not consistently improve the performance of transformers in downstream tasks.",
"cite_spans": [
{
"start": 13,
"end": 30,
"text": "Cai et al. (2021)",
"ref_id": "BIBREF3"
},
{
"start": 186,
"end": 203,
"text": "Luo et al. (2021)",
"ref_id": "BIBREF17"
},
{
"start": 208,
"end": 230,
"text": "Kovaleva et al. (2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In light of our results, we discourage isotropy calibration of transformers as a means of improving downstream performance. However, we believe that further investigation of the embedding space of transformers may be beneficial to increase our ability to interpret these models and improve their architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "To characterize the isotropy of the output embedding space we adopt the I 1 and I 2 isotropy measures from (Wang et al., 2019b) .is based on the observation by (Arora et al., 2016) , that the partition function Z(v) = n i=1 exp(v T w i ) should be close to a constant for any unit vector v if the embedding matrix W is isotropic. Here, we abuse notation and w i \u2208 W is the i-th row of the embedding matrix W . Following (Mu and Viswanath, 2018) we use the set of eigenvectors of W T W as V . The second measureis the sample standard deviation of the partition function Z(v) normalized by its averageZ(v). This way, I 1 (W ) \u2208 [0, 1] and I 2 (W ) \u2265 0. Larger I 1 (W ) and smaller I 2 (W ) indicate more isotropic embeddings.",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF25"
},
{
"start": 160,
"end": 180,
"text": "(Arora et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 420,
"end": 444,
"text": "(Mu and Viswanath, 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Isotropy Metrics",
"sec_num": null
},
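For concreteness, here is a minimal sketch of the two measures as described above: Z(v) is evaluated at the eigenvectors of W^T W, I_2 is the sample standard deviation of Z(v) normalized by its average, and I_1 is taken as the ratio between the smallest and largest values of Z(v), the form used by Mu and Viswanath (2018) and Wang et al. (2019b); since the exact expression for I_1 is not reproduced above, that ratio is an assumption here, as are the matrix `W` and the example inputs.

```python
import torch

def isotropy_measures(W: torch.Tensor):
    """I_1(W) in [0, 1] (larger = more isotropic) and I_2(W) >= 0 (smaller = more isotropic).

    Z(v) = sum_i exp(v^T w_i), evaluated at the eigenvectors v of W^T W.
    Assumes I_1 = min_v Z(v) / max_v Z(v), following Mu and Viswanath (2018).
    """
    eigvecs = torch.linalg.eigh(W.T @ W).eigenvectors   # columns are the eigenvectors v
    Z = torch.exp(W @ eigvecs).sum(dim=0)               # Z(v) for every eigenvector
    I1 = Z.min() / Z.max()
    I2 = Z.std() / Z.mean()
    return I1.item(), I2.item()

# Example: embeddings squeezed into a narrow cone vs. roughly isotropic embeddings.
aniso = torch.randn(1000, 64) * 0.1 + 3.0   # shared large offset -> narrow cone
iso = torch.randn(1000, 64)
print(isotropy_measures(aniso), isotropy_measures(iso))
```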
{
"text": "For all the models used in his work we use the implementation from HuggingFace and follow their instructions for the hyperparameters. In particular, we use the following configurations:BERT and DistilBERT. Learning rate 2e \u22125 without scheduling, batch size 32, 3 training epochs for all GLUE tasks except for MRPC and WNLI, for which we train during 5 epochs.RoBERTa. Learning rate of 1e \u22125 for all GLUE tasks except for SST-2 and STS-B, for which the learning rate is set to 1e \u22125 , same number of epochs as for BERT and DistilBERT, batch size of 32.M-BART and BART. Learning rate of 3e \u22125 with polynomial decay, batch size 48, and 5 training epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Model Hyperparameter Configuration",
"sec_num": null
},
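As an illustration only (not the authors' training scripts), the BERT/DistilBERT GLUE settings listed above correspond roughly to the following Hugging Face `TrainingArguments`; the output directory and the choice of a constant learning-rate schedule to express "without scheduling" are assumptions.

```python
from transformers import TrainingArguments

# Sketch of the BERT / DistilBERT GLUE configuration from Appendix B:
# learning rate 2e-5 without scheduling, batch size 32, 3 epochs (5 for MRPC and WNLI).
args = TrainingArguments(
    output_dir="glue-finetuning",     # hypothetical path
    learning_rate=2e-5,
    lr_scheduler_type="constant",     # "without scheduling"
    per_device_train_batch_size=32,
    num_train_epochs=3,               # 5 for MRPC and WNLI
)
```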
{
"text": "Here, in Table 4 , we present the isotropy scores obtained in our evaluation of GLUE with BERT, RoBERTa, and DistilBERT, which were not included in the main text due to lack of space. The isotropy metrics I 1 and I 2 show the opposite trend to the performance metrics. An improvement in isotropy reflects a decrease in downstream performance. This way, we see that across models and tasks, cosine regularization and flow calibration (for BERT) have a small impact on isotropy and that the performance of the models calibrated with these techniques is close to the that of the uncalibrated models. On the other hand, spectrum control produces a very significant increase in isotropy, with many tasks reaching a I 1 of 1.00; while in Table 1 we see how it produces strong performance degradation. This, further suggests a negative relation between isotropy and the downstream performance of transformers.BERT 0.91 \u00b10.01 0.4 \u00b10 0.91 \u00b10.01 0.38 \u00b10.01 0.91 \u00b10.01 0.39 \u00b10.01 +Cosreg 0.91 \u00b10.2 0.39 \u00b10.02 0.92 \u00b10.01 0.39 \u00b10.2 0.91 \u00b10.01 0.39 \u00b10.01 +Spectrum-Pol 1.00 \u00b10 0.007 \u00b10.003 1.00 \u00b10 7e \u22124 \u00b13e \u22124 1.00 \u00b10 6e \u22124 \u00b11e \u22124 +Spectrum-Exp 0.99 \u00b10.01 0.02 \u00b10.02 1.00 \u00b10 6e \u22124 \u00b12e \u22124 1.00 \u00b10 7e \u22124 \u00b13e \u22124 +Flow 0.92 \u00b10.01 0.40 \u00b10 0.91 \u00b10.01 0.40 \u00b10 0.91 \u00b10.01 0.39 \u00b10.01 RoBERTa 0.91 \u00b10.01 0.39 \u00b10.01 0.92 \u00b10.01 0.39 \u00b10.01 0.91 \u00b10.01 0.40 \u00b10.01 +Cosreg 0.92 \u00b10.01 0.40 \u00b10.01 0.91 \u00b10.01 0.39 \u00b10.01 0.91 \u00b10.01 0.40 \u00b10.01 +Spectrum-Pol 1.00 \u00b10 0.008 \u00b10.002 1.00 \u00b10 5e \u22124 \u00b14e \u22124 1.00 \u00b10 5e \u22124 \u00b12e \u22124 +Spectrum-Exp 1.00 \u00b10 0.005 \u00b10.004 1.00 \u00b10 1e \u22124 \u00b12e \u22124 1.00 \u00b10 6e \u22124 \u00b14e \u22124DistilBERT 0.91 \u00b10.01 0.38 \u00b10.01 0.92 \u00b10.01 0.39 \u00b10.01 0.92 \u00b10.01 0.38 \u00b10.01 +Cosreg 0.91 \u00b10.01 0.39 \u00b10.01 0.92 \u00b10.01 0.38 \u00b10.01 0.92 \u00b10.01 0.38 \u00b10.01 +Spectrum-Pol 1.00 \u00b10.01 0.012 \u00b10.016 1.00 \u00b107e \u22124 \u00b15e \u22124 1.00 \u00b10 11e \u22124 \u00b19e \u22124 +Spectrum-Exp 1.00 \u00b10.01 0.009 \u00b10.010 1.00 \u00b107e \u22124 \u00b15e \u22124 1.00 \u00b10 11e \u22124 \u00b19e \u22124 2e \u22124 \u00b11e \u22124 1.00 \u00b10 1e \u22124 \u00b12e \u22124 1.00 \u00b10 0.002 \u00b10 +Spectrum-Exp 1.00 \u00b103e \u22124 \u00b12e \u22124 1.00 \u00b10 2e \u22124 \u00b13e \u22124 1.00 \u00b10 13e \u22124 \u00b16e \u22124 +Flow 0.92 \u00b10.01 0.39 3e \u22124 \u00b12e \u22124 1.00 \u00b10 3e \u22124 \u00b11e \u22124 1.00 \u00b10 7e \u22124 \u00b13e \u22124 +Spectrum-Exp 1.00 \u00b103e \u22124 \u00b12e \u22124 1.00 \u00b10 3e \u22124 \u00b11e \u22124 1.00 \u00b10 2e \u22124 \u00b13e \u22124 1.00 \u00b10 1e \u22124 \u00b12e \u22124 1.00 \u00b10 9e \u22124 \u00b11e \u22124 +Spectrum-Exp 1.00 \u00b102e \u22124 \u00b13e \u22124 1.00 \u00b10 1e \u22124 \u00b12e \u22124 1.00 \u00b10 9e \u22124 \u00b11e \u22124",
"cite_spans": [
{
"start": 1148,
"end": 1153,
"text": "\u00b10.02",
"ref_id": null
},
{
"start": 1256,
"end": 1261,
"text": "\u00b10.01",
"ref_id": null
}
],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 4",
"ref_id": null
},
{
"start": 732,
"end": 739,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Isotropy Scores on GLUE",
"sec_num": null
},
{
"text": "ModelBERT 0.92 \u00b10.01 0.39 \u00b10.01 0.93 \u00b10.01 0.32 \u00b10 0.92 \u00b10.01 0.39 \u00b10.01 +Cosreg 0.92 \u00b10.01 0.39 \u00b10.01 0.93 \u00b10.01 0.32 \u00b10.01 0.9 \u00b10 0.39 \u00b10.01 +Spectrum-Pol 0.99 \u00b10.01 0.06 \u00b10.02 0.95 \u00b10.01 0.21 \u00b10.04 0.92 \u00b10.02 0.39 \u00b10.06 +Spectrum-Exp 1.00 \u00b105e \u22124 \u00b11e \u22124 0.98 \u00b10.01 0.08 ",
"cite_spans": [
{
"start": 173,
"end": 178,
"text": "\u00b10.02",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QNLI MNLI QQP",
"sec_num": null
},
{
"text": "Here we report the complete summarization results, including the ROUGE-2 and ROUGE-L metrics, omitted in the main text. Table 5 : Complete BART summariation performance, embedding space isotropy and fine-tuning time per epoch using different calibration methods on the CNN / DailyMail dataset. Due to computational cost, the flow calibration method was tested on a smaller version of the dataset with 20 000 articles.The performance in terms of ROUGE-2 and ROUGE-L scores follows the same patterns as ROUGE-1. Similar to language understanding and machine translation, increasing isotropy does not improve performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 127,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Complete Summarization Results",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A latent variable model approach to pmi-based word embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Risteski",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "385--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to pmi-based word embeddings. Transac- tions of the Association for Computational Linguis- tics, 4:385-399.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Findings of the 2016 conference on machine translation",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Aurelie",
"middle": [],
"last": "Neveol",
"suffix": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "131--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aure- lie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation, pages 131-198, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On identifiability in transformers",
"authors": [
{
"first": "Gino",
"middle": [],
"last": "Brunner",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Damian",
"middle": [],
"last": "Pascual",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Richter",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wattenhofer",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Watten- hofer. 2019. On identifiability in transformers. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Isotropy in the contextual embedding space: Clusters and manifolds",
"authors": [
{
"first": "Xingyu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Jiaji",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingyu Cai, Jiaji Huang, Yuchen Bian, and Kenneth Church. 2021. Isotropy in the contextual embed- ding space: Clusters and manifolds. In International Conference on Learning Representations.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "When is a bishop not like a rook? when it's like a rabbi! multiprototype bert embeddings for estimating semantic relationships",
"authors": [
{
"first": "Gabriella",
"middle": [],
"last": "Chronis",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 24th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "227--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriella Chronis and Katrin Erk. 2020. When is a bishop not like a rook? when it's like a rabbi! multi- prototype bert embeddings for estimating semantic relationships. In Proceedings of the 24th Confer- ence on Computational Natural Language Learning, pages 227-244.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "55--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh. 2019. How contextual are contextu- alized word representations? Comparing the geom- etry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 55-65, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Representation degeneration problem in training natural language generation models",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tieyan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tieyan Liu. 2019. Representation degenera- tion problem in training natural language generation models. In International Conference on Learning Representations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Kocisk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS, pages 1693-1701.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Glow: Generative flow with invertible 1x1 convolutions",
"authors": [
{
"first": "P",
"middle": [],
"last": "Durk",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dhariwal",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "31",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Durk P Kingma and Prafulla Dhariwal. 2018. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT busters: Outlier dimensions that disrupt transformers",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Kulshreshtha",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "3392--3405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Kovaleva, Saurabh Kulshreshtha, Anna Rogers, and Anna Rumshisky. 2021. BERT busters: Out- lier dimensions that disrupt transformers. In Find- ings of the Association for Computational Linguis- tics: ACL-IJCNLP 2021, pages 3392-3405, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On the sentence embeddings from pre-trained language models",
"authors": [
{
"first": "Bohan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "9119--9130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised post-processing of word vectors via conceptor negation",
"authors": [
{
"first": "Tianlin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Sedoc",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6778--6785",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianlin Liu, Lyle Ungar, and Joao Sedoc. 2019a. Un- supervised post-processing of word vectors via con- ceptor negation. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 6778-6785.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transac- tions of the Association for Computational Linguis- tics, 8:726-742.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Positional artefacts propagate through masked language model embeddings",
"authors": [
{
"first": "Ziyang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Artur",
"middle": [],
"last": "Kulmizev",
"suffix": ""
},
{
"first": "Xiao-Xi",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2021,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziyang Luo, Artur Kulmizev, and Xiao-Xi Mao. 2021. Positional artefacts propagate through masked lan- guage model embeddings. In ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word rep- resentations in vector space. In ICLR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "All-but-thetop: Simple and effective postprocessing for word representations",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Viswanath",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Mu and Pramod Viswanath. 2018. All-but-the- top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A primer in bertology: What we know about how bert works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "842--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works. Transactions of the Association for Computational Linguistics, 8:842-866.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 6000-6010.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019a. Glue: A multi-task benchmark and analysis platform for natural language understanding. In 7th Inter- national Conference on Learning Representations, ICLR 2019.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving neural language generation with spectrum control",
"authors": [
{
"first": "Lingxiao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ziniu",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Guangtao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Quanquan",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lingxiao Wang, Jing Huang, Kevin Huang, Ziniu Hu, Guangtao Wang, and Quanquan Gu. 2019b. Improv- ing neural language generation with spectrum con- trol. In International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Drame",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Similarity analysis of contextual word representation models",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4638--4655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wu, Yonatan Belinkov, Hassan Sajjad, Nadir Dur- rani, Fahim Dalvi, and James Glass. 2020. Similar- ity analysis of contextual word representation mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4638-4655.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Revisiting representation degeneration problem in language modeling",
"authors": [
{
"first": "Zhong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chongming",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Qinli",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Junming",
"middle": [],
"last": "Shao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "518--527",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhong Zhang, Chongming Gao, Cong Xu, Rui Miao, Qinli Yang, and Junming Shao. 2020. Revisit- ing representation degeneration problem in language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing: Findings, pages 518-527.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Quantifying the contextualization of word representations with semantic class probing",
"authors": [
{
"first": "Mengjie",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Dufter",
"suffix": ""
},
{
"first": "Yadollah",
"middle": [],
"last": "Yaghoobzadeh",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "1219--1234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengjie Zhao, Philipp Dufter, Yadollah Yaghoobzadeh, and Hinrich Sch\u00fctze. 2020. Quanti- fying the contextualization of word representations with semantic class probing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 1219-1234.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Of non-linearity and commutativity in bert",
"authors": [
{
"first": "Sumu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dami\u00e1n",
"middle": [],
"last": "Pascual",
"suffix": ""
},
{
"first": "Gino",
"middle": [],
"last": "Brunner",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wattenhofer",
"suffix": ""
}
],
"year": 2021,
"venue": "2021 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumu Zhao, Dami\u00e1n Pascual, Gino Brunner, and Roger Wattenhofer. 2021. Of non-linearity and commuta- tivity in bert. In 2021 International Joint Confer- ence on Neural Networks (IJCNN), pages 1-8.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Isobn: Fine-tuning bert with isotropic batch normalization",
"authors": [
{
"first": "Wenxuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Bill",
"middle": [
"Yuchen"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "35",
"issue": "",
"pages": "14621--14629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenxuan Zhou, Bill Yuchen Lin, and Xiang Ren. 2021. Isobn: Fine-tuning bert with isotropic batch normal- ization. Proceedings of the AAAI Conference on Ar- tificial Intelligence, 35(16):14621-14629.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "studied the similarity and relatedness of contextual representations in the embedding spaces of BERT, while Brunner et al. (2019) studied how identifiable the intermediate representations of BERT are with respect to the input. Zhao et al. (2020) quantified the contextual knowledge of BERT and Zhao et al. (2021) analyzed the embedding spaces of BERT in order to quantify the non-linearity of its layers.",
"uris": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td/><td>SST-2</td><td>MRPC</td><td>CoLA</td><td>RTE</td><td>WNLI</td><td>STS-B</td><td>QNLI</td><td colspan=\"2\">MNLI</td><td>QQP</td></tr><tr><td>Model</td><td>Accuracy</td><td>F1</td><td>Mat. corr.</td><td>Accuracy</td><td>Accuracy</td><td>Pearson corr.</td><td>Accuracy</td><td>Match acc.</td><td>Mismatch acc.</td><td>Accuracy</td></tr><tr><td>BERT</td><td>91.44</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "\u00b10.52 88.80\u00b10.99 53.16\u00b11.82 58.97\u00b11.82 53.52\u00b14.88 80.86 \u00b12.11 88.78\u00b10.57 81.02\u00b10.17 81.78\u00b10.40 89.31\u00b10.06 +Cosreg 90.71 \u00b11.00 88.17 \u00b10.38 46.94 \u00b14.29 56.43 \u00b15.16 50.23 \u00b14.95 78.23 \u00b12.19 89.58 \u00b10.19 81.20 \u00b10.41 82.04 \u00b10.21 89.26 \u00b10.10 +Spectrum-Pol 90.86 \u00b11.35 81.22 \u00b10 0 49.58 \u00b13.62 56.34 \u00b10 NaN 81.24 \u00b14.45 64.33 \u00b127.80 64.76 \u00b127.48 87.15 \u00b12.23 +Spectrum-Exp 91.21 \u00b10.37 81.22 \u00b10 0 50.90 \u00b13.45 56.34 \u00b10 NaN 86.42 \u00b10.42 62.43 \u00b124.97 63.12 \u00b125.20 89.16 \u00b10.45 +Flow 91.09 \u00b10.54 86.99 \u00b10.89 51.19 \u00b11.81 54.27 \u00b11.46 48.36 \u00b15.86 78.88 \u00b13.46 86.21 \u00b13.38 80.65 \u00b10.46 81.15 \u00b10.21 89.36 \u00b10.10 RoBERTa 92.97 \u00b10.63 85.35 \u00b18.52 53.67 \u00b13.32 53.19 \u00b10.55 54.46 \u00b10.81 83.10 \u00b12.87 91.00 \u00b10.46 85.16 \u00b10.28 85.19 \u00b10.15 89.85 \u00b10.13 +Cosreg 92.66 \u00b10.23 89.17 \u00b12.28 48.99 \u00b15.61 53.67 \u00b11.16 53.52 \u00b11.41 28.44 \u00b144.84 90.89 \u00b10.19 85.41 \u00b10.09 85.64 \u00b10.22 * 89.87 \u00b10.12 +Spectrum-Pol 88.08 \u00b10.99 81.22 \u00b10 0 52.71 \u00b10 57.28 \u00b11.62 * NaN 83.89 \u00b12.46 50.63 \u00b129.72 51.14 \u00b129.29 81.76 \u00b112.76 +Spectrum-Exp 90.71 \u00b11.09 81.22 \u00b10 0 52.95 \u00b10.42 56.34 \u00b10 NaN 82.25 \u00b13.14 84.46 \u00b10.51 84.77 0.41 80.95 \u00b113.89 DistilBERT 88.23 \u00b11.79 87.97 \u00b11.02 44.11 \u00b12.09 56.68 \u00b10.62 51.17 \u00b15.69 23.63 \u00b141.08 87.53 \u00b10.13 78.84 \u00b10.27 79.50 \u00b10.32 88.28 \u00b10.25 +Cosreg 88.53 \u00b11.55 87.88 \u00b11.36 43.13 \u00b10.85 58.24 \u00b11.78 52.11 \u00b12.44 -0.50 \u00b12.08 87.15 \u00b10.84 78.69 \u00b10.17 79.42 \u00b10.28 88.38 \u00b10.05 +Spectrum-Pol 88.80 \u00b10.37 81.22 \u00b10 0 54.15 \u00b12.50 55.87 \u00b10.81 NaN 85.47 \u00b10.96 78.39 \u00b10.17 79.13 \u00b10.05 88.41 \u00b10.43 +Spectrum-Exp 88.92 \u00b10.67 81.22 \u00b10 0 54.27 \u00b12.71 55.87 \u00b10.81 NaN 86.25 \u00b10.80 78.38 \u00b11.34 79.03 \u00b10.34 88.12 \u00b10.58"
},
"TABREF1": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Performance for different models and calibration methods on GLUE; * denotes significantly better performance than the corresponding uncalibrated model (p < 0.05, two-sample t-test). The NaN and 0 scores are caused by the model always predicting the same class."
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>M-BART (small dataset) 9.09 \u00b11.02 +Flow 8.57 \u00b12.52</td><td>0.88 \u00b10 0.89 \u00b10</td><td>0.60 \u00b10 9 \u00b10 0.60 \u00b10 95 \u00b10</td><td>11.61 \u00b12.25 0.88 \u00b10 10.93 \u00b10.70 0.88 \u00b10</td><td>0.60 \u00b10 9 \u00b10 0.60 \u00b10 96 \u00b11</td></tr></table>",
"type_str": "table",
"num": null,
"text": "26.15 \u00b10.08 0.88 \u00b10.01 0.60 \u00b10 108 \u00b10 22.81 \u00b10.35 0.89 \u00b10.01 0.60 \u00b10 176 \u00b10 +Cosreg 26.07 \u00b10.10 0.88 \u00b10.01 0.60 \u00b10 110 \u00b10 23.03 \u00b10.27 0.89 \u00b10.01 0.60 \u00b10 188 \u00b11 +Spectrum-Pol 22.94 \u00b10.18 1.00 \u00b10 0.02 \u00b10 176 \u00b12 16.27 \u00b10.06 1.00 \u00b10 0.02 \u00b10 265 \u00b10 +Spectrum-Exp 22.92 \u00b10.05 1.00 \u00b10 0.02 \u00b10 170 \u00b11 16.24 \u00b10.12 1.00 \u00b10 0.02 \u00b10 230 \u00b118"
},
"TABREF3": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Multilingual BART performance, isotropy (I 1 and I 2 ) and fine-tuning time per epoch with different calibration methods for English -Romanian and German -English translation. Due to computational cost, the flow method was tested only on a smaller version of the EN-RO dataset with 50 000 sentences."
},
"TABREF4": {
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">CNN / Daily Mail</td><td/></tr><tr><td>Model</td><td>R-1 (\u2191)</td><td>I 1 (\u2191)</td><td>I 2 (\u2193)</td><td>Time (min)</td></tr><tr><td>BART</td><td>38.21</td><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "\u00b10.05 0.95 \u00b10.01 0.25 \u00b10 246 \u00b18 +Cosreg 38.21 \u00b10.05 0.95 \u00b10.01 0.25 \u00b10 240 \u00b18 +Spectrum-Pol 37.36 \u00b10.08 0.99 \u00b10 0.04 \u00b10 245 \u00b120 +Spectrum-Exp 37.43 \u00b10.08 0.99 \u00b10 0.04 \u00b10 230 \u00b118 BART (small d.) 36.56 \u00b10.25 0.94 \u00b10 0.25 \u00b10 17 \u00b10 +Flow 36.15 \u00b10.30 0.94 \u00b10 0.25 \u00b10 95 \u00b12"
}
}
}
}