{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:09:38.896927Z"
},
"title": "Relating Neural Text Degeneration to Exposure Bias",
"authors": [
{
"first": "Ting-Rui",
"middle": [],
"last": "Chiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "tingruic@andrew.cmu.edu"
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": "y.v.chen@ieee.org"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This work focuses on relating two mysteries in neural-based text generation: exposure bias, and text degeneration. Despite the long time since exposure bias was mentioned and the numerous studies for its remedy, to our knowledge, its impact on text generation has not yet been verified. Text degeneration is a problem that the widely-used pre-trained language model GPT-2 was recently found to suffer from (Holtzman et al., 2020). Motivated by the unknown causation of the text degeneration, in this paper we attempt to relate these two mysteries. Specifically, we first qualitatively and quantitatively identify mistakes made before text degeneration occurs. Then we investigate the significance of the mistakes by inspecting the hidden states in GPT-2. Our results show that text degeneration is likely to be partly caused by exposure bias. We also study the self-reinforcing mechanism of text degeneration, explaining why the mistakes amplify. In sum, our study provides a more concrete foundation for further investigation on exposure bias and text degeneration problems.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This work focuses on relating two mysteries in neural-based text generation: exposure bias, and text degeneration. Despite the long time since exposure bias was mentioned and the numerous studies for its remedy, to our knowledge, its impact on text generation has not yet been verified. Text degeneration is a problem that the widely-used pre-trained language model GPT-2 was recently found to suffer from (Holtzman et al., 2020). Motivated by the unknown causation of the text degeneration, in this paper we attempt to relate these two mysteries. Specifically, we first qualitatively and quantitatively identify mistakes made before text degeneration occurs. Then we investigate the significance of the mistakes by inspecting the hidden states in GPT-2. Our results show that text degeneration is likely to be partly caused by exposure bias. We also study the self-reinforcing mechanism of text degeneration, explaining why the mistakes amplify. In sum, our study provides a more concrete foundation for further investigation on exposure bias and text degeneration problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One mythology in neural text generation is exposure bias (Bengio et al., 2015; Pomerleau, 1989; Thrun, 1995) . In the context of text generation, exposure bias refers to mistakes made by the model at the beginning of text generation, which may amplify, and lead the model to a state unseen in training time, and may thus cause misbehavior in the following generation. Phenomena related to exposure bias were first observed in (Pomerleau, 1989) in the self-driving vehicles field. After that, exposure bias was mainly discussed in the context of imitation learning (Thrun, 1995; Ross and Bagnell, 2010; Ross et al., 2011) . In 2015, Bengio et al. (2015) introduced it in the context of neural text generation. However, its impact on text generation is questionable from both the empirical and the theoretical perspectives. Empirically, despite the number of studies for its remedy (Bengio et al., 2015; Husz\u00e1r, 2015; Ranzato et al., 2016; Lamb et al., 2016; Yu et al., 2017; Wiseman and Rush, 2016; Schmidt, 2019; Zhang et al., 2019a) , phenomena resulted from exposure bias have not yet been explicitly identified. On the other hand, theories attained in the context of imitation learning may not be applicable to the above text generation tasks. For example, (Ross and Bagnell, 2010) shows a O(T 2 ) trend of cost with respect to the number of steps T in an episode. It implies that the cost grows quadratically when T is large. However, most of natural language process tasks, e.g. machine translation and image captioning, do not generate very long text. The impact of exposure bias is thus not clear for text generation tasks.",
"cite_spans": [
{
"start": 57,
"end": 78,
"text": "(Bengio et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 79,
"end": 95,
"text": "Pomerleau, 1989;",
"ref_id": "BIBREF14"
},
{
"start": 96,
"end": 108,
"text": "Thrun, 1995)",
"ref_id": null
},
{
"start": 426,
"end": 443,
"text": "(Pomerleau, 1989)",
"ref_id": "BIBREF14"
},
{
"start": 564,
"end": 577,
"text": "(Thrun, 1995;",
"ref_id": null
},
{
"start": 578,
"end": 601,
"text": "Ross and Bagnell, 2010;",
"ref_id": "BIBREF19"
},
{
"start": 602,
"end": 620,
"text": "Ross et al., 2011)",
"ref_id": "BIBREF20"
},
{
"start": 632,
"end": 652,
"text": "Bengio et al. (2015)",
"ref_id": "BIBREF0"
},
{
"start": 880,
"end": 901,
"text": "(Bengio et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 902,
"end": 915,
"text": "Husz\u00e1r, 2015;",
"ref_id": "BIBREF5"
},
{
"start": 916,
"end": 937,
"text": "Ranzato et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 938,
"end": 956,
"text": "Lamb et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 957,
"end": 973,
"text": "Yu et al., 2017;",
"ref_id": "BIBREF26"
},
{
"start": 974,
"end": 997,
"text": "Wiseman and Rush, 2016;",
"ref_id": "BIBREF25"
},
{
"start": 998,
"end": 1012,
"text": "Schmidt, 2019;",
"ref_id": null
},
{
"start": 1013,
"end": 1033,
"text": "Zhang et al., 2019a)",
"ref_id": "BIBREF27"
},
{
"start": 1260,
"end": 1284,
"text": "(Ross and Bagnell, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A younger mystery is the recently discussed enigma of text degeneration (Holtzman et al., 2020) . It refers to the phenomenon in which bland or strange repetitive texts may be generated when the likelihood is the objective of generation, for example, when some commonly used strategies, such as greedy decoding and beam-search decoding, are used. Especially, the prior work (Holtzman et al., 2020) observed such problems in GPT-2 (Radford et al.) , a pre-trained language model that has been shown useful in many NLP tasks (Radford, 2018; Zhang et al., 2019b; Petroni et al., 2019; Talmor et al., 2019; See et al., 2019) . Despite many attempts proposed to address this issue (Holtzman et al., 2020; Welleck et al., 2020; Li et al., 2019) , its root cause remains unknown.",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "(Holtzman et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 374,
"end": 397,
"text": "(Holtzman et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 430,
"end": 446,
"text": "(Radford et al.)",
"ref_id": null
},
{
"start": 523,
"end": 538,
"text": "(Radford, 2018;",
"ref_id": "BIBREF15"
},
{
"start": 539,
"end": 559,
"text": "Zhang et al., 2019b;",
"ref_id": "BIBREF28"
},
{
"start": 560,
"end": 581,
"text": "Petroni et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 582,
"end": 602,
"text": "Talmor et al., 2019;",
"ref_id": null
},
{
"start": 603,
"end": 620,
"text": "See et al., 2019)",
"ref_id": null
},
{
"start": 676,
"end": 699,
"text": "(Holtzman et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 700,
"end": 721,
"text": "Welleck et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 722,
"end": 738,
"text": "Li et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivated by the unknown issues, we wonder whether text degeneration can be connected to the well-known exposure bias. If text degeneration is the misbehavior caused by exposure bias, it actually provides us a perfect opportunity to identify the existence of exposure bias. One of misbehavior of text degeneration is the occurrence of repetitive loops. It is a phenomenon that a model tends to GraphQL is an interesting technology originating at Facebook. It is a query language that allows you to query a database and then query the database for the results.\\n \\n The query language is called QueryQL. It is a query language ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first saw Anki Overdrive, the company's follow-up to the original game, in the early 2000s. It was a game that was a bit of a hit, and it was a game that was a bit of a hit that was a bit of a hit that was a hit that ... Table 1 : Randomly sampled examples generated by GPT-2 by greedy decoding. The bold part are the text conditioned on, and the italic part are the text in the repetitive loop.",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "repeat a span of text during generation (an example is shown in Table 1 . This phenomenon is salient enough to be detected automatically, and occurs when greedy decoding strategy is used with high probability 1 . The easiness of spotting can help the identification of exposure bias. Therefore, this work aims at looking for the indications of exposure bias when repetitive loops are generated by the greedy decoding strategy. We will focus on GPT-2, because it is the only publicly available language model trained on a massive amount of data at the time this work is done, and is widely used by the community.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, this paper is the first work that attempts to relate text degeneration to exposure bias. We first conclude two necessary conditions of its occurrence based on the intuition of exposure bias in literature in Section 3.4. We then inspect the two necessary conditions qualitatively and quantitatively. In Section 4.1, we find that before text repeating starts, GPT-2 generates unnatural text. In Section 4.3, we show that the hidden states of GPT-2 deviate to an area less similar to the states generated by encoding real text. The above observations satisfy the intuition of exposure bias that mistakes are made in the early stage and are amplified afterward. According to the indications we discover, we conclude that exposure bias is likely to co-occur with repetitive loops. Finally, we investigate how the mistakes are amplified after repetitive loops occur in Section 5. We discover the self-reinforcing mechanism of text degeneration. The results provide a possible outline of how a model is trapped in repetitive loops. These findings should be helpful for future studies on exposure bias and remedies for text degeneration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 93% in our experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Imitation learning aims at imitating an expert policy \u03c0 * by learning from trajectories generated by the expert, namely finding the polic\u0177",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias in Imitation Learning",
"sec_num": "2.1"
},
{
"text": "\u03c0 = arg min \u03c0 E s\u223cd \u03c0 * I[\u03c0(s) = \u03c0 * (s)], (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias in Imitation Learning",
"sec_num": "2.1"
},
{
"text": "where d \u03c0 * is the distribution of states visited by the expert policy \u03c0 * . It is very similar to training a language model with maximum likelihood objective, and has succeeded in many applications (Pomerleau, 1989; Schaal, 1999; Muller et al., 2006; Ratliff et al., 2006) . However, it was mentioned in (Pomerleau, 1989 ) that when a model makes a mistake and thus encounters a state that the expert rarely encounters, it may fail to recover from the mistake. It was the first time the concept of exposure bias was mentioned. Similar issues were also considered in (Thrun, 1995; Daum\u00e9 et al., 2009) . Ross and Bagnell proved that the cost in a trajectory grows at the rate O(T 2 ) instead of O(T ) if mistakes are made with a non-zero probability. It can be seen as a theoretical analysis of exposure bias. Nevertheless, in the context of text generation, the total number of steps in a trajectory is finite and is usually not large. Therefore, it is still not clear how meaningful this growth rate of cost is for text generation tasks. In Ross and Bagnell (2010) ; Ross et al. (2011) , theoretically-grounded algorithms are proposed. However, they require the access of expert policy to annotate the trajectories generated by the learnt agent. It is generally not feasible in text generation tasks.",
"cite_spans": [
{
"start": 199,
"end": 216,
"text": "(Pomerleau, 1989;",
"ref_id": "BIBREF14"
},
{
"start": 217,
"end": 230,
"text": "Schaal, 1999;",
"ref_id": "BIBREF21"
},
{
"start": 231,
"end": 251,
"text": "Muller et al., 2006;",
"ref_id": "BIBREF11"
},
{
"start": 252,
"end": 273,
"text": "Ratliff et al., 2006)",
"ref_id": "BIBREF18"
},
{
"start": 305,
"end": 321,
"text": "(Pomerleau, 1989",
"ref_id": "BIBREF14"
},
{
"start": 567,
"end": 580,
"text": "(Thrun, 1995;",
"ref_id": null
},
{
"start": 581,
"end": 600,
"text": "Daum\u00e9 et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 1042,
"end": 1065,
"text": "Ross and Bagnell (2010)",
"ref_id": "BIBREF19"
},
{
"start": 1068,
"end": 1086,
"text": "Ross et al. (2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias in Imitation Learning",
"sec_num": "2.1"
},
{
"text": "Then the concept of exposure bias is introduced in the context of text generation by (Bengio et al., 2015; Ranzato et al., 2016) . Since then, there have been many methods proposed to tackle this problem (Bengio et al., 2015; Husz\u00e1r, 2015; Ranzato et al., 2016; Lamb et al., 2016; Yu et al., 2017; Wiseman and Rush, 2016; Schmidt, 2019; Zhang et al., 2019a; Wang and Sennrich, 2020) . They proposed their remedies based on the assumption that exposure bias is causing problems, and their approaches were justified by the improvement of performance when they are adopted. However, to our knowledge, He et al. (2019) is the only study attempting to verify the impact of exposure bias, where they proposed metrics for estimating the impact of exposure bias in models. Different from the prior work, this paper focuses on directly checking whether a specific phenomenon is the result of exposure bias.",
"cite_spans": [
{
"start": 85,
"end": 106,
"text": "(Bengio et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 107,
"end": 128,
"text": "Ranzato et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 204,
"end": 225,
"text": "(Bengio et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 226,
"end": 239,
"text": "Husz\u00e1r, 2015;",
"ref_id": "BIBREF5"
},
{
"start": 240,
"end": 261,
"text": "Ranzato et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 262,
"end": 280,
"text": "Lamb et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 281,
"end": 297,
"text": "Yu et al., 2017;",
"ref_id": "BIBREF26"
},
{
"start": 298,
"end": 321,
"text": "Wiseman and Rush, 2016;",
"ref_id": "BIBREF25"
},
{
"start": 322,
"end": 336,
"text": "Schmidt, 2019;",
"ref_id": null
},
{
"start": 337,
"end": 357,
"text": "Zhang et al., 2019a;",
"ref_id": "BIBREF27"
},
{
"start": 358,
"end": 382,
"text": "Wang and Sennrich, 2020)",
"ref_id": "BIBREF23"
},
{
"start": 598,
"end": 614,
"text": "He et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias in Text Generation",
"sec_num": "2.2"
},
{
"text": "The term neural text degeneration was first defined recently in Holtzman et al. (2020) , which focused on GPT-2. Similar phenomenon was also observed in LSTM language models (Strobelt et al., 2018) . Regarding its causation, Welleck et al. (2020) summarized three possible reasons about repetitive loops generated by GPT-2: i) The Transformer architecture of GPT-2 prefers repeating. ii) Repeating is an intrinsic property of human language. iii) The model is unable to model real language usage due to the fixed training corpora. However, none of them have been proven theoretically or verified empirically.",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "Holtzman et al. (2020)",
"ref_id": "BIBREF4"
},
{
"start": 174,
"end": 197,
"text": "(Strobelt et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Text Degeneration",
"sec_num": "2.3"
},
{
"text": "Before this work, this phenomenon has not been linked to exposure bias, and thus remedies different from those for exposure bias are proposed. Holtzman et al. 2020proposed sampling from the language model with nucleus sampling. Welleck et al. (2020) proposed to train neural language models with an unlikelihood as a regularization. Li et al. (2019) further applied unlikelihood training on dialogue tasks. Since in this work we discover the linkage between exposure bias and text degeneration, new approaches that specifically tackle exposure bias may be found effective for text degeneration in the future.",
"cite_spans": [
{
"start": 228,
"end": 249,
"text": "Welleck et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 333,
"end": 349,
"text": "Li et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Text Degeneration",
"sec_num": "2.3"
},
{
"text": "To better elaborate the investigation of the above problems, background knowledge and notations are briefly introduced here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Notations",
"sec_num": "3"
},
{
"text": "Considering that this paper focuses on analyzing the issues in text generation, we first define real passages as natural language and artificial passage as the generated language for following study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real and Artificial Passages",
"sec_num": "3.1"
},
{
"text": "Real Passages and Real Distribution Real passages and real distribution are related to training data. Given Y denoting the training set, a real passage y \u2208 Y is a sequence of tokens {y 1 , y 2 , \u2022 \u2022 \u2022 , y T }, and real distribution P Y is the distribution passages y \u2208 Y are drawn from, and it can be factorized as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real and Artificial Passages",
"sec_num": "3.1"
},
{
"text": "P Y (y) = P Y (y 1 ) T t=2 P Y (y t | y 1 , y 2 , \u2022 \u2022 \u2022 , y t\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real and Artificial Passages",
"sec_num": "3.1"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real and Artificial Passages",
"sec_num": "3.1"
},
{
"text": "A artificial passage\u0177 is a sequence of tokens",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Passages and Artificial Distribution",
"sec_num": null
},
{
"text": "{\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 T }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Passages and Artificial Distribution",
"sec_num": null
},
{
"text": "generated by a model. We denote the set of generated passages as\u0176 , where each\u0177 is generated based on the conditional probability,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Passages and Artificial Distribution",
"sec_num": null
},
{
"text": "P M (\u0177 t |\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Passages and Artificial Distribution",
"sec_num": null
},
{
"text": ", predicted by an autoregressive language model M such as GPT-2. We define artificial distribution P\u0176 as the distribution of \u0177 \u2208\u0176 detailed below. Note that P\u0176 could be different from P M , depending on the decoding strategy used. A decoding strategy is how a token y t is chosen based on the conditional probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Passages and Artificial Distribution",
"sec_num": null
},
{
"text": "P M (\u0177 t |\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Passages and Artificial Distribution",
"sec_num": null
},
{
"text": "In this work, we considered the greedy strategy and the sampling-based strategies, including the top-k candidates at each step (Fan et al., 2018) , nucleus sampling (Holtzman et al., 2020) . Details are included in the appendix.",
"cite_spans": [
{
"start": 127,
"end": 145,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 165,
"end": 188,
"text": "(Holtzman et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Passages and Artificial Distribution",
"sec_num": null
},
{
"text": "GPT-2 is a pre-trained language model constituted with L layers of Transformer blocks (Vaswani et al., 2017) . Considering that exposure bias is described as a general problem of neural text generation models, we pick GPT-2 as an example model for the study. When the tokens {y t } t=1,2,\u2022\u2022\u2022 ,T \u22121 , which we refer to as the conditioned passage, are fed in, we denote the states outputted by each layers as",
"cite_spans": [
{
"start": 86,
"end": 108,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "States of GPT-2",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[h (y) 1,1 , h (y) 1,2 , \u2022 \u2022 \u2022 , h (y) 1,T ] = transformer 1 (embedding([y 1 , \u2022 \u2022 \u2022 , y T ])), (3) [h (y) l,1 , h (y) l,2 , \u2022 \u2022 \u2022 , h (y) l,T ] = transformer l ([h (y) l\u22121,1 , \u2022 \u2022 \u2022 , h (y) l\u22121,T ]) \u2200l = 1, 2, ..., L.",
"eq_num": "(4)"
}
],
"section": "States of GPT-2",
"sec_num": "3.2"
},
{
"text": "It predicts the conditional probability as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "States of GPT-2",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y T | {y t } t=1...T \u22121 ) = softmax(MLP(h (\u0177) L,T \u22121 ).",
"eq_num": "(5)"
}
],
"section": "States of GPT-2",
"sec_num": "3.2"
},
{
"text": "We refer to real states as the states outputted when y \u223c Y is fed in, and artificial states as the states when\u0177 \u223c\u0176 is fed in. States of a token y t refer to the set of states h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "States of GPT-2",
"sec_num": "3.2"
},
{
"text": "(y) l,t l=1,2,\u2022\u2022\u2022 ,L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "States of GPT-2",
"sec_num": "3.2"
},
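{
"text": "As an illustration of the states defined above, the per-layer vectors h^{(y)}_{l,t} can be collected with the Hugging Face transformers library (which this work uses; see footnote 2). The following minimal sketch is ours; the 'gpt2' checkpoint and the example sentence are illustrative assumptions rather than the paper's exact configuration.\n\nimport torch\nfrom transformers import GPT2LMHeadModel, GPT2TokenizerFast\n\n# Load GPT-2 and encode a passage; output_hidden_states returns h_{l,t}.\ntokenizer = GPT2TokenizerFast.from_pretrained('gpt2')\nmodel = GPT2LMHeadModel.from_pretrained('gpt2').eval()\ninput_ids = tokenizer('GraphQL is an interesting technology.', return_tensors='pt').input_ids\nwith torch.no_grad():\n    out = model(input_ids, output_hidden_states=True)\n# out.hidden_states is a tuple of L+1 tensors (embedding output plus one per\n# Transformer block), each of shape (batch, T, hidden); hidden_states[l][0, t]\n# corresponds to h_{l,t} for the fed-in tokens.\nstates = torch.stack(out.hidden_states[1:])\nprint(states.shape)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "States of GPT-2",
"sec_num": "3.2"
},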
{
"text": "Let the time step at which a passage\u0177 starts to repeat be \u03c1, and the length of the repeated part be \u03bb. Then a passage\u0177, where a repetitive loop occurs, is of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repetitive Loops",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y =\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 \u03c1\u22121 ,\u0177 \u03c1 , \u2022 \u2022 \u2022\u0177 \u03c1+\u03bb , y \u03c1 , \u2022 \u2022 \u2022\u0177 \u03c1+\u03bb ,\u0177 \u03c1 , \u2022 \u2022 \u2022\u0177 \u03c1+\u03bb , \u2022 \u2022 \u2022",
"eq_num": "(6)"
}
],
"section": "Repetitive Loops",
"sec_num": "3.3"
},
{
"text": "We refer to the repeated part\u0177 \u03c1 , \u2022 \u2022 \u2022\u0177 \u03c1+\u03bb as a looping sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repetitive Loops",
"sec_num": "3.3"
},
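{
"text": "The loop structure in Eq. (6) can be detected mechanically. Below is a minimal sketch of such a detector (our own heuristic, not the authors' exact procedure): it looks for the smallest period such that the tail of the token sequence repeats, then extends the periodic region backwards to locate \u03c1; the returned period corresponds to the length of the looping sequence (\u03bb up to the indexing convention).\n\ndef find_repetitive_loop(tokens):\n    # tokens: list of token ids or strings; returns (rho, period) or None.\n    T = len(tokens)\n    for period in range(1, T // 2 + 1):\n        # Require the last `period` tokens to repeat the preceding `period` tokens.\n        if tokens[T - period:] != tokens[T - 2 * period:T - period]:\n            continue\n        # Extend the periodic region backwards to find rho, the loop's first step.\n        rho = T - 2 * period\n        while rho > 0 and tokens[rho - 1] == tokens[rho - 1 + period]:\n            rho -= 1\n        return rho, period\n    return None\n\n# Toy usage: the loop 'x y z' starts at position 2, so this prints (2, 3).\nprint(find_repetitive_loop(list('ab' + 'xyz' * 4)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repetitive Loops",
"sec_num": "3.3"
},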
{
"text": "In the literature, exposure bias was conceptually proposed (Bengio et al., 2015) , which is described as the discrepancy between the way the model is used during training and the way during inference. When training, at the time step t, the model objective is to maximize the probability of the correct token y t conditioning on the real past tokens",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "(Bengio et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias",
"sec_num": "3.4"
},
{
"text": "y 1 , y 2 , \u2022 \u2022 \u2022 , y t\u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias",
"sec_num": "3.4"
},
{
"text": "However, during inference,\u0177 t is predicted conditioning on the generated past to-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias",
"sec_num": "3.4"
},
{
"text": "kens\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias",
"sec_num": "3.4"
},
{
"text": ". Therefore, mistakes in the early stage may lead the model to a state unseen in training time, and errors may consequently amplify quickly. More explicitly, based on the description of bias in Bengio et al. (2015) , we summarize the necessary conditions as follow: If some misbehavior, such as repetitive loop, starts at time \u03c1 is the result of exposure bias, then the two indications must be observed:",
"cite_spans": [
{
"start": 194,
"end": 214,
"text": "Bengio et al. (2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias",
"sec_num": "3.4"
},
{
"text": "1. Mistakes are made in the early phase: In the context of text generation, qualitatively, it means the unnatural sequence is generated before time step \u03c1. Quantitatively, it means that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias",
"sec_num": "3.4"
},
{
"text": "P Y (\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 \u03c1\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias",
"sec_num": "3.4"
},
{
"text": ", the likelihood that the previous generated text is real, is low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exposure Bias",
"sec_num": "3.4"
},
{
"text": "Mistakes are significant to the model: The mistakes must be significant enough to lead the model to a state unseen in training time. Specifically, here we analyze the hidden states of GPT-2. We posit that, if some misbehavior is due to exposure bias, then the mistakes in the early stage should be significant enough to cause the model to generate an unseen state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "In this section, we investigate whether the conditions in Section 3.4 are satisfied when text degener-ation occurs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Text Degeneration to Exposure Bias",
"sec_num": "4"
},
{
"text": "As in Holtzman et al. (2020) , we focus on the pretrained language model GPT-2 2 . GPT-2 is trained on the WebText dataset. We use the training, validation and testing subsets of WebText released by OpenAI 3 . When generating passages, first 50 tokens from passages y \u2208 Y are given as the condition. Therefore, for different conditions y, even if the decoding strategy is deterministic, the generated passages\u0177 could be different. We empirically observe that repetitive loops tend to occur later when the number of conditioned tokens is greater. We choose to condition on 50 tokens, so the sequences before repetitive loops are lengthy enough for analysis while the computation power required is affordable.",
"cite_spans": [
{
"start": 6,
"end": 28,
"text": "Holtzman et al. (2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.1"
},
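{
"text": "For concreteness, a minimal sketch of this generation setup with the Hugging Face transformers library is shown below: condition GPT-2 on the first 50 tokens of a real WebText passage and continue with greedy decoding. The checkpoint size, maximum length, and placeholder passage are illustrative assumptions, not necessarily the paper's exact settings.\n\nimport torch\nfrom transformers import GPT2LMHeadModel, GPT2TokenizerFast\n\ntokenizer = GPT2TokenizerFast.from_pretrained('gpt2')\nmodel = GPT2LMHeadModel.from_pretrained('gpt2').eval()\n\nreal_passage = '...'  # a passage y from the WebText subsets (placeholder)\nprefix_ids = tokenizer(real_passage, return_tensors='pt').input_ids[:, :50]\n\nwith torch.no_grad():\n    generated = model.generate(\n        prefix_ids,\n        max_length=512,               # total length, including the 50-token prefix\n        do_sample=False,              # greedy decoding\n        pad_token_id=tokenizer.eos_token_id,\n    )\nprint(tokenizer.decode(generated[0]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.1"
},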
{
"text": "We inspect the first condition about exposure bias by subjectively examining the passages generated before a repetitive loop occurs. For each passage\u0177 generated by conditioning on {y} t=1,2,\u2022\u2022\u2022 ,50 , we compare the pair\u0177 t=51,\u2022\u2022\u2022 ,\u03c1\u22121 (generated) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.2"
},
{
"text": "y t=51,\u2022\u2022\u2022 ,\u03c1\u22121 (real),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.2"
},
{
"text": "where \u03c1 is the time step where the repeating sentence first appears. We want to check if the model does make mistakes during t = 51, \u2022 \u2022 \u2022 , \u03c1 \u2212 1. We manually examine 50 randomly sampled pairs 4 . We observe that the generated passages are often less informative, less relevant or coherent to {y} 50 t=1 . As a result, without knowing which passage in the pair is real, we can still correctly identify the generated ones for 78% of them. We also inspect the sequence pairs from time 0 to \u03c1 + \u03bb \u2212 1, the time step after which the model starts to repeat. In that case, our correctness is even higher, up to 92%. Note that even though the annotation is not done by many people, the fact that the fake sentences can be identified accurately is suffice to claim that a portion of passages generated in the early stage are perceivably dissimilar to real language. Namely",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.2"
},
{
"text": "P Y (\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 \u03c1\u22121 ) and P Y (\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 \u03c1+\u03bb\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.2"
},
{
"text": "is low from human judgement. Thus, qualitatively we can say mistakes are made before the repeating loop occurs. It satisfies the first condition of expsure bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.2"
},
{
"text": "We further inspect the first condition of exposure bias quantitavely and objectively. We want to estimate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.3"
},
{
"text": "P Y (\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 \u03c1\u22121 ), the likelihood y 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 \u03c1\u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.3"
},
{
"text": "is real. However, the true P Y is not tractable. Using an auto-regressive model to estimate the likelihood is not feasible either, since they may give higher probability to passages that is also generated by auto-regressive models and thus favor GPT-2. Thus we use a pre-trained masked language model RoBERTa-Large (Liu et al., 2019) . It is trained non-autoregressively, so it does not favor auto-regressively generated passages. Therefore it should be a good proxy estimating the realness of the passages generated by GPT-2. Specifically, to estimate the likelihood of tokens in a passage, real passages and artificial passages with repetitive loops are fed in RoBERTa with 15% randomly selected tokens masked. Log likelihood of recovering the masked tokens is calculated. To anneal the randomness due to the selection of masked tokens, this process is repeated 10 times for each passage. Finally, the likelihood for each time step is averaged. Figure 1 shows that the likelihood of the generated passages is generally lower than real text starting from the time step where the conditioned passages end (dashed line). Especially, even though the likelihood of the text generated with greedy decoding strategy grows after a few time steps, the likelihood drop significantly at the beginning. Considering that the mask language model is sensitive to the context around the masked token, it may indicate that the text generated at the beginning is very unnatural.",
"cite_spans": [
{
"start": 315,
"end": 333,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 947,
"end": 955,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quantitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.3"
},
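{
"text": "A sketch of this masking-and-scoring procedure is given below: mask 15% of a passage's tokens with RoBERTa-Large, score the log likelihood of recovering them, and average over repeated random maskings. The batching and token-position handling are simplified assumptions consistent with the description above, not the authors' released code.\n\nimport torch\nfrom transformers import RobertaForMaskedLM, RobertaTokenizerFast\n\ntokenizer = RobertaTokenizerFast.from_pretrained('roberta-large')\nmodel = RobertaForMaskedLM.from_pretrained('roberta-large').eval()\n\ndef masked_loglik(text, n_repeats=10, mask_frac=0.15):\n    ids = tokenizer(text, return_tensors='pt', truncation=True).input_ids\n    T = ids.shape[1]\n    totals, counts = torch.zeros(T), torch.zeros(T)\n    for _ in range(n_repeats):\n        masked = ids.clone()\n        pick = torch.rand(T) < mask_frac\n        pick[0] = pick[-1] = False                    # keep <s> and </s>\n        masked[0, pick] = tokenizer.mask_token_id\n        with torch.no_grad():\n            logp = torch.log_softmax(model(masked).logits[0], dim=-1)\n        for t in torch.nonzero(pick).flatten():\n            totals[t] += logp[t, ids[0, t]]           # log-lik of the true token\n            counts[t] += 1\n    return totals / counts.clamp(min=1)               # average per position",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Inspection on Generated Tokens Prior to Repetitive Loops",
"sec_num": "4.3"
},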
{
"text": "We then check how significant the mistakes are to the GPT-2 model. Though the previous sections have shown the existence of the mistakes in the early stage. However, to cause misbehavior, the mistakes must be significant enough to cause GPT-2 to behave differently. Therefore, we check how differently GPT-2 processes the generated text compared to the way it processes the real ones. -4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance of Mistakes Prior to Repetitive Loops",
"sec_num": "4.4"
},
{
"text": "Figure 1: The average log likelihood (y-axis) predicted by RoBERTa at each time step (x-axis). The first dotted line is the averaged length of prefix, and the second one is the averaged \u03c1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance of Mistakes Prior to Repetitive Loops",
"sec_num": "4.4"
},
{
"text": "To measure the significance of mistakes, we inspect the hidden states of GPT-2 when generating passages. For each layer l > 1 and time step t, the artificial state h l,t is the result of applying the transformer function l \u2212 1 times over the input sequence {\u0177 l\u22121,\u03c4 } \u03c4 =1...t\u22121 , which is the prefix of the artificial passage. Therefore, if a artificial state h (\u0177) l,t is significantly dissimilar to any real states, then it implies that the generated passage {\u0177 l\u22121,\u03c4 } \u03c4 =1...t\u22121 contains mistakes that are significant to the model, and that the mistakes do lead the model to an unseen state. Thus, the similarity between the artificial states and the real state indicates how significant the mistakes in the passage are.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Significance of Mistakes",
"sec_num": "4.4.1"
},
{
"text": "Specifically, we measure how many real state is similar to a artificial state. It is done by counting the number of real states in the neighbor of the artificial state. A lower number of real neighborhoods suggests that the artificial state is more unseen, and thus implies higher significance of the mistakes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Significance of Mistakes",
"sec_num": "4.4.1"
},
{
"text": "Formally, given a hidden state h (\u0177) l,t at the time step t in the layer l, we count the number of real states in a support set H Y l,t which is close to h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Significance of Mistakes",
"sec_num": "4.4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(\u0177) l,t : N (h (\u0177) l,t ) = h (\u0177) l,t \u2212 h 2 < r h \u2208 H Y l,t",
"eq_num": "(7)"
}
],
"section": "Measuring the Significance of Mistakes",
"sec_num": "4.4.1"
},
{
"text": "where r is the predefined radius. We use different H l,t depending on the layer l and the time step t of the hidden state h l,t to be considered. We compare h l,t only with the real states of the same layer, H Y l,t only contains state of the same layer. To reduces the required computation power, we also limit the set H l,t to the state of the tokens whose time step differ to t by less than \u03b4 5 . This limitation is reasonable, because we found the position of the states are time-step-dependent. We found this by projecting the real states to their first two principle components with PCA (Pearson, 1901) . As shown in figure 2, states of nearby time steps are clustered together. Formally, the support set of real neighbors is written as",
"cite_spans": [
{
"start": 593,
"end": 608,
"text": "(Pearson, 1901)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Significance of Mistakes",
"sec_num": "4.4.1"
},
{
"text": "H Y l,t = {h (y) l,\u03c4 | \u03c4 \u2208 [t \u2212 \u03b4, t + \u03b4], y \u2208 Y }. (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Significance of Mistakes",
"sec_num": "4.4.1"
},
{
"text": "Note that the constitution of H Y l,t depends on a set of real passages Y . We will discuss the choice of Y in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Significance of Mistakes",
"sec_num": "4.4.1"
},
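{
"text": "A minimal sketch of the neighbor count in Eq. (7) with the time-step restriction of Eq. (8) is shown below, using brute-force L2 distances in NumPy (the implementation in this paper uses Faiss for the same range search; see Appendix B). The default values r = 1024 and \u03b4 = 5 follow the values reported in the text and footnotes.\n\nimport numpy as np\n\ndef num_real_neighbors(query, support_states, support_steps, t, r=1024.0, delta=5):\n    # query: (d,) artificial state h_{l,t}; support_states: (N, d) real states\n    # of the same layer l; support_steps: (N,) their time steps.\n    mask = np.abs(support_steps - t) <= delta      # restrict to H^Y_{l,t}, Eq. (8)\n    candidates = support_states[mask]\n    dists = np.linalg.norm(candidates - query[None, :], axis=1)\n    return int((dists < r).sum())                  # N(h_{l,t}), Eq. (7)\n\n# Toy usage with random vectors standing in for the 1536-dimensional GPT-2 states.\nrng = np.random.default_rng(0)\nsupport = rng.normal(size=(1000, 1536)).astype(np.float32)\nsteps = rng.integers(0, 200, size=1000)\nprint(num_real_neighbors(rng.normal(size=1536).astype(np.float32), support, steps, t=100))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Significance of Mistakes",
"sec_num": "4.4.1"
},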
{
"text": "Roughly speaking, we perform our experiment with two sets of passages Y sup and Y cond . We use Y cond to generate real states, and from them we build H Y cond l,t for all layer l and time step t. H Y cond l,t can be used to evaluate any state of layer l and t. As for Y cond , we use it to generate states to be evaluated. By using the prefix of y \u2208 Y cond as condition, we can use it to generate artificial passage\u0176 and artificial states. We also generate real states by encoding the whole y \u2208 Y cond with GPT-2. We expect that the states of y \u2208 Y cond to be similar to the states of y \u2208 Y sup , while the artificial state\u0177 \u2208\u0176 to be dissimilar to the artificial ones. Specifically, we conduct the experiment with the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.4.2"
},
{
"text": "1. We prepare two disjoint sets of sequences Y cond and Y sup . There two sets are parts of the union of the training, validation and testing subsets of WikiText released by OpenAI (as described in Section 4.1). 2. For all y in Y sup , we collect a set of real states h sup by using GPT-2 to encode y. These real states are used to construct the H l,t as mentioned in 8. 3. For all y in Y cond , we generate artificial sequences\u0177 by conditioning GPT-2 on the first 50 tokens of y cond \u2208 Y cond . We experiment with the generation strategies mentioned in Section 3.1. The hidden states\u0125 are also collected. 4. For all y in Y cond , we also use GPT-2 to encode the whole passage y and collect the states h cond . Since the sequences y \u2208 Y cond are real, the states h cond are real too. 5. Finally, we evaluate how the states collected with Y cond are similar to the real states from Y real . We calculate N (\u0125), the numbers of artificial states' real neighbor in h sup . We also use y cond calculate N (h cond ). It is referred to as \"real\" in Figure 3 and 4. We prepare Y sup and Y cond in two ways:",
"cite_spans": [],
"ref_spans": [
{
"start": 1042,
"end": 1050,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.4.2"
},
{
"text": "compare-seen The training split is used as Y sup . It is seen when training. Real passages in the validation split and the testing split are used as Y cond .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.4.2"
},
{
"text": "compare-unseen The union of the validation split and the testing split is split into two disjoint subsets by ratio 9:1. They are used as Y sup and Y cond respectively. Y sup is unseen when training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.4.2"
},
{
"text": "We experiment with a set of shuffled states as a sanity check of our approach. It verifies whether the number of neighbors is an indicative measure of the significance of mistakes. The shuffled set is constructed by first shuffling the real passages in the Y cond , and is then encoded with GPT-2. The shuffled passages have the same 1-gram distribution as real natural language, but have low likelihood to be real. We expect them to have low numbers of real neighbors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sanity Check",
"sec_num": "4.4.3"
},
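{
"text": "A small sketch of how such a shuffled control passage can be built is shown below: permute the tokens of a real passage, which preserves its 1-gram distribution while destroying its realness, and then encode the result with GPT-2 like any other passage. Shuffling at the GPT-2 token level is an illustrative assumption.\n\nimport random\nfrom transformers import GPT2TokenizerFast\n\ntokenizer = GPT2TokenizerFast.from_pretrained('gpt2')\n\ndef shuffle_passage(text, seed=0):\n    # Keep the same tokens (same 1-gram distribution) in a random order.\n    ids = tokenizer(text).input_ids\n    random.Random(seed).shuffle(ids)\n    return tokenizer.decode(ids)\n\n# The shuffled text is then fed to GPT-2 exactly like a real passage, and the\n# neighbor counts of its states serve as the sanity-check baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sanity Check",
"sec_num": "4.4.3"
},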
{
"text": "The results show that the number of real neighbors is a good indication of mistakes for middle layers from layer 5 to layer 9 when r = 1024 for both the compare-seen and compare-unseen settings. For smaller r \u2208 32, 64, 128, 256, 512, the results are not stable. The average number of neighbors for different time steps at the seventh layer is plot in figure 3 . We include the results of other layers in the appendix. The figure shows that the number of neighbors of the shuffled states are consistently low for all time steps. It implies that the number of neighbors is indicative for detecting unreal passages. However, it is less indicative when R is small. We posit that it is due to the high sparsity of the states due to their high dimensionality 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 359,
"text": "figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Sanity Check",
"sec_num": "4.4.3"
},
{
"text": "Figure 3 also plots the number of real neighbors (h) for states generated with greedy strategy and the sampling-based strategies (\u0125). For the greedy strategy, the number of neighbors declines rapidly when the time step increases. Note that we observe that repetitive loops occur in about 93% of the sequences. It shows that GPT-2 indeed fails to recover from mistakes, and the mistakes are amplified through time. It is aligned with the description of exposure bias. On the other hand, compared with real sequences (the control group), the number only decreases slightly when sampling-based strategies are used. In contrast to the case of greedy decoding, repetitive loops are rarely observed when those sampling-based methods are used (< 1% for all of the strategies). It implies that if GPT-2 has misbehavior when using those strategies, the misbehavior is less likely to be related to exposure bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4.4"
},
{
"text": "We further inspect the number of neighbors of the artificial state prior to the time step \u03c1 + \u03bb, when a repetitive loop starts. We want to know whether the model does make significant mistakes before \u03c1 + \u03bb. It is not shown in Figure 3 , as it only shows the significance of mistakes in the late stage. To this end, we plot the number of neighbors again in Figure 4 . Different from Figure 3 , in Figure 4 , the x-axis is the time step relative to \u03c1 + \u03bb, so the significance of mistakes before repetitive loops can be manifested. In particular, we compare the number of real neighbors around the real states and the artificial states. Formally, for each artificial passage\u0177 conditioning on y 1,2,\u2022\u2022\u2022 ,50 , we compare the number of neighbors around the state of n ",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 234,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 356,
"end": 364,
"text": "Figure 4",
"ref_id": "FIGREF8"
},
{
"start": 382,
"end": 390,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 396,
"end": 404,
"text": "Figure 4",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4.4"
},
{
"text": "l,t \u2212 n (y) l,t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4.4"
},
{
"text": ". Surprisingly, in Figure 4 , the compare-seen and compare-unseen settings show different trends. At the beginning, the number of neighbors decreases relatively slowly in both of the two settings. At around x = \u221210, the number in both of them drop to less than zero. It indicates that at this time step, some significant mistakes are made. However, the number in the compare-seen setting dramatically grows while the number continues decreasing in the compare-unseen setting. The low number of neighbors in the compare-unseen indicates the low realness of the generated passages. The high number of neighbors in the seen-setting indicates that the model encodes those unreal passages to space close to the states of training data. It may imply that, at this moment, the model fails to generalize, so it incorrectly encodes the unreal passages as seen ones. Finally, the mistakes are amplified. Consequently, the number in both of the settings drops to less than zero. In sum, Figure 4 shows the significance of mistakes made before it starts repeating a looping sequence. Therefore, the second indication of exposure bias is observed. ",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 4",
"ref_id": "FIGREF8"
},
{
"start": 976,
"end": 984,
"text": "Figure 4",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4.4"
},
{
"text": "While the above experiments show the indications of exposure bias, in this section we further investigate how the early stage mistakes cause the model to degenerate. Figure 4 indicates some mistakes are made prior to time step \u03c1 + \u03bb. Thus, in this section, we investigate the characteristics of the sequence generated prior to \u03c1 + \u03bb, the looping sequence\u0177 \u03c1 \u2022 \u2022 \u2022\u0177 \u03c1+\u03bb (as defined in Section 3.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 174,
"text": "Figure 4",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Mechanisms after a Repetitive Loop Starts",
"sec_num": "5"
},
{
"text": "We investigate how the looping sequences are loopinducing by using them as conditions when generating text. We construct a looping sequence set that is constituted with all looping sequences generated when conditioning on the first 50 tokens of real sequences. In a generated sequence\u0177, since\u0177 \u03c1 may not be a start point of a grammatical sentence, we use the sequence\u0177 \u03c1+\u03b4+1 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Looping Sequence is Loop-Inducing",
"sec_num": "5.1"
},
{
"text": "\u2022 \u2022 \u2022 ,\u0177 \u03c1+\u03bb\u0177\u03c1 \u2022 \u2022 \u2022\u0177 \u03c1+\u03b4 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Looping Sequence is Loop-Inducing",
"sec_num": "5.1"
},
{
"text": "where \u03b4 is chosen based on the punctuation in it 7 . As control groups, we also construct two real sequence sets, first sentence set and last sentence set. They consist of the first sentence and the last sentence of the articles in WikiText validation split and testing split.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Looping Sequence is Loop-Inducing",
"sec_num": "5.1"
},
{
"text": "To measure how those sequences are loopinginducing, we calculate the similarity between x and\u0177, where x is the sequence used as condition, and\u0177 is the generated passage. Specifically, we measure ROUGE-L (Lin, 2004) 8 between x and the first length(x) tokens of\u0177. A higher score implies higher similarity, and thus more loopinginducing. Results shown in Table 2 indicate that looping sequences are indeed more loop-inducing.",
"cite_spans": [
{
"start": 203,
"end": 214,
"text": "(Lin, 2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 353,
"end": 360,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "The Looping Sequence is Loop-Inducing",
"sec_num": "5.1"
},
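{
"text": "A sketch of the similarity measurement is given below: score ROUGE-L between the conditioned sequence x and the beginning of the generated continuation, using the google-research rouge implementation cited in footnote 8 (distributed on PyPI as rouge-score). Truncating the continuation by whitespace tokens is a simplification of the paper's exact length(x)-token truncation.\n\nfrom rouge_score import rouge_scorer\n\nscorer = rouge_scorer.RougeScorer(['rougeL'], use_stemmer=False)\n\ndef loop_inducing_score(condition, continuation):\n    # Compare the condition x with the first len(x) (whitespace) tokens of y_hat.\n    n = len(condition.split())\n    prefix = ' '.join(continuation.split()[:n])\n    return scorer.score(condition, prefix)['rougeL'].fmeasure\n\n# Toy usage: a continuation that immediately repeats its condition scores high.\nprint(loop_inducing_score('It is a query language.',\n                          'It is a query language. It is a query language.'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Looping Sequence is Loop-Inducing",
"sec_num": "5.1"
},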
{
"text": "Loop-Inducing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Any Repeating Sequence is",
"sec_num": "5.2"
},
{
"text": "We further discover that any sequence that is repeated is loop-inducing, regardless of contexts. We create the conditioned sequence by concatenating c with x repeated from 1 to 3 times, where c is the first 5 sentences from a random article of WebText, and x is either from the looping sequence set or the real sets. Measurement, the same as in Section 5.1 is applied on x and the generated passages. The results are shown in Table 3 , and it shows that even when the conditioned sequence is real, it is more loop-inducing if it is repeated more times.",
"cite_spans": [],
"ref_spans": [
{
"start": 426,
"end": 433,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Any Repeating Sequence is",
"sec_num": "5.2"
},
{
"text": "In sum, in this section, we discover the selfreinforcing mechanism of text degeneration. First, Section 5.1 a looping sequence is loop-inducing. Thus, after a looping sequence is generated, it is likely to be repeated. Second, Section 5.2 shows that when a sequence is repeated, then GPT-2 would be more likely to continue repeating it. Therefore, it shows how GPT-2 fails to recover from the mistake.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Self-Reinforcing Mechanism of Text Degeneration",
"sec_num": "5.3"
},
{
"text": "In conclusion, we provide a deeper insight into the relation between exposure bias and text degeneration. We qualitatively and quantitatively show that mistakes are indeed made in the early stage of generation. In Particular, some significant mistakes are made prior to \u03c1 + \u03bb, the time step when the model starts repeating. We then show why the model fails to recover from the mistakes. The looping sequence, which is the sequence generated prior to \u03c1 + \u03bb, and repeated sequences are loopinginducing. That is how the model fails to recover from the mistakes, and how the mistakes amplify. Our contributions are four-fold: 1) We explicitly formulate the necessary indications for the detection of exposure bias. 2) For each condition, we design the associated experiments for validation. 3) By the experiments, we show that text degeneration is likely to be partly caused by exposure bias. 4) Finally, we provide a possible explanation how GPT-2 fails to recover from the mistake. Our formulation and the conducted experiments build a solid foundation for future study on exposure bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Sampling:\u0177 t is directly sampled from the conditional probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
{
"text": "P M (\u0177 t |\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
{
"text": "\u2022 Top-k sampling (Fan et al., 2018) : At the time step t,\u0177 t is sampled from the conditional probability:",
"cite_spans": [
{
"start": 17,
"end": 35,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P Y (\u0177 t |\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121 ) \u221d P M (\u0177 t |\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121 ) if\u0177 t \u2208 top-k, 0 otherwise.",
"eq_num": "(9)"
}
],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
{
"text": "\u2022 Nucleus sampling (Holtzman et al., 2020) : At the time step t,\u0177 t is sampled from the conditional probability",
"cite_spans": [
{
"start": 19,
"end": 42,
"text": "(Holtzman et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P Y (\u0177 t |\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121 ) \u221d P M (\u0177 t |\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121 ) if\u0177 t \u2208 V (p) 0 otherwise. ,",
"eq_num": "(10)"
}
],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
{
"text": "and for a predefined p \u2208 (0, 1], V (p) is the minimal set that satisfies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
{
"text": "v\u2208V (p) P M (v |\u0177 1 ,\u0177 2 , \u2022 \u2022 \u2022 ,\u0177 t\u22121 ) \u2265 p (11) B Dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
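{
"text": "The two sampling-based strategies above can be implemented by truncating the model's next-token distribution and renormalizing, as in the following minimal sketch of a single decoding step (an illustrative re-implementation of Eqs. (9)-(11), not the authors' code; in practice the top_k and top_p arguments of transformers' generate provide the same behavior).\n\nimport torch\n\ndef sample_next_token(probs, top_k=None, top_p=None):\n    # probs: (vocab,) next-token distribution P_M(. | y_1, ..., y_{t-1}).\n    if top_k is not None:\n        kth = torch.topk(probs, top_k).values[-1]      # smallest top-k probability\n        probs = torch.where(probs >= kth, probs, torch.zeros_like(probs))\n    if top_p is not None:\n        sorted_p, order = torch.sort(probs, descending=True)\n        # Keep the smallest prefix V(p) whose cumulative mass reaches p, Eq. (11).\n        keep = torch.cumsum(sorted_p, dim=0) - sorted_p < top_p\n        mask = torch.zeros_like(probs, dtype=torch.bool)\n        mask[order[keep]] = True\n        probs = torch.where(mask, probs, torch.zeros_like(probs))\n    probs = probs / probs.sum()                        # renormalize, Eqs. (9)-(10)\n    return torch.multinomial(probs, 1).item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},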
{
"text": "We use the subsets of WebText released by OpenAI (https://github.com/openai/ gpt-2-output-dataset). It is an English dataset. There are 25000, 5000, 5000 passages in the train, validation, testing splits respectively. For experiments in Section 4.3 and Section 4.4), we only use the passages with more than 512 tokens. After passages with less than 512 tokens are removed, there are 5269 passages in the union of the validation split and the testing split. We use Faiss (Johnson et al., 2017) to calculate the number of neighbor vectors within a radius. For Figure 3 , the number of neighbors is calculated for 20 time steps. For Figure 4 , the number of neighbors is calculated at time steps {-32, -16, -10, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 16, 32, 64, 128} relative to \u03c1 + \u03bb.",
"cite_spans": [
{
"start": 470,
"end": 492,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 693,
"end": 760,
"text": "{-32, -16, -10, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 16, 32, 64, 128}",
"ref_id": null
}
],
"ref_spans": [
{
"start": 558,
"end": 566,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 630,
"end": 638,
"text": "Figure 4",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "A Sample-based Decoding Strategies",
"sec_num": null
},
{
"text": "We sampled 2500 passages from the WebText training split. Each line in Figure 4 is the average over 500 passages generated by each decoding strategy. The result in Figure 4 is the average over 1000 passages.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 4",
"ref_id": "FIGREF8"
},
{
"start": 164,
"end": 172,
"text": "Figure 4",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Seen-setting:",
"sec_num": null
},
{
"text": "Unseen-setting: We first combine the validation split and the testing split as the set of all real unseen text\u0232 . Then we split it into 10 equal-sized subsets Y 1 ,\u0232 2 , \u2022 \u2022 \u2022 ,\u0232 10 . We repeat the following process 3 times:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seen-setting:",
"sec_num": null
},
{
"text": "\u2022 From {\u0232 1 ,\u0232 2 , \u2022 \u2022 \u2022 ,\u0232 10 }, a subset Y real is selected, and the rest\u0232 \\Y real is used as the support set Y support .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seen-setting:",
"sec_num": null
},
{
"text": "\u2022 Real states are collected by encoding passages in Y support with GPT-2. When we are calculating the number of neighbors, only these real states are counted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seen-setting:",
"sec_num": null
},
{
"text": "\u2022 Artificial passages are generated by conditioning on the first 50 tokens for passages in Y real using the decoding strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seen-setting:",
"sec_num": null
},
{
"text": "\u2022 The number of neighbors is calculated for each decoding strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seen-setting:",
"sec_num": null
},
{
"text": "Finally, the result is averaged to plot Figure 3 and and all i such that \u03c1 + i\u03bb < T .",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Seen-setting:",
"sec_num": null
},
{
"text": "Each of our experiments were run on a workstation with 187 GiB RAM. A workstation is equipped with either two Intel Xeon 5218 CPUs or two Intel Xeon 4110 CPUs. Every experiment can be run with 1 Nvidia GTX 2080Ti GPU. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Computing Infrastructure",
"sec_num": null
},
{
"text": "We use the implementation from Hugging Face (https://huggingface.co/transformers/ index.html).3 https://github.com/openai/ gpt-2-output-dataset4 We didn't use crowdsource, since this inspection needs to be done very carefully, and workers could be uncareful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
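As a companion to the footnote above, here is a hedged sketch of loading GPT-2 with the Hugging Face transformers library and reading out per-token hidden states; the checkpoint name 'gpt2', the example sentence, and the layer index are assumptions for illustration.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

inputs = tokenizer("It is an apple.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple: the embedding output plus one tensor per layer,
# each of shape (batch, sequence_length, hidden_size).
layer_7_states = outputs.hidden_states[7][0]
print(layer_7_states.shape)
```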
{
"text": "We use \u03b4 = 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Each state \u2208 R 1536",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, if the looping sequence is \"an apple. It is\", we use \"It is an apple.\"8 We use the implementation in https://github. com/google-research/google-research/ tree/master/rouge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
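The ROUGE-L similarity reported in the tables can be computed with the rouge-score package, which wraps the implementation linked above; this is an illustrative sketch with made-up strings, not the exact evaluation script.

```python
from rouge_score import rouge_scorer

# ROUGE-L between a reference sentence and a candidate; strings are illustrative.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
reference = "It is an apple."
candidate = "It is an apple. It is an apple."
score = scorer.score(reference, candidate)["rougeL"]
print(score.precision, score.recall, score.fmeasure)
```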
],
"back_matter": [
{
"text": "We would like to than Ting-Yun Chang for in-depth discussions. We are thankful to the anonymous reviewers for their insightful comments on the paper. This work was financially supported from the Young Scholar Fellowship Program by Ministry of Science and Technology (MOST) in Taiwan, under Grant 110-2636-E-002-003.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scheduled sampling for sequence prediction with recurrent neural networks",
"authors": [
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1171--1179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for se- quence prediction with recurrent neural networks. In Advances in Neural Information Processing Sys- tems, pages 1171-1179.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Search-based structured prediction. Machine learning",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "75",
"issue": "",
"pages": "297--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learn- ing, 75(3):297-325.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.04833"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. arXiv preprint arXiv:1805.04833.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Quantifying exposure bias for neural language generation",
"authors": [
{
"first": "Tianxing",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jingzhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.10617"
]
},
"num": null,
"urls": [],
"raw_text": "Tianxing He, Jingzhao Zhang, Zhiming Zhou, and James Glass. 2019. Quantifying exposure bias for neural language generation. arXiv preprint arXiv:1905.10617.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "How (not) to train your generative model: Scheduled sampling, likelihood",
"authors": [
{
"first": "Ferenc",
"middle": [],
"last": "Husz\u00e1r",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.05101"
]
},
"num": null,
"urls": [],
"raw_text": "Ferenc Husz\u00e1r. 2015. How (not) to train your genera- tive model: Scheduled sampling, likelihood, adver- sary? arXiv preprint arXiv:1511.05101.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Billion-scale similarity search with gpus",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08734"
]
},
"num": null,
"urls": [],
"raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2017. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Professor forcing: A new algorithm for training recurrent networks",
"authors": [
{
"first": "Alex M",
"middle": [],
"last": "Lamb",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Goyal Alias Parth",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Aaron",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances In Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4601--4609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex M Lamb, Anirudh Goyal Alias Parth Goyal, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In Ad- vances In Neural Information Processing Systems, pages 4601-4609.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Don't say that! making inconsistent dialogue unlikely with unlikelihood training",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03860"
]
},
"num": null,
"urls": [],
"raw_text": "Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, and Ja- son Weston. 2019. Don't say that! making incon- sistent dialogue unlikely with unlikelihood training. arXiv preprint arXiv:1911.03860.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Off-road obstacle avoidance through end-to-end learning",
"authors": [
{
"first": "Urs",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Ben",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Cosatto",
"suffix": ""
},
{
"first": "Beat",
"middle": [],
"last": "Flepp",
"suffix": ""
},
{
"first": "Yann L",
"middle": [],
"last": "Cun",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "739--746",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Urs Muller, Jan Ben, Eric Cosatto, Beat Flepp, and Yann L Cun. 2006. Off-road obstacle avoidance through end-to-end learning. In Advances in neural information processing systems, pages 739-746.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Pearson",
"suffix": ""
}
],
"year": 1901,
"venue": "Journal of Science",
"volume": "2",
"issue": "11",
"pages": "559--572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Pearson. 1901. Liii. on lines and planes of clos- est fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559-572.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2463--2473",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1250"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Alvinn: An autonomous land vehicle in a neural network",
"authors": [
{
"first": "A",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pomerleau",
"suffix": ""
}
],
"year": 1989,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "305--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dean A Pomerleau. 1989. Alvinn: An autonomous land vehicle in a neural network. In Advances in neural information processing systems, pages 305- 313.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford. 2018. Improving language understand- ing by generative pre-training.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language mod- els are unsupervised multitask learners.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sequence level training with recurrent neural networks",
"authors": [
{
"first": "Aurelio",
"middle": [],
"last": "Marc",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2016,
"venue": "4th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level train- ing with recurrent neural networks. In 4th Inter- national Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Maximum margin planning",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Nathan D Ratliff",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"A"
],
"last": "Bagnell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zinkevich",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "729--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan D Ratliff, J Andrew Bagnell, and Martin A Zinkevich. 2006. Maximum margin planning. In Proceedings of the 23rd international conference on Machine learning, pages 729-736.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient reductions for imitation learning",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the thirteenth international conference on artificial intelligence and statistics",
"volume": "",
"issue": "",
"pages": "661--668",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Ross and Drew Bagnell. 2010. Efficient re- ductions for imitation learning. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 661-668.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A reduction of imitation learning and structured prediction to no-regret online learning",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Gordon",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the fourteenth international conference on artificial intelligence and statistics",
"volume": "",
"issue": "",
"pages": "627--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and struc- tured prediction to no-regret online learning. In Pro- ceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627- 635.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Is imitation learning the route to humanoid robots?",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Schaal",
"suffix": ""
}
],
"year": 1999,
"venue": "Trends in cognitive sciences",
"volume": "3",
"issue": "6",
"pages": "233--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Schaal. 1999. Is imitation learning the route to humanoid robots? Trends in cognitive sciences, 3(6):233-242.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "On exposure bias, hallucination and domain shift in neural machine translation",
"authors": [
{
"first": "Chaojun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3544--3552",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.326"
]
},
"num": null,
"urls": [],
"raw_text": "Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural ma- chine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 3544-3552, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural text generation with unlikelihood training",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Di- nan, Kyunghyun Cho, and Jason Weston. 2020. Neu- ral text generation with unlikelihood training. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sequence-to-sequence learning as beam-search optimization",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1296--1306",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1137"
]
},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search op- timization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1296-1306, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Seqgan: Sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Bridging the gap between training and inference for neural machine translation",
"authors": [
{
"first": "Wen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4334--4343",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1426"
]
},
"num": null,
"urls": [],
"raw_text": "Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019a. Bridging the gap between train- ing and inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4334- 4343, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dialogpt: Large-scale generative pre-training for conversational response generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.00536"
]
},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019b. Dialogpt: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536.",
"links": null
}
},
"ref_entries": {
"FIGREF4": {
"text": "Hidden states projected to their first two principal components. Figures from left to right include states in layers 1, 3, 5, 7, 9, 11. Colors from red to green indicate the time steps from 0 to 512.",
"num": null,
"type_str": "figure",
"uris": null
},
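A hedged sketch of the projection used for this figure, reducing one passage's hidden states to their first two principal components with scikit-learn; the input array here is a random placeholder rather than real GPT-2 states.

```python
import numpy as np
from sklearn.decomposition import PCA

states = np.random.randn(512, 1536)   # placeholder: one passage's layer-l hidden states
coords = PCA(n_components=2).fit_transform(states)
print(coords.shape)                   # (512, 2): x/y coordinates to plot, colored by time step
```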
"FIGREF5": {
"text": "Number of neighbors in the seen-case (top) and unseen-case (bottom) at layer 7. The x-axis is the time step of the tokens. The y-axis is the number of real neighbors with the radius.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF6": {
"text": "(\u0177) l,t , and the state of the real passage following the condition n",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF7": {
"text": "(y) l,t . Here we set the y-axis of Figure 4 to be the difference (n (\u0177)",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF8": {
"text": "Number of neighbors for setting compareseen (upper) and compare-unseen (lower). The x-axis is the time step of the tokens relative to \u03c1 + \u03bb. The y-axis is (n",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF9": {
"text": "C.3 Automatic Detection of Looping SequenceGiven a passagex 1 , x 2 , \u2022 \u2022 \u2022 , xT , we first search for the length of a repetitive loop by com-paring x T \u2212\u03bb+1 , \u2022 \u2022 \u2022 x T and x T \u22122\u03bb+1 , \u2022 \u2022 \u2022 x T \u2212\u03bb for \u03bb = 4, 5, \u2022 \u2022 \u2022 , l/2, 1, 2, 3. If there exists some \u03bb such that x T \u2212\u03bb+1 , \u2022 \u2022 \u2022 , x T = x T \u22122\u03bb+1 , \u2022 \u2022 \u2022 , x T \u2212\u03bb, then we search \u03c1 as the first place such thatx \u03c1+i\u03bb , \u2022 \u2022 \u2022 , x \u03c1+(i+1)\u03bb\u22121 = x T \u22122\u03bb+1+\u03b4 , \u2022 \u2022 \u2022 , x T \u2212\u03bb+\u03b4 for some \u03b4 \u2208 [0, \u03bb \u2212 1]",
"num": null,
"type_str": "figure",
"uris": null
},
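A rough Python sketch of one reading of the looping-sequence detection described in the entry above; the function name, the handling of the final partial block, and the example tokens are assumptions rather than the authors' code.

```python
def detect_loop(tokens):
    """Rough sketch: return (rho, lam) for a detected repetitive loop, else None."""
    T = len(tokens)
    # Loop length lam: the last lam tokens equal the lam tokens right before them,
    # tried in the order given in the text (4, 5, ..., T//2, then 1, 2, 3).
    for lam in list(range(4, T // 2 + 1)) + [1, 2, 3]:
        if 2 * lam > T or tokens[T - lam:] != tokens[T - 2 * lam:T - lam]:
            continue
        loop = tokens[T - lam:]
        rotations = [loop[d:] + loop[:d] for d in range(lam)]
        # Loop start rho: the earliest position from which some rotation of the loop
        # repeats block by block until the end of the passage (full blocks only).
        for rho in range(T - lam + 1):
            blocks = [tokens[rho + i * lam: rho + (i + 1) * lam]
                      for i in range((T - rho) // lam)]
            if blocks and any(all(b == rot for b in blocks) for rot in rotations):
                return rho, lam
        return None
    return None

tokens = "the cat sat . it is red . it is red . it is red .".split()
print(detect_loop(tokens))  # (3, 4): a rotated copy of the 4-token loop starts at index 3
```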
"FIGREF10": {
"text": "Number of neighbors for compare-seen setting. The figures are the number of layer 1, 3, 5, 7, 9, 11, from left to right, top to bottom. The x-axis is the time step of the tokens. The y-axis is the number of real neighbors with the radius. Number of neighbors for compare-unseen setting. The figures are the number of layers 1, 3, 5, 7, 9, 11, from left to right, top to bottom. The x-axis is the time step of the tokens. The y-axis is the number of real neighbors with the radius.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"html": null,
"text": "Similarity between the conditioned passage and the generated passage of the same length.",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">Conditioned Sentences Similarity (mean/std)</td></tr><tr><td colspan=\"2\">Looping sequences</td><td colspan=\"2\">0.7327 / 0.3226</td></tr><tr><td colspan=\"2\">First sentences</td><td colspan=\"2\">0.2157 / 0.1911</td></tr><tr><td colspan=\"2\">Last sentences</td><td colspan=\"2\">0.1837 / 0.1848</td></tr><tr><td colspan=\"2\">Repeat # Looping Seq.</td><td>First Sent.</td><td>Last Sent.</td></tr><tr><td>1</td><td>0.451 / 0.368</td><td colspan=\"2\">0.148 / 0.155 0.131 / 0.144</td></tr><tr><td>2</td><td>0.681 / 0.423</td><td colspan=\"2\">0.331 / 0.373 0.337 / 0.377</td></tr><tr><td>3</td><td>0.888 / 0.282</td><td colspan=\"2\">0.492 / 0.435 0.578 / 0.431</td></tr></table>"
},
"TABREF1": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: The ROUGE-L (mean/std) between the sen-</td></tr><tr><td>tences in the generated repetitive loops and x, when</td></tr><tr><td>GPT-2 conditions on the pattern c, x, \u2022 \u2022 \u2022 , x.</td></tr></table>"
}
}
}
}