{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:24.225358Z"
},
"title": "What GPT Knows About Who is Who",
"authors": [
{
"first": "Xiaohan",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": "yang@g.harvard.edu"
},
{
"first": "Eduardo",
"middle": [],
"last": "Peynetti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": "eduardo.peynetti@gmail.com"
},
{
"first": "Vasco",
"middle": [],
"last": "Meerman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": "vmeerman@g.harvard.edu"
},
{
"first": "Chris",
"middle": [],
"last": "Tanner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harvard University",
"location": {}
},
"email": "christanner@g.harvard.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Coreference resolution-which is a crucial task for understanding discourse and language at large-has yet to witness widespread benefits from large language models (LLMs). Moreover, coreference resolution systems largely rely on supervised labels, which are highly expensive and difficult to annotate, thus making it ripe for prompt engineering. In this paper, we introduce a QA-based prompt-engineering method and discern generative, pre-trained LLMs' abilities and limitations toward the task of coreference resolution. Our experiments show that GPT-2 and GPT-Neo can return valid answers, but that their capabilities to identify coreferent mentions are limited and promptsensitive, leading to inconsistent results.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Coreference resolution-which is a crucial task for understanding discourse and language at large-has yet to witness widespread benefits from large language models (LLMs). Moreover, coreference resolution systems largely rely on supervised labels, which are highly expensive and difficult to annotate, thus making it ripe for prompt engineering. In this paper, we introduce a QA-based prompt-engineering method and discern generative, pre-trained LLMs' abilities and limitations toward the task of coreference resolution. Our experiments show that GPT-2 and GPT-Neo can return valid answers, but that their capabilities to identify coreferent mentions are limited and promptsensitive, leading to inconsistent results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution (CR) aims to identify and cluster all words (i.e., mentions) that refer to the same entity or event. Solving this task is essential for natural language understanding, as mismatched references will lead to bias. Recent improvements in CR have been incremental (Lee et al., 2017; Cattan et al., 2020) , compared to other NLP tasks that have demonstrated more real-world impact. One reason is the limited training corpora. For example, one of the primary datasets, ECB+ (Cybulska and Vossen, 2014) , contains only 984 documents, including 6,833 mentions and 2,741 clusters. Moreover, this dataset was built around 43 news topics ten years ago, potentially leading to generalization problems for the state-of-the-art (SOTA) models.",
"cite_spans": [
{
"start": 283,
"end": 301,
"text": "(Lee et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 302,
"end": 322,
"text": "Cattan et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 491,
"end": 518,
"text": "(Cybulska and Vossen, 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When dealing with low-resource tasks, there is an emerging trend to perform prompt engineering with pre-trained LMs. Unlike fine-tuning (Brown et al., 2020; Wei et al., 2021) , prompt engineering does not update the pre-trained model's weights when completing the downstream task. Instead, one transforms the downstream task to match the original task of the pre-trained model (Liu et al., 2021) . For example, for machine translation, one can create prompts such as \"English: I love bread. French:\" and input them to a generative LM (e.g., . If the pre-trained model encountered similar patterns during training, it should be able to generate the translated French sentence. Nevertheless, to the best of our knowledge, there is scarce research on applying this approach to coreference resolution (Sanh et al., 2021) .",
"cite_spans": [
{
"start": 136,
"end": 156,
"text": "(Brown et al., 2020;",
"ref_id": null
},
{
"start": 157,
"end": 174,
"text": "Wei et al., 2021)",
"ref_id": "BIBREF22"
},
{
"start": 377,
"end": 395,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF12"
},
{
"start": 797,
"end": 816,
"text": "(Sanh et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To better understand if pre-trained LMs can help resolve coreferences, we construct a QA-based prompting method and experiment with both GPT-2 (Radford et al., 2019) and GPT-Neo (Gao et al., 2020) . By using this prompting methodology, we measure if these models can predict whether two mentions are coreferent. For evaluation, we use the ECB+ dataset, which provides gold mentions and clustering labels. We compare the results with unsupervised and supervised coreference resolution models, including a classic rule-based system (Lee et al., 2011) , the seminal end-to-end neural model (Lee et al., 2017) , and a recent SOTA model (Cattan et al., 2020) .",
"cite_spans": [
{
"start": 143,
"end": 165,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 170,
"end": 196,
"text": "GPT-Neo (Gao et al., 2020)",
"ref_id": null
},
{
"start": 530,
"end": 548,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 587,
"end": 605,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 632,
"end": 653,
"text": "(Cattan et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prompt-based learning Prompt-based learning is a fast-growing area in NLP, as it can reduce the need to fine-tune models and rely on supervised labels. According to the survey by Liu et al., over 120 papers have been published since 2019, which collectively demonstrates effectiveness toward many different tasks: text classification (Tam et al., 2021; Holtzman et al., 2021) , factual probing (Perez et al., 2021) , question-answering (Tsimpoukelli et al., 2021) , and more. Nevertheless, to the best of our knowledge, only one prompt-based learning paper concerned CR. Specifically, Sanh et al. introduces T0, a zero-shot generalization of T5 (Raffel et al., 2019) . The authors convert various supervised datasets into task-specific prompts, Figure 1 : An example of prompt-based learning for CR. The green block represents the prefix, which serves as the description of the CR task and remains unchanged throughout an experiment for all inputs x. The purple block is the unfilled prompt, which changes for each input x and serves as the prediction. Moreover, in each block, the yellow part is the prompting function while the blue and red parts are the original data x and y, respectively. including CR. Using the WSC dataset (Levesque et al., 2012) , they achieve over 60% accuracy. Although this result is not comparable with supervised state-of-the-art (SOTA) models, it still offers compelling results and suggests the model might contain CR knowledge without requiring supervised training on the task. However, since the WSC dataset only focuses on highly ambiguous pronouns, it is not as complete as the standard CR task that involves named and nominal mentions.",
"cite_spans": [
{
"start": 334,
"end": 352,
"text": "(Tam et al., 2021;",
"ref_id": null
},
{
"start": 353,
"end": 375,
"text": "Holtzman et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 394,
"end": 414,
"text": "(Perez et al., 2021)",
"ref_id": "BIBREF15"
},
{
"start": 436,
"end": 463,
"text": "(Tsimpoukelli et al., 2021)",
"ref_id": "BIBREF21"
},
{
"start": 645,
"end": 666,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 1230,
"end": 1253,
"text": "(Levesque et al., 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 745,
"end": 753,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Traditional CR Models Similar to other NLP tasks, most CR models can be categorized as being either unsupervised or supervised. A commonly used unsupervised model is the Multi-Pass Sieve model (Lee et al., 2011) . This rule-based system extracts entity mentions and clusters them by applying 13 \"filters\" in successive manner. Amongst supervised models, e2e-coref (Lee et al., 2017) is the seminal end-to-end neural model. This model performs within-document CR and was trained on the OntoNotes (CoNLL-2012) dataset. Building on this architecture, Cattan et al. (2020) performs cross-document CR for entities and events by training on the ECB+ dataset and using RoBERTa as an encoder. Although supervised models offer significant improvements over unsupervised models, they are expensive to train; most SOTA models have O(n 4 ) complexity, where n is the length of each document.",
"cite_spans": [
{
"start": 193,
"end": 211,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 364,
"end": 382,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section introduces our prompt-based learning method for CR. Typically, CR models can be broken down into three sub-tasks: (1) detecting mentions; (2) predicting whether two given mentions are coreferent or not; (3) and clustering mentions accordingly. The crux of CR research centers around the second part, which is also our focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Building on the approach introduced by Sanh et al. 2021, we define our input x as [text, m 1 , m 2 ] and output y as a binary label. Specifically, m 1 and m 2 are a pair of gold mentions in a document, and the text are the sentences containing those mentions. For example, in Figure 1, within each green box, the successive blue parts are text, m 1 , m 2 , respectively. We define a prompting function f , which takes x as input and produces a question prompt q x (Equation 1). Further details about f are in Appendix A.",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 282,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "q x = f (x) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
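{
"text": "A minimal sketch, for illustration only, of a prompting function f as in Equation 1; the question wording below is a hypothetical stand-in for the five formulas listed in Appendix A, not the exact templates used:\n\ndef make_prompt(text, m1, m2):\n    # Build the unfilled question prompt q_x from an input x = [text, m_1, m_2].\n    # The wording is illustrative; the actual formulas follow Sanh et al. (Appendix A).\n    question = ' In the passage above, does ' + m1 + ' refer to the same entity or event as ' + m2 + '? Yes or No?'\n    return text + question",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},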
{
"text": "Moreover, to allow the model to understand the task, we use few-shot learning (Triantafillou et al., 2017) by constructing a filled prefix. In particular, we select k examples, A, from the training dataset and feed these examples into the same prompting function f . Then, we append the true label ('Yes' or 'No') to the outputs, yielding the filled prefix q A (Equation 2). To be clear, each individual prefix q i\u2208k constitutes a single green box in Figure 1 .",
"cite_spans": [
{
"start": 78,
"end": 106,
"text": "(Triantafillou et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 451,
"end": 459,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "q A = f (A) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
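{
"text": "A minimal sketch, for illustration only, of assembling the filled prefix q_A of Equation 2 from k labeled training examples; the example tuple format and the way blocks are joined are assumptions rather than the exact procedure:\n\ndef make_filled_prefix(examples):\n    # examples: list of (text, m1, m2, is_coreferent) tuples drawn from the training set.\n    # Each example is passed through the same prompting function f (make_prompt above)\n    # and its gold label ('Yes' / 'No') is appended, yielding one filled block per example.\n    blocks = []\n    for text, m1, m2, is_coreferent in examples:\n        label = 'Yes' if is_coreferent else 'No'\n        blocks.append(make_prompt(text, m1, m2) + ' ' + label)\n    # Blocks are joined with spaces here for simplicity; any separator the LM tolerates works.\n    return ' '.join(blocks)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},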
{
"text": "Last, adding the unfilled prompt q x to the filled prefix q A will give us the full prompt for data point x. This allows us to get a prediction z without updating any parameters \u03b8 in the pre-trained LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z = P (q A + q x ; \u03b8)",
"eq_num": "(3)"
}
],
"section": "Methodology",
"sec_num": "3"
},
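{
"text": "A minimal sketch, using the Hugging Face transformers library, of querying a frozen GPT-2 with the full prompt q_A + q_x as in Equation 3; the model name and decoding settings here mirror the setup described later (single output token, temperature 0.7) but the exact code is an illustrative assumption:\n\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('gpt2')\nmodel = AutoModelForCausalLM.from_pretrained('gpt2')\nmodel.eval()  # weights stay frozen; no parameters are updated\n\ndef predict(prefix, prompt):\n    # Concatenate the filled prefix q_A and the unfilled prompt q_x, then sample a single\n    # continuation token, which is read back as the model's answer.\n    inputs = tokenizer(prefix + ' ' + prompt, return_tensors='pt')\n    with torch.no_grad():\n        output = model.generate(**inputs, max_new_tokens=1, do_sample=True, temperature=0.7)\n    return tokenizer.decode(output[0, -1].item()).strip()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},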
{
"text": "Since we use pre-trained LMs directly, without fine-tuning, we do not have control over its output; the model can generate invalid answers beyond our desired outputs, 'Yes' or 'No'. Therefore, we repeat the process m times to get a more robust predictionz. To mitigate the bias of one specific f , we average the output of n different prompt formulas to get the final prediction (Equation 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "y = n i=1z i n (4) 4 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
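{
"text": "A sketch of the aggregation in Equation 4, reusing the predict helper sketched above: each prompt is repeated m times, the repetitions are averaged into a per-formula score, and the n formula scores are averaged into the final prediction; mapping 'Yes'/'No' to 1/0 and counting invalid answers as 0 are assumptions made for illustration:\n\ndef aggregate_prediction(prefix, formulas, text, m1, m2, m=5):\n    # formulas: list of n prompting functions f_1 ... f_n.\n    per_formula_means = []\n    for f in formulas:\n        prompt = f(text, m1, m2)\n        answers = [predict(prefix, prompt) for _ in range(m)]\n        # Invalid answers (anything other than Yes) count as 0 in this sketch.\n        scores = [1.0 if a.lower().startswith('yes') else 0.0 for a in answers]\n        per_formula_means.append(sum(scores) / len(scores))\n    return sum(per_formula_means) / len(per_formula_means)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},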
{
"text": "Datasets We use the ECB+ dataset (Cybulska and Vossen, 2014) as our input source, which contains both within-and cross-document coreference information for both event and entity mentions. This dataset consists of 984 documents around 43 news topics, among which 196 documents are in the development set. After preprocessing the data, as described in Appendix B, our development set consists of 172 documents.",
"cite_spans": [
{
"start": 33,
"end": 60,
"text": "(Cybulska and Vossen, 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "To generate a prefix x 0 , we experiment with three data sources: the training sets of WSC (Levesque et al., 2012) and ECB+ (Cybulska and Vossen, 2014) , and a simple dataset that we manually generated. The WSC dataset was used in the research most similar to ours, T0 (Sanh et al., 2021 ), which we compare against while using much smaller pretrained LMs (i.e., GPT-2 and GPT-Neo). As mentioned, ECB+ provides more natural and comprehensive references than WSC. Our manually generated dataset uses 10 very simple examples -allowing one to discern the impact on performance.",
"cite_spans": [
{
"start": 91,
"end": 114,
"text": "(Levesque et al., 2012)",
"ref_id": "BIBREF11"
},
{
"start": 124,
"end": 151,
"text": "(Cybulska and Vossen, 2014)",
"ref_id": "BIBREF3"
},
{
"start": 269,
"end": 287,
"text": "(Sanh et al., 2021",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "When using the ECB+ dataset, we only considered pairs of mentions that are within the same or successive sentences. When evaluating our model, we considered all mention-pair combinations, [m 1 , m 2 ], within said sentences. Relying on the gold mentions, we obtain a dataset with 17832 candidate mention pairs, among which 7.86% are positive samples. Finally, we apply 5 prompt functions from Sanh et al. to generate the full prompts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
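{
"text": "A rough sketch of how candidate mention pairs within the same or successive sentences can be enumerated from gold mentions; the mention record layout (a dict carrying a sentence index and a cluster id) is an assumption for illustration, not the released preprocessing code:\n\nfrom itertools import combinations\n\ndef candidate_pairs(mentions):\n    # mentions: list of dicts like {'span': 'Lindsay Lohan', 'sent_id': 3, 'cluster': 17}.\n    # Keep only pairs whose sentences are identical or adjacent.\n    pairs = []\n    for a, b in combinations(mentions, 2):\n        if abs(a['sent_id'] - b['sent_id']) <= 1:\n            label = int(a['cluster'] == b['cluster'])  # 1 if the pair is coreferent\n            pairs.append((a, b, label))\n    return pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},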
{
"text": "Models We used three traditional CR models as baselines: Multi-Pass Sieve (Lee et al., 2011) , the seminal end-to-end neural model (e2e-coref ) (Lee et al., 2017) , and a SOTA extension (the Streamlining model) (Cattan et al., 2020) . Respectively, these models represent three categories: a rulesbased model, a supervised model trained on a different dataset, and a supervised model trained on the same dataset. In terms of implementations, we use the CoreNLP toolkit for the Multi-Pass Sieve model (Manning et al., 2014) and AllenNLP (Gardner et al., 2018) for e2e-coref. Since there is no publicly available pre-trained Streamlining model (Cattan et al., 2020) , we fully train the model from scratch using a V100 GPU on Google Colab. To fairly compare with other models, we set a 0.5 threshold for the pairwise scorer in the Streamlining model. We evaluate all models by their mention pairwise scorers, not their clustering decisions.",
"cite_spans": [
{
"start": 74,
"end": 92,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 144,
"end": 162,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 211,
"end": 232,
"text": "(Cattan et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 500,
"end": 522,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 642,
"end": 663,
"text": "(Cattan et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Limited by our computational resources, we choose GPT-2 and GPT-Neo-125M as our pretrained LMs 1 . During inference, the output token length is set to 1, since our expected output is one word (i.e., 'Yes' or 'No') . To generate more robust results, the repetition parameter m is set to 5. We ran our text generative models with multiple temperature settings ranging from 0 to 1, none of which produced significant changes. We settled on using a value of 0.7, to limit the greediness of the generated responses. In terms of few-shot learning, we experimented with k \u2208 {0, 2, 4, 10} and display the results from the 4-shot setting since it produces the best accuracy. To reduce bias introduced by prefixes, we ensure each prefix has equally-balanced samples. For example, for the 4-shot setting, the filled prefix will have 2 positive examples and 2 negative examples.",
"cite_spans": [
{
"start": 199,
"end": 213,
"text": "'Yes' or 'No')",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
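{
"text": "A small sketch of drawing an equally balanced k-shot prefix (for k = 4, two positive and two negative examples), reusing make_filled_prefix from above; the random-sampling strategy and seed are assumptions, since only the balance constraint is specified:\n\nimport random\n\ndef sample_balanced_prefix(train_examples, k=4, seed=0):\n    # train_examples: list of (text, m1, m2, is_coreferent) tuples from the training split.\n    rng = random.Random(seed)\n    positives = [e for e in train_examples if e[3]]\n    negatives = [e for e in train_examples if not e[3]]\n    chosen = rng.sample(positives, k // 2) + rng.sample(negatives, k // 2)\n    rng.shuffle(chosen)\n    return make_filled_prefix(chosen)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},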
{
"text": "Yes/No Predictions 0-shot 5% 2-shot 93.7% 4-shot 96.2% 10-shot 98% We first question if GPT-based models can produce valid answers. In Figure 1 , we observe that GPT-2 predicts 'Yes' or 'No' for over 93.7% samples when at least 2 filled prefixes are provided.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "However, although the answers are valid, they are inaccurate. In Figure 2 , we plot the distribution of predicted labels for each model, where the red bars denote the distribution of positive examples (ground truth), and the blue bars denote negative ones (ground truth). Traditional CR models generally predict low values for negative examples, indicated by blue bars being concentrated at 0. As for positive examples, e2e-coref shows better precision since more positive examples are classified correctly at 1. Yet, GPT-2 seems to be both sensitive to prompts and unstable over the repetitions of each prompt. Furthermore, GPT-Neo's predictions are inaccurate and no better than random, even though it predicts consistent results for multiple runs with the same prompt.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "Similar conclusions can be drawn from Table 2 , where GPT-based models have the lowest AUC and F1 scores. Specifically, the extremely low precision causes the bad results. Since the ECB+ dataset is highly imbalanced, random predictions from GPTbased models will lead to a low precision, reflecting the proportion of positive samples. For completeness, we also perform an experiment on the WSC dataset (see GPT-2 wsc ), which is a test dataset used by Sanh et al. (2021) . GPT-2 also fails on this task, as its mean prediction averaged across different prompts is always \"Yes\" . POS and Entity Types While the overall performance indicates that GPT models are comparable to a random model, we hypothesize that for some subset of mention pairs, GPT might perform better.",
"cite_spans": [
{
"start": 451,
"end": 469,
"text": "Sanh et al. (2021)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "To investigate, we conducted experiments based on part-of-speech (POS) tags and named-entity types. Figure 3 shows that both GPT-2 and GPT-Neo can capture coreferent relationships relatively better when the second mention is a pronoun. Moreover, this trend is stronger when the first mention is a pronoun or a proper noun. Nonetheless, e2e-coref performs better than both GPT models across all POS tags, and the gap is widest when the second mention is a nominal noun phrase. As for named entities, Figure 4 shows that both GPT-2 and GPT-Neo perform better in precision when one mention is of type PERSON. Moreover, GPT-Neo can identify coreferent relationships more precisely if the second mention is Non-GPE locations (i.e., LOC). However, their precision scores are far lower than the scores from classical CR models. In particular, both the multi-pass sieve model and e2e-coref model reach the highest precision when a mention is a PRODUCT object (e.g., vehicle, food) or a NORP object (e.g., nationality, religious or political group). Mention Similarity In addition to inspecting how performance varies with mention types, we also considered how performance is affected by mentions' similarity. Using pre-trained BERT (Devlin et al., 2018) , we encode each mention into span representations by averaging its tokens' last hidden states. Then, we measure cosine similarity between mention pairs. Figure 5 shows that F1 scores generally improve as the semantic similarity increases. Although, the multi-pass sieve model maintains a low F1 because it is a rule-based model that tends to predict False for most samples -which yields a high accuracy for unbalanced datasets. The e2e-coref model performs well on mentions that are not so similar, while the performance of Streamlining model improves drastically as similarity is greater than 50%. However, both GPT-2 and GPT-NEO yield low F1 (approximately 0.2) for mention pairs with less than 70% similarity. When considering mentions of higher similarity, GPT-based models can achieve over 0.4 F1 score.",
"cite_spans": [
{
"start": 1224,
"end": 1245,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 499,
"end": 507,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 1400,
"end": 1408,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Acc Prec",
"sec_num": null
},
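{
"text": "A hedged sketch of the mention-similarity computation described above: each mention is encoded with pre-trained BERT by averaging the last hidden states of its tokens, and pairs are compared with cosine similarity; encoding the mention string on its own, rather than in its document context, is a simplifying assumption of this sketch:\n\nimport torch\nfrom transformers import AutoModel, AutoTokenizer\n\nbert_tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\nbert = AutoModel.from_pretrained('bert-base-uncased')\nbert.eval()\n\ndef mention_embedding(mention):\n    # Average the last hidden states of the mention's tokens into one span vector.\n    inputs = bert_tokenizer(mention, return_tensors='pt')\n    with torch.no_grad():\n        hidden = bert(**inputs).last_hidden_state  # shape (1, num_tokens, 768)\n    return hidden.mean(dim=1).squeeze(0)\n\ndef mention_similarity(m1, m2):\n    return torch.cosine_similarity(mention_embedding(m1), mention_embedding(m2), dim=0).item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},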
{
"text": "In this paper, we rely on prompt-based learning to analyze how much GPT-like models know about coreference resolution. Despite the popularity of prompting in recent NLP research, we find that LLMs perform poorly on this task without finetuning. Nonetheless, these models achieve relatively better performance for specific types of mentions, including pronouns and person objects, and mention pairs with high similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "A Prompt formulas Figure 6 : Prompt Formulas. We experiment with these 5 prompt formulas mentioned in Sanh et al. (2021) . Here, each block is one formula and the parts highlighted in blue are [text, m 1 , m 2 ] respectively.",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "Sanh et al. (2021)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The original ECB+ dataset is in XML format, where everything is tokenized. Moreover, the information about gold mentions and gold clusters is related to token ids. However, we cannot easily get the plain text by joining tokens with a space character. If we do so, we will get strange looking text as shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Data Preprocessing",
"sec_num": null
},
{
"text": "http : / / www . accesshollywood . com / lindsaylohan -leaves -betty -ford -checks -into -maliburehab article 80744 Lindsay Lohan Leaves Betty Ford , Checks Into Malibu Rehab First Published : June 13 , 2013 4 : 59 PM EDT Lindsay Lohan has left the Betty Ford Center and is moving to a rehab facility in Malibu , Calif . , Access Hollywood has confirmed .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Data Preprocessing",
"sec_num": null
},
{
"text": "In this example, we can see objects like urls, datetime and punctuation are not in the right format. Since we are using the text as an input to the prompt function, we need to properly format them to align with normal text that GPTs are trained on. Moreover, as gold mention and gold clusters are based on original token ids in ECB+, when we parsed and re-formatted the data, we could match these ids again. Continuing with the previous example, our parsing algorithm cleans up the previous text to be something as follows. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Data Preprocessing",
"sec_num": null
},
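{
"text": "An illustrative sketch of the kind of detokenization used in this step, rebuilding plain text from ECB+ tokens while recording each original token id's character span so gold mention ids can still be matched; the spacing rules here are simplified stand-ins for the actual cleanup:\n\nimport string\n\ndef detokenize(tokens):\n    # tokens: list of (token_id, token_text) pairs in document order.\n    text = ''\n    spans = {}\n    for token_id, tok in tokens:\n        # Insert a space before ordinary words and opening brackets, but not before\n        # closing punctuation such as commas and periods.\n        if text and (tok not in string.punctuation or tok in '([{'):\n            text += ' '\n        start = len(text)\n        text += tok\n        spans[token_id] = (start, len(text))\n    return text, spans",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Data Preprocessing",
"sec_num": null
},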
{
"text": "Here are additional results for our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Additional Results",
"sec_num": null
},
{
"text": "Experiments on Prefix The aggregate results from few shot learning are displayed in Table 3 . Our results show that 4-shots learning performs the best for both GPT-2 and GPT-NEO in terms of accuracy. Unexpectedly, as we increase the size of examples, the result does not improve accordingly. Given 10 examples in prefix, the model tend to predict \"yes\" more easily. One possible explanation might be that we have balanced examples in prefix while the actual querying data only have around 8% positive samples.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "C Additional Results",
"sec_num": null
},
{
"text": "F1 AUC 2-shots 0.39 0.08 0.64 0.14 0.50 4-shots 0.51 0.08 0.51 0.14 0.51 10-shots 0.19 0.08 0.90 0.15 0.51 Moreover, we experiment with various datasets for prefix as discussed in section 4. The results in Table 4 shows that prefix does have an impact on the results. The prefix generated from ECB+ dataset performs slightly better than others regarding to AUC. This is understandable because we evaluate on the ECB+ development set. Beyond our expectation, WSC-prefix result in a perfect recall and a super bad accuracy, which means that this prefix lead models to generate \"yes\" regardless of the context. This result further proves that GPT-2 is very sensitive to prompts.",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Acc Prec Recall",
"sec_num": null
},
{
"text": "Our code can be found at https://github.com/ AwesomeCoref/prompt-coref",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF2": {
"ref_id": "b2",
"title": "Streamlining crossdocument coreference resolution: Evaluation and modeling",
"authors": [
{
"first": "Arie",
"middle": [],
"last": "Cattan",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Eirew",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.11032"
]
},
"num": null,
"urls": [],
"raw_text": "Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2020. Streamlining cross- document coreference resolution: Evaluation and modeling. arXiv preprint arXiv:2009.11032.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution",
"authors": [
{
"first": "Agata",
"middle": [],
"last": "Cybulska",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "4545--4552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agata Cybulska and Piek Vossen. 2014. Using a sledge- hammer to crack a nut? lexical diversity and event coreference resolution. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4545-4552, Reyk- javik, Iceland. European Language Resources Asso- ciation (ELRA).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The pile: An 800gb dataset of diverse text for language modeling",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stella",
"middle": [],
"last": "Biderman",
"suffix": ""
},
{
"first": "Sid",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Laurence",
"middle": [],
"last": "Golding",
"suffix": ""
},
{
"first": "Travis",
"middle": [],
"last": "Hoppe",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Horace",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Anish",
"middle": [],
"last": "Thite",
"suffix": ""
},
{
"first": "Noa",
"middle": [],
"last": "Nabeshima",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.00027"
]
},
"num": null,
"urls": [],
"raw_text": "Leo Gao, Stella Biderman, Sid Black, Laurence Gold- ing, Travis Hoppe, Charles Foster, Jason Phang, Ho- race He, Anish Thite, Noa Nabeshima, et al. 2020. The pile: An 800gb dataset of diverse text for lan- guage modeling. arXiv preprint arXiv:2101.00027.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Allennlp: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. CoRR, abs/1803.07640.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Surface form competition: Why the highest probability answer isn't always right",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.08315"
]
},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Peter West, Vered Schwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competi- tion: Why the highest probability answer isn't always right. arXiv preprint arXiv:2104.08315.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Span-BERT: Improving Pre-training by Representing and Predicting Spans",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "64--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Span- BERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the Association for Computational Linguistics, 8:64-77.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Stanford's multi-pass sieve coreference resolution system at the conll-2011 shared task",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 15th conference on computational natural language learning: Shared task",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Ju- rafsky. 2011. Stanford's multi-pass sieve coreference resolution system at the conll-2011 shared task. In Proceedings of the 15th conference on computational natural language learning: Shared task, pages 28-34. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference resolu- tion.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The winograd schema challenge",
"authors": [
{
"first": "Hector",
"middle": [],
"last": "Levesque",
"suffix": ""
},
{
"first": "Ernest",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Leora",
"middle": [],
"last": "Morgenstern",
"suffix": ""
}
],
"year": 2012,
"venue": "Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thir- teenth International Conference on the Principles of Knowledge Representation and Reasoning.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Weizhe",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Jinlan",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Zhengbao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Hiroaki",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2021,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre- train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ArXiv, abs/2107.13586.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Bauer",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "True few-shot learning with language models",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2105.11447"
]
},
"num": null,
"urls": [],
"raw_text": "Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. arXiv preprint arXiv:2105.11447.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "2021. Improving and simplifying pattern exploiting training 2021",
"authors": [
{
"first": "D",
"middle": [],
"last": "Tam",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Menon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Srivastava",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.11955"
]
},
"num": null,
"urls": [],
"raw_text": "D Tam, RR Menon, M Bansal, S Srivastava, and C Raf- fel. 2021. Improving and simplifying pattern exploit- ing training 2021. arXiv preprint arXiv:2103.11955.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Few-shot learning through an information retrieval lens",
"authors": [
{
"first": "Eleni",
"middle": [],
"last": "Triantafillou",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleni Triantafillou, Richard Zemel, and Raquel Urta- sun. 2017. Few-shot learning through an information retrieval lens.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multimodal few-shot learning with frozen language models",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Tsimpoukelli",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Menick",
"suffix": ""
},
{
"first": "Serkan",
"middle": [],
"last": "Cabi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Eslami",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
}
],
"year": 2021,
"venue": "Thirty-Fifth Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, SM Ali Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. In Thirty-Fifth Conference on Neural Infor- mation Processing Systems.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Finetuned language models are zero-shot learners",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Bosma",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Adams",
"middle": [
"Wei"
],
"last": "Guu",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Lester",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2109.01652"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M Dai, and Quoc V Le. 2021. Finetuned lan- guage models are zero-shot learners. arXiv preprint arXiv:2109.01652.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Distribution of predicted values"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Model's precision over various types of noun phrases, including pronouns, proper nouns and nominal nouns. Each bar's hue intensity denotes the data density."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "GPT-2's performance on different namedentity types. We use colors to denote performance and the text to show data density in each category."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Different models' F1 score over various level of mention similarities based on BERT embedding."
},
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": ""
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Performance of different models."
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "http://www.accesshollywood.com/lindsaylohan-leaves-betty-ford-checks-into-maliburehab article 80744 [EOS] Lindsay Lohan Leaves Betty Ford, Checks Into Malibu Rehab First Published: June 13, 2013 4: 59 PM EDT [EOS] Lindsay Lohan has left the Betty Ford Center and is moving to a rehab facility in Malibu, Calif., Access Hollywood has confirmed. [EOS]"
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Acc Prec Recall</td><td>F1 AUC</td></tr><tr><td colspan=\"2\">simple 0.61 0.08</td><td colspan=\"2\">0.36 0.13 0.50</td></tr><tr><td>WSC</td><td>0.08 0.08</td><td colspan=\"2\">1.00 0.15 0.50</td></tr><tr><td>ecb+</td><td>0.54 0.08</td><td colspan=\"2\">0.48 0.14 0.51</td></tr></table>",
"num": null,
"text": ": n-shot performance from the text generative models"
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Average results from each dataset that is used for the experiments"
}
}
}
}