|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:43:34.775485Z" |
|
}, |
|
"title": "Posthoc Verification and the Fallibility of the Ground Truth", |
|
"authors": [ |
|
{ |
|
"first": "Yifan", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Notre Dame Notre Dame", |
|
"location": { |
|
"region": "IN", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "yding4@nd.edu" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Botzer", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Notre Dame Notre Dame", |
|
"location": { |
|
"region": "IN", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "nbotzer@nd.edu" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Weninger", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Notre Dame Notre Dame", |
|
"location": { |
|
"region": "IN", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "tweninge@nd.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Classifiers commonly make use of preannotated datasets, wherein a model is evaluated by pre-defined metrics on a held-out test set typically made of human-annotated labels. Metrics used in these evaluations are tied to the availability of well-defined ground truth labels, and these metrics typically do not allow for inexact matches. These noisy ground truth labels and strict evaluation metrics may compromise the validity and realism of evaluation results. In the present work, we conduct a systematic label verification experiment on the entity linking (EL) task. Specifically, we ask annotators to verify the correctness of annotations after the fact (i.e., posthoc). Compared to pre-annotation evaluation, state-of-the-art EL models performed extremely well according to the posthoc evaluation methodology. Surprisingly, we find predictions from EL models had a similar or higher verification rate than the ground truth. We conclude with a discussion on these findings and recommendations for future evaluations. The source code, raw results, and evaluation scripts are publicly available via the MIT license at https://github. com/yifding/e2e_EL_evaluate", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Classifiers commonly make use of preannotated datasets, wherein a model is evaluated by pre-defined metrics on a held-out test set typically made of human-annotated labels. Metrics used in these evaluations are tied to the availability of well-defined ground truth labels, and these metrics typically do not allow for inexact matches. These noisy ground truth labels and strict evaluation metrics may compromise the validity and realism of evaluation results. In the present work, we conduct a systematic label verification experiment on the entity linking (EL) task. Specifically, we ask annotators to verify the correctness of annotations after the fact (i.e., posthoc). Compared to pre-annotation evaluation, state-of-the-art EL models performed extremely well according to the posthoc evaluation methodology. Surprisingly, we find predictions from EL models had a similar or higher verification rate than the ground truth. We conclude with a discussion on these findings and recommendations for future evaluations. The source code, raw results, and evaluation scripts are publicly available via the MIT license at https://github. com/yifding/e2e_EL_evaluate", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The general machine learning pipeline starts with a dataset (a collection of documents, images, medical records, etc.). When labels are not inherent to the data, they must be annotated -usually by humans. A label error occurs when an annotator provides a label that is \"incorrect.\" But this raises an interesting question: who gets to decide that some annotation is incorrect?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One solution is to ask k annotators and combine their labels somehow (e.g., majority vote, probability distribution). Subjectivity comes into play here. Given identical instructions and identical items, some annotators may focus on different attributes of the item or have a different interpretation of the labeling criteria. Understanding and modelling label uncertainty remains a compelling challenge in evaluating machine learning systems (Sommerauer, Fokkens, and Vossen, 2020; Resnick et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 442, |
|
"end": 481, |
|
"text": "(Sommerauer, Fokkens, and Vossen, 2020;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 503, |
|
"text": "Resnick et al., 2021)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tasks that require free-form, soft, or multi-class annotations present another dimension to this challenge. For example, natural language processing tasks like named entity recognition (NER) and entity linking (EL) rely heavily on datasets comprised of free-form human annotations. These tasks are typically evaluated against a held out portion of the already-annotated dataset. A problem arises when NER and EL tasks produce labels that are not easily verified as \"close enough\" to the correct groundtruth (Ribeiro et al., 2020) . Instead, like the example in Fig. 1 , most NER and EL evaluation metrics require exact matches against freeform annotations (Sevgili et al., 2020; Goel et al., 2021) . This strict evaluation methodology may unreasonably count labels that are \"close enough\" as incorrect and is known to dramatically change performance metrics (Gashteovski et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 507, |
|
"end": 529, |
|
"text": "(Ribeiro et al., 2020)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 656, |
|
"end": 678, |
|
"text": "(Sevgili et al., 2020;", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 679, |
|
"end": 697, |
|
"text": "Goel et al., 2021)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 858, |
|
"end": 884, |
|
"text": "(Gashteovski et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 561, |
|
"end": 567, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Producing a verifiable answer is not the same as producing the correct answer. This distinction is critical. Asking a machine learning system to independently provide the same label as an annotator is a wildly different task than asking an annotator to verify the output of a predictor (posthoc verification). Unfortunately the prevailing test and evaluation regime requires predictors to exactly match noisy, free-form, and subjective human annotations. This paradigm represents a mismatch that, if left unaddressed, threatens to undermine future progress in machine learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Main Contributions. We show that the distinction between pre-annotated and posthoc-annotated labels is substantial and the distinction presents consequences for how we determine the state-ofthe-art in machine learning systems. We conducted systematic experiments using posthoc analysis on a large case study of eight popular entity linking datasets with two state-of-the-art entity linking models, and report some surprising findings: First, state-of-the-art EL models generally predicted labels with higher verification rate than the ground truth labels. Second, there was substantial disagreement among annotators as to what constitutes a label that is \"good enough\" to be verified. Third, a large proportion (between 10%-70% depending on the dataset) of verified entities were missing from the ground truth dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The goal of EL is to identify words or phrases that represent real-world entities and match each identified phrase to a listing in some knowledge base. Like most classification systems, EL models are typically trained and tested on large pre-annotated benchmark datasets. Table 1 describes eight such benchmark datasets that are widely used throughout the EL and broader NLP communities.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 279, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
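To make the setting concrete, the sketch below shows one plausible way to represent a single entity-linking annotation as a record of a document, a mention span, and a linked Wikipedia title; the class and field names are illustrative assumptions rather than the paper's released data format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ELAnnotation:
    """One entity-linking annotation: a mention span in a document plus
    the knowledge-base (Wikipedia) entry that the span is linked to."""
    doc_id: str   # identifier of the document containing the mention
    start: int    # character offset where the mention begins
    end: int      # character offset where the mention ends (exclusive)
    entity: str   # linked Wikipedia title, e.g. "University_of_Notre_Dame"

# Ground-truth labels and model predictions are then simply sets of such
# records, which is what makes strict exact-match evaluation easy to state.
gold = {ELAnnotation("doc-1", 0, 24, "University_of_Notre_Dame")}
pred = {ELAnnotation("doc-1", 0, 24, "University_of_Notre_Dame")}
```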
|
{ |
|
"text": "EL Models. In order to better understand the effect of pre-annotated benchmarks on machine learning systems, it is necessary to test a handful of state-of-the-art EL systems. Specifically, we chose: (1) The end-to-end (E2E) entity linking model, which generates and selects span candidates with associated entity labels. The E2E model is a word-level model that utilizes word and entity embeddings to compute span-level contextual scores. Word and entity embeddings are trained on Wikipedia, and the final model is trained and validated using AIDA-train and AIDA-A respectively (Kolitsas, Ganea, and Hofmann, 2018) . 2The Radboud Entity Linker (REL), which combines the Flair (Akbik, Blythe, and Vollgraf, 2018) NER system with the mulrel-nel (Le and Titov, 2018) entity disambiguation system to create a holistic EL pipeline (van Hulst et al., 2020) . In addition, our methodology permits the evaluation of the GT as if it were a competing model. The relative performance of E2E and REL can then compared with the GT to better understand the performance of the posthoc annotations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 578, |
|
"end": 614, |
|
"text": "(Kolitsas, Ganea, and Hofmann, 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 676, |
|
"end": 711, |
|
"text": "(Akbik, Blythe, and Vollgraf, 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 826, |
|
"end": 850, |
|
"text": "(van Hulst et al., 2020)", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Data collection. We have previously argued that these evaluation metrics may not faithfully simulate in vivo performance because (1) the ground truth annotations are noisy and subjective, and (2) exact matching is too strict. We test this argument by collecting posthoc verifications of the three models, including the pre-annotated GT, over the datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We created a simple verification system, illustrated in Fig. 2 , and used Amazon Mechanical Turk to solicit workers. For each document and model, we asked a single worker to verify all present entity annotations (i.e., an entity mention and its linked entity). Annotators can then choose to (1) Verify the annotation (2) Modify the annotation, or (3) Remove the annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 62, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "0 0.2 0.4 0.6 0.8 1 Precision Pre-Annotation E2E REL Posthoc Verification E2E REL GT A ID A -t ra in A ID A -A A ID A -B A C E 2 0 0 4 A Q U A IN T C L U E W E B M S N B C W IK IP E D IA 0 0.2 0.4 0.6 0.8 1 Recall A ID A -t ra in A ID A -A A ID A -B A C E 2 0 0 4 A Q U A IN T C L U E W E B M S N B C W IK IP E D IA", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Verify: The annotator determines that the current annotation (both mention and Wikipedia link) is appropriate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Modify: The annotator determines that the Wikipedia link is incorrect. In this case, they are asked to search and select a more appropriate Wikipedia link, use it to replace the existing link, and then accept the new annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Remove: The annotator determines that the current mention (highlighted text) is not a linkable entity. In this case, they remove the link from the mention.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We made a deliberate decision to not permit new annotation of missing entity mentions. That is, if the model did not label an entity, then there is no opportunity for the worker to add a new label. This design decision kept the worker focused on the verification task, but possibly limits the coverage of the verified dataset. We provide further comments on this decision in the Results section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each annotator is assigned to 20 tasks including one control task with three control annotations. We only accept and collect annotations from workers that passed the control task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We paid each worker 3 USD for each HIT. We estimate a average hourly rate of about 9 USD; and paid a total of 6,520 USD. From these, we received 167,432 annotations. The breakdown of tasks, annotations shown to workers, and verified annotations are listed in Table 1 for each dataset and model. Prior to launch, this experiment was reviewed and approved by an impaneled ethics re-view board at the University of Notre Dame. The source code, raw results, and evaluation scripts are publicly available via the MIT license at https://github.com/yifding/ e2e_EL_evaluate", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 266, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Setting: Entity Linking", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Pre-Annotation Evaluation Regime. First, we re-tested the E2E and REL models and evaluated their micro precision and recall under the typical pre-annotation evaluation regime. These results are illustrated in Fig 3 and are nearly identical to those reported by related works (Kolitsas, Ganea, and Hofmann, 2018; van Hulst et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 315, |
|
"text": "(Kolitsas, Ganea, and Hofmann, 2018;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 339, |
|
"text": "van Hulst et al., 2020)", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 222, |
|
"text": "Fig 3 and", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Methodology", |
|
"sec_num": null |
|
}, |
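For reference, the strict pre-annotation regime reduces to exact set matching over (document, span, entity) records; the following is a minimal sketch of micro precision and recall under that assumption, not the official scorer used by the E2E or REL authors.

```python
def micro_precision_recall(pred, gold):
    """Exact-match micro precision and recall over annotation sets.

    `pred` and `gold` are sets of hashable annotation records such as
    (doc_id, start, end, entity); a prediction counts as correct only
    if every field matches a ground-truth record exactly.
    """
    true_positives = len(pred & gold)
    precision = true_positives / len(pred) if pred else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall
```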
|
{ |
|
"text": "Our next task is to define appropriate evaluation metrics that can be used to compare the results of the posthoc verification experiment with results from the pre-annotation evaluation regime. Verification Union. It is important to note that each model and document was evaluated by only a single worker. However, we were careful to assign each worker annotations randomly drawn from model/document combinations. This randomiza-tion largely eliminates biases in favor or against any model or dataset. Furthermore, this methodology provides for repetitions when annotations match exactly across models -which is what models are optimized for in the first place! In this scenario the union of all non-exact, non-overlapping annotations provides a superset of annotations similar to how pooling is used in information retrieval evaluation to create a robust result set (Zobel, 1998) . Formally, we define the verification union of a dataset", |
|
"cite_spans": [ |
|
{ |
|
"start": 866, |
|
"end": 879, |
|
"text": "(Zobel, 1998)", |
|
"ref_id": "BIBREF52" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "d as V d = m V m,d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ". Posthoc Precision and Recall. The precision metric is defined as the ratio of true predictions to all predictions. If we recast the concept of true predictions to be the set of verified annotations V m,d , then it is natural to further consider N d,m to be the set of all predictions for some dataset and model pair, especially considering our data collection methodology restricts", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "V m,d \u2286 N d,m .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Thus the posthoc precision of a model-data pairing is simply the verification rate r m,d .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The recall metric is defined as the ratio of true predictions to all true labels. If we keep the recasting of true positives as verified annotations V m,d , then all that remains a definition of true labels. Like in most evaluation regimes the set of all true labels is estimated by the available labels in the dataset. Here, we do the same and estimate the set of true labels as the union of a dataset's verified annotations", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "V d . Thus posthoc recall of a model-data pairing is |V m,d |/|V d |.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Evaluation", |
|
"sec_num": null |
|
}, |
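Putting the definitions above together, a minimal sketch of the posthoc metrics might look as follows; here V[m][d] is assumed to hold the verified annotations of model m on dataset d, N[m][d] the annotations shown to workers, and both the container layout and the names are ours, not the paper's.

```python
def posthoc_precision(V, N, m, d):
    """Posthoc precision = verification rate r_{m,d} = |V_{m,d}| / |N_{m,d}|."""
    shown = N[m][d]
    return len(V[m][d]) / len(shown) if shown else 0.0

def posthoc_recall(V, m, d):
    """Posthoc recall = |V_{m,d}| / |V_d|, where V_d is the union of verified
    annotations over all models (the GT is treated as just another model)."""
    union_d = set().union(*(V[model][d] for model in V))  # the verification union V_d
    return len(V[m][d]) / len(union_d) if union_d else 0.0
```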
|
{ |
|
"text": "Using the evaluation tools introduced in the previous section, we begin to answer interesting research questions. First, do the differences between evaluation regimes, i.e., pre-annotation versus posthoc verification, have any affect on our perception of model performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To shed some light on this question, we compared the precision and recall metrics calculated using the pre-annotation evaluation regime against the precision and recall metrics calculated using the posthoc verification regime. The left quadplot in Fig. 3 compares model performance under the different evaluation regimes. Error bars represent the empirical 95% confidence internals drawn from 1000 bootstrap samples of the data. We make two major conclusions from this comparison:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 254, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Results", |
|
"sec_num": null |
|
}, |
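The error bars mentioned above can be reproduced with a standard nonparametric bootstrap; the sketch below resamples per-document scores with replacement and reports the empirical 2.5th and 97.5th percentiles of the resampled means, under the assumption that the metric of interest can be expressed as a mean over documents.

```python
import random

def bootstrap_ci(per_doc_scores, n_boot=1000, alpha=0.05, seed=0):
    """Empirical (1 - alpha) confidence interval via bootstrap resampling."""
    rng = random.Random(seed)
    n = len(per_doc_scores)
    means = []
    for _ in range(n_boot):
        resample = [per_doc_scores[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    means.sort()
    lower = means[int((alpha / 2) * n_boot)]
    upper = means[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper
```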
|
{ |
|
"text": "Pre-annotation performance is lower than Posthoc verification. The differences between the Figure 4 : Detailed error analysis of verification rates in Fig. 3(top right) . The E2E model consistently outperforms the ground truth (GT).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 99, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 168, |
|
"text": "Fig. 3(top right)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "scores of the pre-annotation compared to posthoc verification are striking. Posthoc annotation shows very good precision scores across all datasets. Although the models may not exactly predict the preannotated label, high posthoc precision indicates that their results appear to be \"close-enough\" to obtain human verification. Conclusion: the widely-used exact matching evaluation regime is too strict. Despite its intention, the pre-annotation evaluation regime does not appear to faithfully simulate a human use case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Machine Learning models outperform the Ground Truth. The posthoc verification methodology permits the GT annotations to be treated like any other model, and are therefore included in Fig. 3 (right plot) . These results were unexpected and surprising. We found that labels produced by the EL models oftentimes had a higher verification rate than the pre-annotated ground truth. The recall metric also showed that the EL models were also able to identify more verified labels than GT. Conclusion: Higher precision performance of the EL models indicates that human annotators make more unverifiable annotations than the EL models. Higher recall performance of the EL models also indicates that the EL models find a greater coverage of possible entities. The recall results are less surprising because human annotators may be unmotivated or inattentive during free-form annotationqualities that tend to not affect EL models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 202, |
|
"text": "Fig. 3 (right plot)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Posthoc Verification Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each linked entity, the posthoc verification methodology permitted one of three outcomes: verification, modification, or removal. The plot in Fig. 4 shows the percentage of each outcome for each model and dataset pair; it is essentially a zoomed-in, more-detailed illustration of the Posthoc Verification Precision result panel from Fig. 3 , but with colors representing outcomes and patterns representing models. Edits indicate that the named entity recognition (i.e., mention detection) portion of the EL model was able to identify an entity, but the entity was not linked to a verifiable entity. The available dataset has an enumeration of corrected linkages, but we do not consider them further in the present work. Removal indicates an error with the mention detection. From these results we find that, when a entity mention is detected it is usually a good detection; the majority of the error comes from the linking subtask. A similar error analysis of missing entities is not permitted from the data collection methodology because we only ask workers to verify pre-annotated or predicted entities, not add missing entities. Because all detected mentions are provided with some entity link, we can safely assume that missing entities is mostly (perhaps wholly) due to errors in the mention detection portion of EL models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 152, |
|
"text": "Fig. 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 343, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis of the Ground Truth", |
|
"sec_num": null |
|
}, |
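The per-outcome percentages plotted in Fig. 4 can be tallied directly from the raw worker responses; the sketch below computes the fraction of shown annotations that were verified, modified, or removed for one model-dataset pair, with the outcome strings being an assumed encoding rather than the released schema.

```python
from collections import Counter

def outcome_breakdown(responses):
    """Fraction of shown annotations that were verified, modified, or removed.

    `responses` is an iterable with one outcome string per annotation shown
    to a worker: "verify", "modify", or "remove".
    """
    counts = Counter(responses)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {outcome: counts[outcome] / total
            for outcome in ("verify", "modify", "remove")}

# Example: 8 verified, 1 modified, 1 removed
print(outcome_breakdown(["verify"] * 8 + ["modify", "remove"]))
# -> {'verify': 0.8, 'modify': 0.1, 'remove': 0.1}
```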
|
{ |
|
"text": "The primary goal of the present work is to compare pre-annotation labels contributed by human workers against verified annotations of the same data. Using entity linking as an example task, we ultimately found that these two methodologies returned vastly different performance results. From this observation we can draw several important conclusions. First, EL models have a much higher precision than related work reports. This difference is because the standard evaluation methodology used in EL, and throughout ML generally, do not account for soft matches or the semantics of what constitutes a label that is \"close enough\". Our second conclusion is that EL models, and perhaps ML models generally, sometimes perform better than ground truth annotators -at least, that is, according to other ground truth annotators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research is sponsored in part by the Defense Advanced Research Projects Agency (DAPRA) under contract numbers HR00111990114 and HR001121C0168. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Gov-ernment is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Split and Rephrase: Better Evaluation and Stronger Baselines", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "719--724", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aharoni, R.; and Goldberg, Y. 2018. Split and Rephrase: Better Evaluation and Stronger Baselines. In ACL, 719-724.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Contextual string embeddings for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1638--1649", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akbik, A.; Blythe, D.; and Vollgraf, R. 2018. Con- textual string embeddings for sequence labeling. In COLING, 1638-1649.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Synthetic and Natural Noise Both Break Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Belinkov, Y.; and Bisk, Y. 2018. Synthetic and Natural Noise Both Break Neural Machine Translation. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Reddit entity linking dataset", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Botzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Weninger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Information Processing & Management", |
|
"volume": "58", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Botzer, N.; Ding, Y.; and Weninger, T. 2021. Red- dit entity linking dataset. Information Processing & Management, 58(3): 102479.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A large annotated corpus for learning natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.05326" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bowman, S. R.; Angeli, G.; Potts, C.; and Man- ning, C. D. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "What Will it Take to Fix Benchmarking in Natural Language Understanding? arXiv preprint", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Dahl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2104.02145" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bowman, S. R.; and Dahl, G. E. 2021. What Will it Take to Fix Benchmarking in Natural Language Un- derstanding? arXiv preprint arXiv:2104.02145.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Set-valued classification-overview via a unified framework", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Chzhen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hebiri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Lorieul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2102.12318" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chzhen, E.; Denis, C.; Hebiri, M.; and Lorieul, T. 2021. Set-valued classification-overview via a uni- fied framework. arXiv preprint arXiv:2102.12318.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Confidence sets with expected sizes for multiclass classification", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hebiri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "JMLR", |
|
"volume": "18", |
|
"issue": "1", |
|
"pages": "3571--3598", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Denis, C.; and Hebiri, M. 2017. Confidence sets with expected sizes for multiclass classification. JMLR, 18(1): 3571-3598.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-W", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirec- tional Transformers for Language Understanding. In NAACL HLT, 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Deep Joint Entity Disambiguation with Local Neural Attention", |
|
"authors": [ |
|
{ |
|
"first": "O.-E", |
|
"middle": [], |
|
"last": "Ganea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2619--2629", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ganea, O.-E.; and Hofmann, T. 2017. Deep Joint En- tity Disambiguation with Local Neural Attention. In EMNLP, 2619-2629.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "On Aligning OpenIE Extractions with Knowledge Bases: A Case Study", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Gashteovski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Gemulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Kotnis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hertling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Meilicke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--154", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gashteovski, K.; Gemulla, R.; Kotnis, B.; Hertling, S.; and Meilicke, C. 2020. On Aligning OpenIE Extractions with Knowledge Bases: A Case Study. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, 143-154.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Geva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1161--1166", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geva, M.; Goldberg, Y.; and Berant, J. 2019. Are We Modeling the Task or the Annotator? An Investiga- tion of Annotator Bias in Natural Language Under- standing Datasets. In EMNLP, 1161-1166.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Breaking NLI Systems with Sentences that Require Simple Lexical Inferences", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Glockner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "650--655", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Glockner, M.; Shwartz, V.; and Goldberg, Y. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In ACL, 650-655.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Robustness Gym: Unifying the NLP Evaluation Landscape", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Rajani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "R\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2101.04840" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goel, K.; Rajani, N.; Vig, J.; Tan, S.; Wu, J.; Zheng, S.; Xiong, C.; Bansal, M.; and R\u00e9, C. 2021. Robust- ness Gym: Unifying the NLP Evaluation Landscape. arXiv preprint arXiv:2101.04840.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Continuous measurement scales in human evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Moffat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zobel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graham, Y.; Baldwin, T.; Moffat, A.; and Zobel, J. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, 33-41.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Is machine translation getting better over time", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Moffat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zobel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "443--451", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graham, Y.; Baldwin, T.; Moffat, A.; and Zobel, J. 2014. Is machine translation getting better over time? In EACL, 443-451.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Robust entity linking via random walks", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Barbosa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "CIKM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "499--508", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guo, Z.; and Barbosa, D. 2014. Robust entity linking via random walks. In CIKM, 499-508.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Annotation artifacts in natural language inference data", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gururangan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.02324" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gururangan, S.; Swayamdipta, S.; Levy, O.; Schwartz, R.; Bowman, S. R.; and Smith, N. A. 2018. An- notation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Robust disambiguation of named entities in text", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Yosef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Bordino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "F\u00fcrstenau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Spaniol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Taneva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Thater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "782--792", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hoffart, J.; Yosef, M. A.; Bordino, I.; F\u00fcrstenau, H.; Pinkal, M.; Spaniol, M.; Taneva, B.; Thater, S.; and Weikum, G. 2011. Robust disambiguation of named entities in text. In EMNLP, 782-792.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The data linter: Lightweight, automated sanity checking for ml data sets", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Hynes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sculley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Terry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "NIPS MLSys Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hynes, N.; Sculley, D.; and Terry, M. 2017. The data linter: Lightweight, automated sanity checking for ml data sets. In NIPS MLSys Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Dynabench: Rethinking Benchmarking in NLP", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Bartolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kaushik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Geiger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Vidgen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Ringshia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2104.14337" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kiela, D.; Bartolo, M.; Nie, Y.; Kaushik, D.; Geiger, A.; Wu, Z.; Vidgen, B.; Prasad, G.; Singh, A.; Ring- shia, P.; et al. 2021. Dynabench: Rethinking Bench- marking in NLP. arXiv preprint arXiv:2104.14337.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "End-to-end neural entity linking", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Kolitsas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O.-E", |
|
"middle": [], |
|
"last": "Ganea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kolitsas, N.; Ganea, O.-E.; and Hofmann, T. 2018. End-to-end neural entity linking. CoNLL.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Improving Entity Linking by Modeling Latent Relations between Mentions", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1595--1604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Le, P.; and Titov, I. 2018. Improving Entity Linking by Modeling Latent Relations between Mentions. In ACL, 1595-1604.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Improving distributional similarity with lessons learned from word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "211--225", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Levy, O.; Goldberg, Y.; and Dagan, I. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3: 211-225.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Rouge: A package for automatic evaluation of summaries", |
|
"authors": [ |
|
{ |
|
"first": "C.-Y", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Text summarization branches out", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, C.-Y. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, 74-81.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A Unified Approach to Interpreting Model Predictions", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Lundberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S.-I", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "NeurIPS", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "4765--4774", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lundberg, S. M.; and Lee, S.-I. 2017. A Unified Ap- proach to Interpreting Model Predictions. NeurIPS, 30: 4765-4774.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maas, A.; Daly, R. E.; Pham, P. T.; Huang, D.; Ng, A. Y.; and Potts, C. 2011. Learning word vectors for sentiment analysis. In ACL, 142-150.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Pervasive label errors in test sets destabilize machine learning benchmarks", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Northcutt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Athalye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mueller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2103.14749" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Northcutt, C. G.; Athalye, A.; and Mueller, J. 2021. Pervasive label errors in test sets destabi- lize machine learning benchmarks. arXiv preprint arXiv:2103.14749.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Interrater disagreement resolution: A systematic procedure to reach consensus in annotation tasks", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Oortwijn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ossenkoppele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Betti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--141", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oortwijn, Y.; Ossenkoppele, T.; and Betti, A. 2021. In- terrater disagreement resolution: A systematic pro- cedure to reach consensus in annotation tasks. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), 131-141.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W.-J", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Choosing what I want versus rejecting what I do not want: An application of decision framing to product option choice decisions", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Jun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Macinnis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of Marketing Research", |
|
"volume": "37", |
|
"issue": "2", |
|
"pages": "187--202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Park, C. W.; Jun, S. Y.; and MacInnis, D. J. 2000. Choosing what I want versus rejecting what I do not want: An application of decision framing to prod- uct option choice decisions. Journal of Marketing Research, 37(2): 187-202.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Comparing bayesian models of annotation", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Paun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Carpenter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chamberlain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "TACL", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "571--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paun, S.; Carpenter, B.; Chamberlain, J.; Hovy, D.; Kruschwitz, U.; and Poesio, M. 2018. Comparing bayesian models of annotation. TACL, 6: 571-585.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Hypothesis Only Baselines in Natural Language Inference. NAACL HLT", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Poliak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Naradowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Haldar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rudinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Poliak, A.; Naradowsky, J.; Haldar, A.; Rudinger, R.; and Van Durme, B. 2018. Hypothesis Only Base- lines in Natural Language Inference. NAACL HLT, 180.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Perturbation Sensitivity Analysis to Detect Unintended Model Biases", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Prabhakaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Hutchinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5744--5749", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prabhakaran, V.; Hutchinson, B.; and Mitchell, M. 2019. Perturbation Sensitivity Analysis to Detect Unintended Model Biases. In EMNLP, 5744-5749.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Language models are unsupervised multitask learners", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "OpenAI Blog", |
|
"volume": "1", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; and Sutskever, I. 2019. Language models are unsu- pervised multitask learners. OpenAI Blog, 1(8): 9.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Know What You Don't Know: Unanswerable Questions for SQuAD", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "784--789", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rajpurkar, P.; Jia, R.; and Liang, P. 2018. Know What You Don't Know: Unanswerable Questions for SQuAD. In ACL, 784-789.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Squad: 100,000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.05250" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. Squad: 100,000+ questions for ma- chine comprehension of text. arXiv preprint arXiv:1606.05250.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Survey Equivalence: A Procedure for Measuring Classifier Accuracy Against Human Labels", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Resnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Schoenebeck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Weninger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2106.01254" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Resnick, P.; Kong, Y.; Schoenebeck, G.; and Weninger, T. 2021. Survey Equivalence: A Procedure for Mea- suring Classifier Accuracy Against Human Labels. arXiv preprint arXiv:2106.01254.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Are red roses red? evaluating consistency of questionanswering models", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6174--6184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ribeiro, M. T.; Guestrin, C.; and Singh, S. 2019. Are red roses red? evaluating consistency of question- answering models. In ACL, 6174-6184.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Beyond Accuracy: Behavioral Testing of NLP Models with CheckList", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4902--4912", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ribeiro, M. T.; Wu, T.; Guestrin, C.; and Singh, S. 2020. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. In ACL, 4902-4912.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Mctest: A challenge dataset for the open-domain machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Burges", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Renshaw", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "193--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richardson, M.; Burges, C. J.; and Renshaw, E. 2013. Mctest: A challenge dataset for the open-domain ma- chine comprehension of text. In EMNLP, 193-203.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Deep learning is robust to massive label noise", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Rolnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Veit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Belongie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Shavit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1705.10694" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rolnick, D.; Veit, A.; Belongie, S.; and Shavit, N. 2017. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Fine-grained evaluation for entity linking", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Rosales-M\u00e9ndez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hogan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Poblete", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "718--727", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rosales-M\u00e9ndez, H.; Hogan, A.; and Poblete, B. 2019a. Fine-grained evaluation for entity linking. In EMNLP-IJCNLP, 718-727.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "NIFify: Towards Better Quality Entity Linking Datasets", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Rosales-M\u00e9ndez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hogan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Poblete", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "WWW 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "815--818", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rosales-M\u00e9ndez, H.; Hogan, A.; and Poblete, B. 2019b. NIFify: Towards Better Quality Entity Link- ing Datasets. In WWW 2019, 815-818.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Everyone wants to do the model work, not the data work", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Sambasivan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kapania", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Highfill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Akrong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paritosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Aroyo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Data Cascades in High-Stakes AI. In CHI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sambasivan, N.; Kapania, S.; Highfill, H.; Akrong, D.; Paritosh, P.; and Aroyo, L. M. 2021. \"Everyone wants to do the model work, not the data work\": Data Cascades in High-Stakes AI. In CHI, 1-15.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Story cloze task: Uw nlp system", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Konstas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zilles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schwartz, R.; Sap, M.; Konstas, I.; Zilles, L.; Choi, Y.; and Smith, N. A. 2017. Story cloze task: Uw nlp system. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, 52-55.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Neural entity linking: A survey of models based on deep learning", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Sevgili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Shelmanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Arkhipov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Panchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2006.00575" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sevgili, O.; Shelmanov, A.; Arkhipov, M.; Panchenko, A.; and Biemann, C. 2020. Neural entity linking: A survey of models based on deep learning. arXiv preprint arXiv:2006.00575.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Would you describe a leopard as yellow? Evaluating crowd-annotations with justified and informative disagreement", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Sommerauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Fokkens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Vossen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4798--4809", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sommerauer, P.; Fokkens, A.; and Vossen, P. 2020. Would you describe a leopard as yellow? Evaluat- ing crowd-annotations with justified and informative disagreement. In COLING, 4798-4809.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Learning from noisy labels with deep neural networks: A survey", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J.-G", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.08199" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Song, H.; Kim, M.; Park, D.; Shin, Y.; and Lee, J.-G. 2020. Learning from noisy labels with deep neural networks: A survey. arXiv preprint arXiv:2007.08199.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Newsqa: A machine comprehension dataset", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Trischler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Bachman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Suleman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.09830" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trischler, A.; Wang, T.; Yuan, X.; Harris, J.; Sordoni, A.; Bachman, P.; and Suleman, K. 2016. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Performance Impact Caused by Hidden Bias of Training Data for Recognizing Textual Entailment", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tsuchiya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsuchiya, M. 2018. Performance Impact Caused by Hidden Bias of Training Data for Recognizing Tex- tual Entailment. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "REL: An entity linker standing on the shoulders of giants", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Van Hulst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Hasibi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Dercksen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Balog", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Vries", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "SIGIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2197--2200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "van Hulst, J. M.; Hasibi, F.; Dercksen, K.; Balog, K.; and de Vries, A. P. 2020. REL: An entity linker standing on the shoulders of giants. In SIGIR, 2197- 2200.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "How reliable are the results of largescale information retrieval experiments", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zobel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "SIGIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "307--314", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zobel, J. 1998. How reliable are the results of large- scale information retrieval experiments? In SIGIR, 307-314.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Example Entity Linking task where the preannotated ground truth mention and link is different from the predicted label. Standard evaluation regimes count this as a completely incorrect prediction despite being a reasonable label.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Web system used to collect posthoc annotations from workers.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Precision and recall results from pre-annotation evaluation (Left) compared with the posthoc verification evaluation (Right). Error bars represent 95% confidence intervals on bootstrapped samples of the data. Posthoc verification returns substantially higher scores than the pre-annotation evaluation.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "Verification Rate. For each combination of dataset and model providing annotations, we compute the verification rate as the percentage of annotations that were verified.Formally, let d \u2208 datasets; m \u2208 models; and V m,d be the set of verified annotations in a pairing of d and m Likewise, let N d,m be the pre-annotations of model m on dataset d. We therefore define the verification rate of a datasetmodel pair as r m,d = |V m,d |/|N d,m |. Higher verification rates indicate that the dataset contains annotations and/or the model is more capable of providing labels that pass human inspection.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "Statistics of the entity linking datasets and annotations.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td>Datasets</td><td>Docs</td><td>GT</td><td colspan=\"2\">Annotations E2E REL</td><td>GT</td><td>Tasks E2E REL GT Verified Annotations E2E REL</td></tr><tr><td>AIDA</td><td>AIDA-train AIDA-A AIDA-B</td><td>946 216 231</td><td colspan=\"4\">18541* 18301 21204 2801 2802 2913 18511 18274 21172 4791 4758 5443 713 715 725 4787 4754 5439 4485 4375 5086 636 646 654 4480 4370 5079</td></tr><tr><td>WNED</td><td colspan=\"4\">ACE2004 AQUAINT CLUEWEB MSNBC WIKIPEDIA 345* 6793* 57* 257 50 727 320 11154 20 656</td><td colspan=\"2\">1355 810 12273 23114 3526 3678 4944 11139 12247 23056 1675 114 318 334 256 1352 1672 925 175 170 179 727 810 925 629 756 164 163 171 656 629 756 8141 11184 1348 1578 1638 6786 8136 11177</td></tr><tr><td/><td colspan=\"6\">* indicate results different from related work because they remove out-of-dictionary annotations.</td></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |
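
The verification-rate definition quoted in FIGREF3 maps directly onto code. Below is a minimal Python sketch, assuming hypothetical `pre_annotations` and `verified` mappings keyed by (model, dataset) pairs; it illustrates the formula r_{m,d} = |V_{m,d}| / |N_{m,d}| and is not taken from the paper's released evaluation scripts.

# Minimal sketch (hypothetical helper, not from the paper's repository) of the
# verification-rate formula in FIGREF3: r_{m,d} = |V_{m,d}| / |N_{m,d}|.

from typing import Dict, Set, Tuple

Pair = Tuple[str, str]  # (model, dataset)

def verification_rates(
    pre_annotations: Dict[Pair, Set[str]],  # N_{m,d}: all annotations of model m on dataset d
    verified: Dict[Pair, Set[str]],         # V_{m,d}: the subset verified posthoc by annotators
) -> Dict[Pair, float]:
    """Return r_{m,d} = |V_{m,d}| / |N_{m,d}| for every (model, dataset) pair."""
    rates: Dict[Pair, float] = {}
    for pair, all_anns in pre_annotations.items():
        if all_anns:  # skip empty pairs to avoid division by zero
            rates[pair] = len(verified.get(pair, set())) / len(all_anns)
    return rates

# Toy usage:
# N = {("REL", "MSNBC"): {"a1", "a2", "a3", "a4"}}
# V = {("REL", "MSNBC"): {"a1", "a2", "a3"}}
# verification_rates(N, V)  # -> {("REL", "MSNBC"): 0.75}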