{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:29:34.458321Z"
},
"title": "Data Cleaning Tools for Token Classification Tasks",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Muthuraman",
"suffix": "",
"affiliation": {},
"email": "karthik.muthuraman@ibm.com"
},
{
"first": "Frederick",
"middle": [],
"last": "Reiss",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research -Almaden",
"location": {
"postCode": "95120",
"settlement": "San Jose",
"region": "CA",
"country": "USA"
}
},
"email": "frreiss@us.ibm.com"
},
{
"first": "Hong",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {},
"email": "hongx@ibm.com"
},
{
"first": "Bryan",
"middle": [],
"last": "Cutler",
"suffix": "",
"affiliation": {},
"email": "bjcutler@us.ibm.com"
},
{
"first": "Zachary",
"middle": [],
"last": "Eichenberger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research -Almaden",
"location": {
"postCode": "95120",
"settlement": "San Jose",
"region": "CA",
"country": "USA"
}
},
"email": "zachary.eichen@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Human-in-the-loop systems for cleaning NLP training data rely on automated sieves to isolate potentially-incorrect labels for manual review. We have developed a novel technique for flagging potentially-incorrect labels with high sensitivity in named entity recognition corpora. We incorporated our sieve into an end-to-end system for cleaning NLP corpora, implemented as a modular collection of Jupyter notebooks built on extensions to the Pandas DataFrame library. We used this system to identify incorrect labels in the CoNLL-2003 corpus for English-language named entity recognition (NER), one of the most influential corpora for NER model research. Unlike previous work that only looked at a subset of the corpus's validation fold, our automated sieve enabled us to examine the entire corpus in depth. Across the entire CoNLL-2003 corpus, we identified over 1300 incorrect labels (out of 35089 in the corpus). We have published our corrections, along with the code we used in our experiments. We are developing a repeatable version of the process we used on the CoNLL-2003 corpus as an open-source library.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Human-in-the-loop systems for cleaning NLP training data rely on automated sieves to isolate potentially-incorrect labels for manual review. We have developed a novel technique for flagging potentially-incorrect labels with high sensitivity in named entity recognition corpora. We incorporated our sieve into an end-to-end system for cleaning NLP corpora, implemented as a modular collection of Jupyter notebooks built on extensions to the Pandas DataFrame library. We used this system to identify incorrect labels in the CoNLL-2003 corpus for English-language named entity recognition (NER), one of the most influential corpora for NER model research. Unlike previous work that only looked at a subset of the corpus's validation fold, our automated sieve enabled us to examine the entire corpus in depth. Across the entire CoNLL-2003 corpus, we identified over 1300 incorrect labels (out of 35089 in the corpus). We have published our corrections, along with the code we used in our experiments. We are developing a repeatable version of the process we used on the CoNLL-2003 corpus as an open-source library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Human-in-the-loop systems for cleaning NLP training data rely on automated sieves to isolate potentially-incorrect labels for manual review. In this work, a full version of which has been presented in (Reiss et al., 2020) , we describe how we developed a novel technique for flagging potentially-incorrect labels with high sensitivity in named entity recognition corpora.",
"cite_spans": [
{
"start": 201,
"end": 221,
"text": "(Reiss et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We implemented our sieve in the context of a set of extensions to the Pandas 1 DataFrame library. In addition to flagging errors, our extensions provide facilities for comparing NLP model results 1 https://pandas.pydata.org/ and visualizing model outputs and training data in context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because we built these facilities into the primary DataFrame library of the Python data analysis stack, we were able to construct an end-to-end system for NLP data cleaning as a series of Jupyter 2 notebooks. This design gives sophisticated users a view of the internals of the data cleaning process and allows for easy customization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our Jupyter notebooks comprises a pipeline that starts with training ensembles of models. Next, the system analyzes the outputs of the ensembles to identify potentially incorrect labels. Additional notebooks provide human annotators with a view of the suspicious labels in context. Later stages of the pipeline merge and analyze the results of manual annotation; then construct a corrected dataset and reports on the nature of the corrections. We used this system to identify errors in the CoNLL-2003 NER corpus. The English-language portion of the CoNLL-2003 shared task (Tjong Kim Sang and De Meulder, 2003) (henceforth CoNLL-2003) is one of the most widely-used benchmarks for named entity recognition (NER) models. It consists of news articles from the Reuters RCV1 corpus (Lewis et al., 2004) . Since its debut, CoNLL-2003 has played a central role in NLP research and continues to do so with more than 2300 citations. While researchers have relied heavily on the CoNLL-2003 corpus as a source of ground truth, few have paid attention to the corpus itself. Errors in the corpus could potentially mislead and even divert the course of future research.",
"cite_spans": [
{
"start": 572,
"end": 591,
"text": "(Tjong Kim Sang and",
"ref_id": "BIBREF2"
},
{
"start": 592,
"end": 633,
"text": "De Meulder, 2003) (henceforth CoNLL-2003)",
"ref_id": null
},
{
"start": 777,
"end": 797,
"text": "(Lewis et al., 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike previous analyses of this dataset that only examined small fractions of the CoNLL-2003 corpus, our work leveraged a high level of automation to analyze the entire corpus. We found over 1300 errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach builds on previous work in semisupervised labeling, with some key differences. Because we were looking for errors in a corpus that already had many high-quality labels, we needed a sieve with especially high sensitivity. We used ensembles of NER models trained on the corpus, and we focused on cases where the models agreed strongly on a particular label, but that label does not appear in the corpus. One of these ensembles was the outputs of the original 16 entries in the 2003 competition. We also trained two other 17-model ensembles ourselves by applying Gaussian random projections to the BERT embeddings space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Process",
"sec_num": "2"
},
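As a concrete illustration of the agreement-based sieve described in the paragraph above, here is a minimal plain-pandas sketch. The DataFrame layout (doc_id, begin, end, ent_type, model_id columns) and the vote threshold are assumptions made for this example; the authors' actual implementation builds on their Pandas span extensions rather than raw integer offsets.

```python
import pandas as pd

def high_agreement_missing_labels(model_outputs: pd.DataFrame,
                                  corpus_labels: pd.DataFrame,
                                  min_votes: int = 15) -> pd.DataFrame:
    """Return entity labels that many ensemble members agree on but that
    are absent from the gold-standard corpus.

    model_outputs: one row per (model_id, doc_id, begin, end, ent_type)
    corpus_labels: one row per (doc_id, begin, end, ent_type) in the corpus
    """
    keys = ["doc_id", "begin", "end", "ent_type"]

    # Count how many distinct models produced each candidate label.
    votes = (model_outputs.groupby(keys)["model_id"]
             .nunique()
             .reset_index(name="num_models"))

    # Keep candidates with strong agreement across the ensemble...
    strong = votes[votes["num_models"] >= min_votes]

    # ...and retain only those that never appear in the corpus
    # (a left anti-join on span position and entity type).
    merged = strong.merge(corpus_labels[keys], on=keys,
                          how="left", indicator=True)
    flagged = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
    return flagged.sort_values("num_models", ascending=False)
```

The complementary list (corpus labels that few ensemble members reproduce) can be built the same way by counting votes for each corpus label instead.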
{
"text": "We developed extensions to the Pandas DataFrame library that enabled us to represent spans within documents as cells within a DataFrame. This facility allowed us to use DataFrames to track the spans of the entities that each of our models produced and to aggregate together the results across models. Using these capabilities, we developed Jupyter notebooks that analyzed our ensembles' outputs to identify labels that appeared in the outputs of multiple models but were not in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Process",
"sec_num": "2"
},
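The sketch below shows the kind of per-model aggregation this enables, using plain begin/end integer columns in place of the dedicated span dtype provided by the Pandas extensions; the toy data and column names are assumptions made for the example.

```python
import pandas as pd

# Hypothetical per-model outputs: one row per entity mention, with
# character offsets into the document and a predicted entity type.
model_outputs = {
    "model_a": pd.DataFrame({"doc_id": [0, 0], "begin": [0, 24],
                             "end": [9, 31], "ent_type": ["ORG", "LOC"]}),
    "model_b": pd.DataFrame({"doc_id": [0], "begin": [0],
                             "end": [9], "ent_type": ["ORG"]}),
}

# Stack the per-model results into one DataFrame, recording which model
# produced each span so that agreement can be computed afterwards.
all_outputs = pd.concat(
    [df.assign(model_id=name) for name, df in model_outputs.items()],
    ignore_index=True)

# For every (document, span, type) combination, count how many models
# produced it -- the quantity the sieve above thresholds on.
agreement = (all_outputs
             .groupby(["doc_id", "begin", "end", "ent_type"])["model_id"]
             .nunique()
             .rename("num_models")
             .reset_index())
print(agreement)
```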
{
"text": "We used our Pandas extension types' ability to render spans to HTML to view these spans in the context of the original document from within the same Jupyter notebooks.We started with labels that had a strong agreement among models and we progressed to labels with less agreement among models, the fraction of flagged labels that was actually incorrect decreased. When this fraction dropped below 20 percent, we stopped going through the ordered list of flagged labels. We had an inter annotation agreement and audit cycle for each correction made. In total, we made 12 passes (3 ensembles \u00d7 2 sets of labels \u00d7 2 human reviewers) of manual review over the train and test folds of the corpus and 8 passes over the test fold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Process",
"sec_num": "2"
},
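The review ordering and the 20-percent stopping rule described above can be sketched as follows. The window size and the pre-filled is_incorrect column (standing in for the interactive human judgment) are assumptions made for this example.

```python
import pandas as pd

def review_until_low_yield(flagged: pd.DataFrame,
                           min_precision: float = 0.2,
                           window: int = 50) -> pd.DataFrame:
    """Walk flagged labels in order of decreasing model agreement and stop
    once the recent fraction of genuinely incorrect labels drops below
    `min_precision`.

    `flagged` is assumed to carry a `num_models` agreement column and an
    `is_incorrect` column holding the reviewer's verdict for each label.
    """
    ordered = flagged.sort_values("num_models", ascending=False)
    verdicts = []
    for _, row in ordered.iterrows():
        verdicts.append(bool(row["is_incorrect"]))
        recent = verdicts[-window:]
        # Stop once a full window's yield of true errors falls below threshold.
        if len(recent) == window and sum(recent) / window < min_precision:
            break
    return ordered.iloc[:len(verdicts)]
```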
{
"text": "When we found that a label was incorrect, we coded the type of error and the required correction so that the error could be corrected automatically later on. We divided errors into several categories as explained in detail in the full version of this paper at (Reiss et al., 2020) .",
"cite_spans": [
{
"start": 260,
"end": 280,
"text": "(Reiss et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Process",
"sec_num": "2"
},
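A minimal sketch of how coded corrections of this kind might be applied automatically is shown below. The record fields and the category names ("wrong_type", "not_an_entity") are illustrative assumptions; the actual error taxonomy is the one described in Reiss et al. (2020).

```python
import pandas as pd

# Each correction record identifies a label and how to fix it. The
# categories here ("wrong_type", "not_an_entity") are illustrative only.
corrections = pd.DataFrame([
    {"doc_id": 3, "begin": 10, "end": 17,
     "error_type": "wrong_type", "new_ent_type": "ORG"},
    {"doc_id": 7, "begin": 40, "end": 52,
     "error_type": "not_an_entity", "new_ent_type": None},
])

def apply_corrections(labels: pd.DataFrame,
                      corrections: pd.DataFrame) -> pd.DataFrame:
    """Apply coded corrections to a DataFrame of gold-standard labels."""
    keys = ["doc_id", "begin", "end"]
    merged = labels.merge(corrections, on=keys, how="left")

    # Drop labels that reviewers marked as not being entities at all.
    merged = merged[merged["error_type"] != "not_an_entity"].copy()

    # Replace the entity type wherever reviewers coded a type error.
    wrong_type = merged["error_type"] == "wrong_type"
    merged.loc[wrong_type, "ent_type"] = merged.loc[wrong_type, "new_ent_type"]

    return merged[["doc_id", "begin", "end", "ent_type"]]
```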
{
"text": "In total, we examined 3182 labels our ensembles had flagged in the three folds of the corpus. We considered any label where fewer than 7 models agreed with the corpus label to be \"flagged\". Of these labels, 1274 came from the test fold, 854 came from the dev fold, and 1054 came from the train fold; accounting for 22.6%, 14.3%, and 4.5% of their folds, respectively. Figure 1 shows the split of final errors identified by ensemble and source.",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Corrections",
"sec_num": "3"
},
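The per-fold rates quoted above can be reproduced from the standard CoNLL-2003 entity counts (23499 train, 5942 dev, and 5648 test entities, i.e. the 35089 labels mentioned in the abstract). The short check below assumes those published counts.

```python
# Reproduce the per-fold flagging rates quoted above, assuming the
# standard CoNLL-2003 entity counts for the train/dev/test folds.
fold_sizes = {"train": 23499, "dev": 5942, "test": 5648}
flagged = {"train": 1054, "dev": 854, "test": 1274}

assert sum(flagged.values()) == 3182        # total flagged labels
assert sum(fold_sizes.values()) == 35089    # total labels in the corpus

for fold in ("test", "dev", "train"):
    rate = 100.0 * flagged[fold] / fold_sizes[fold]
    print(f"{fold}: {flagged[fold]} of {fold_sizes[fold]} labels flagged ({rate:.1f}%)")
# The printed rates match the 22.6%, 14.3%, and 4.5% figures above to
# within rounding.
```

The flagging criterion itself (fewer than 7 models agreeing with a corpus label) is simply a threshold on the num_models counts computed in the earlier sketches.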
{
"text": "Manual inspection determined that 850 of these 3182 entities (27%) were incorrect. We also found 475 additional incorrect entities in close proximity to the entities that our techniques flagged, for a total of 1320 incorrect labels across the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corrections",
"sec_num": "3"
},
{
"text": "After identifying incorrect tags, spans and sentence boundaries, we created a corrected version of the original CoNLL-2003 dataset, which we refer to as the corrected CoNLL-2003 dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corrections",
"sec_num": "3"
},
{
"text": "While preparing our dataset of corrections for release, we identified additional improvements to the corrections. We have released a second version of the dataset containing these improvements plus some additional corrections pointed out by members of the open source NLP community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ongoing Work",
"sec_num": "4"
},
{
"text": "We have released the code that we used in our experiments so far 3 . To facilitate the reuse of this code on other datasets, we are developing a more refined version of this code. Key changes that we are working on are reducing the number of passes of manual review required, simplifying the creation of ensembles of models, and extending the approach from NER to other token classification tasks like semantic role labeling. We plan to release these improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ongoing Work",
"sec_num": "4"
},
{
"text": "https://jupyter.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/CODAIT/ text-extensions-for-pandas",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rcv1: A new benchmark collection for text categorization research",
"authors": [
{
"first": "David",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tony",
"middle": [
"G"
],
"last": "Rose",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Machine Learning Research",
"volume": "5",
"issue": "",
"pages": "361--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Identifying incorrect labels in the CoNLL-2003 corpus",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Reiss",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Cutler",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Muthuraman",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Eichenberger",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 24th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "215--226",
"other_ids": {
"DOI": [
"10.18653/v1/2020.conll-1.16"
]
},
"num": null,
"urls": [],
"raw_text": "Frederick Reiss, Hong Xu, Bryan Cutler, Karthik Muthuraman, and Zachary Eichenberger. 2020. Identifying incorrect labels in the CoNLL-2003 cor- pus. In Proceedings of the 24th Conference on Com- putational Natural Language Learning, pages 215- 226. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {
"DOI": [
"10.3115/1119176.1119195"
]
},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the SIGNLL Conference on Computa- tional Natural Language Learning, pages 142-147, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Number of errors flagged by different combinations of ensembles after filtering by human labelers.",
"type_str": "figure",
"num": null,
"uris": null
}
}
}
}