{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:30:21.751339Z"
},
"title": "The WEAVE Corpus: Annotating Synthetic Chemical Procedures in Patents with Chemical Named Entities",
"authors": [
{
"first": "Ravindra",
"middle": [],
"last": "Nittala",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT-Hyderabad",
"location": {
"country": "India"
}
},
"email": "ravindra.n@research.iiit.ac.in"
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": "",
"affiliation": {},
"email": "m.shrivastava@iiit.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The modern pharmaceutical industry depends on the iterative design of novel synthetic routes for drugs while not infringing on existing intellectual property rights. Such a design process calls for analyzing many existing synthetic chemical reactions and planning the synthesis of novel chemicals. These procedures have historically been available only as unstructured raw text in publications and patents. To facilitate automated analysis of synthetic chemical reactions and the design of novel synthetic reactions using Natural Language Processing (NLP) methods, we introduce a Named Entity Recognition (NER) dataset covering the Examples section of 180 full-text patent documents, with 5188 synthetic procedures annotated by domain experts. All chemical entities that are part of the synthetic discourse were annotated with suitable class labels. We present the second-largest chemical NER corpus, with 100,129 annotations, and the highest IAA value of 98.73% (F-measure) on a 45-document subset. We discuss this new resource in detail and highlight some specific challenges in annotating synthetic chemical procedures with chemical named entities. We make the corpus available to the community to promote further research and the development of downstream NLP applications. We also provide baseline NER results for the community to improve on.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The modern pharmaceutical industry depends on the iterative design of novel synthetic routes for drugs while not infringing on existing intellectual property rights. Such a design process calls for analyzing many existing synthetic chemical reactions and planning the synthesis of novel chemicals. These procedures have historically been available only as unstructured raw text in publications and patents. To facilitate automated analysis of synthetic chemical reactions and the design of novel synthetic reactions using Natural Language Processing (NLP) methods, we introduce a Named Entity Recognition (NER) dataset covering the Examples section of 180 full-text patent documents, with 5188 synthetic procedures annotated by domain experts. All chemical entities that are part of the synthetic discourse were annotated with suitable class labels. We present the second-largest chemical NER corpus, with 100,129 annotations, and the highest IAA value of 98.73% (F-measure) on a 45-document subset. We discuss this new resource in detail and highlight some specific challenges in annotating synthetic chemical procedures with chemical named entities. We make the corpus available to the community to promote further research and the development of downstream NLP applications. We also provide baseline NER results for the community to improve on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There is a renewed interest in academia and industry in accessing the information on chemicals and chemical reactions currently available as unstructured raw text in journal publications and patents (Coley et al., 2017; Segler et al., 2018; Mysore et al., 2019) using machine learning. Several chemical NER datasets already exist. With increasing demand for automated chemical synthesis design and for planning novel chemical reactions, we need to shift away from annotating titles and abstracts of patents, or reactions in isolation, toward the patents' core, the Examples section. The CHEMDNER-patents corpus (Krallinger et al., 2015c) is the only dataset focusing on titles and abstracts. The Chapati corpus (Grego et al., 2009) and the BioSemantics corpus (Akhondi et al., 2014) focus on the full text of patents for annotation. The reasons these corpora are insufficient are discussed in detail in Sections 3.3 and 3.5. The ChEMU labs introduced a named entity dataset with chemical role labels (Nguyen et al., 2020) . As part of that dataset, they annotated only snippets of reaction text from the patents' experimental sections. They also acknowledge the problem of an entity often referring to context beyond the current reaction text. This context cannot be recovered from snippets of reaction text in isolation. In the WEAVE corpus, we annotate the chemical entities in their full reaction discourse, which enables modeling of context beyond the immediate reaction text. We refer readers to the supporting information containing full-text patents to understand how the discourse varies from section to section.",
"cite_spans": [
{
"start": 207,
"end": 227,
"text": "(Coley et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 228,
"end": 248,
"text": "Segler et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 249,
"end": 269,
"text": "Mysore et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 611,
"end": 637,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
},
{
"start": 711,
"end": 731,
"text": "(Grego et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 756,
"end": 778,
"text": "(Akhondi et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 1001,
"end": 1022,
"text": "(Nguyen et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A patent is the grant of a legal right by a patent office to an inventor. This grant provides the inventor with exclusive rights for a designated period of time in exchange for a comprehensive disclosure of the invention. The disclosure should be complete enough that a person well versed in the field is able to reproduce the patented process, design, or invention. This disclosure is made in the Examples section of a patent. Hence the Examples section is fundamentally different in its linguistic structure from other sections in a patent, and it is the most useful part for understanding the synthetic chemical reactions given in the patent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is a large body of chemical and biomedical NER literature; we refer readers to Huang et al. (2020) for a comprehensive survey. We summarize the publicly available datasets as follows. The Chapati corpus (Grego et al., 2009 ) is a manually annotated set of 40 patents with 11,162 annotations; the chemical named entities identified were mapped to the Chemical Entities of Biological Interest (ChEBI) database. The BioSemantics corpus (Akhondi et al., 2014) is a manually annotated set of patents in two parts: a harmonized set of 47 patents with 36,537 annotations, and a second set of 198 patents with 400,125 annotations. Besides chemical entity mentions, it also annotates diseases, targets, modes of action (MOAs), OCR errors, and spelling errors, and it is the largest chemical NER dataset. The BC-IV CHEMDNER corpus (Krallinger et al., 2015a) is an annotated set of 10,500 titles and abstracts from the PubMed database with 84,355 annotations. The BC-V CHEMDNER-patents corpus (Krallinger et al., 2015c) is an annotated set of 21,000 titles and abstracts from patents with 99,634 annotations. The BC-IV CHEMDNER and BC-V CHEMDNER-patents corpora are the most widely cited among these. The CHEMDNER-patents corpus exclusively focuses on chemical entity mentions; its entity mention classes are a variant of the earlier published CHEMDNER corpus (Krallinger et al., 2015b) . Nguyen et al. (2020) introduced a new evaluation lab named ChEMU. It focuses on two tasks: first, named entity recognition of chemical compounds and assignment of each compound's role within a chemical reaction; second, event trigger detection and argument identification for previously detected chemical entities. In the publicly available NER dataset, there are 20,186 annotations (train + dev) in 1125 reaction snippets extracted from 170 patents.",
"cite_spans": [
{
"start": 89,
"end": 108,
"text": "Huang et al. (2020)",
"ref_id": "BIBREF7"
},
{
"start": 220,
"end": 239,
"text": "(Grego et al., 2009",
"ref_id": "BIBREF5"
},
{
"start": 850,
"end": 876,
"text": "(Krallinger et al., 2015a)",
"ref_id": "BIBREF9"
},
{
"start": 1007,
"end": 1033,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
},
{
"start": 1370,
"end": 1396,
"text": "(Krallinger et al., 2015b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "1.1"
},
{
"text": "A typical granted US patent 3 has the following discourse structure: patent grant number, Title, Bibliography, Abstract, Other Patent Relations, Brief Summary, Detailed Description, and Claims. The intellectual property rights, i.e., the innovative part of the granted patent, reside in the examples contained in the Detailed Description section. This section is analyzed thoroughly to ensure that any novel synthetic route does not infringe on existing intellectual property rights. Therefore, in the next section, we present the WEAVE 4 patents corpus, which focuses exclusively on synthetic procedures in the Examples section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of a patent",
"sec_num": "1.2"
},
{
"text": "An important consideration in preparing a corpus for NER training, development, and evaluation sets is selecting documents that represent the distribution of chemical named entities seen in related documents. In the WEAVE corpus, the focus is on synthetic chemical procedures and the chemical entities present in them. Two considerations influenced document selection: first, the documents used in the corpus should be available without copyright protection; second, they should be complementary to existing datasets. We accessed the patents from the United States Patent and Trademark Office (USPTO) 5 . The following criteria were applied to further subset the patents for annotation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WEAVE patents corpus",
"sec_num": "2"
},
{
"text": "\u2022 IPC code: The selection of patents for the WEAVE corpus was made based on the IPC (International Patent Classification) code. Patents belonging to at least one of A61K (Preparations for Medical, Dental, or Toilet purposes) 6 or C07D (Heterocyclic compounds) 7 were selected. This enriched the corpus with chemical entities from medicinal and organic chemistry. An additional criterion for selection within this subset was the presence of synthetic organic procedures.",
"cite_spans": [
{
"start": 219,
"end": 220,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The WEAVE patents corpus",
"sec_num": "2"
},
{
"text": "\u2022 Date and Publication type: We decided to select patents that were granted in the years 2018 and 2019. This would ensure the availability of patents in XML format and text free from OCR errors. (Footnote 4: to weave: to form something from several different things or to combine several different things, in a complicated or skilled way, https://dictionary.cambridge.org/dictionary/english/weave. Footnote 5: USPTO Bulk Data Storage System (BDSS), https://bulkdata.uspto.gov/#pats. Footnote 6: https://www.wipo.int/classifications/ipc/en/ITsupport/Version0170101/transformations/ipc/20170101/en/htm/A61K.htm. Footnote 7: https://www.wipo.int/classifications/ipc/en/ITsupport/Version0170101/transformations/ipc/20170101/en/htm/C07D.htm.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WEAVE patents corpus",
"sec_num": "2"
},
{
"text": "\u2022 Character encoding and language: XML character entities were converted to the corresponding UTF-8 characters, and the full text was stored in UTF-8 encoding. As the patents were selected from the USPTO, only English-language patents were included.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WEAVE patents corpus",
"sec_num": "2"
},
{
"text": "\u2022 Document format: Each patent in XML format was converted to a UTF-8 encoded text file. Only the paragraph elements, headings, subheadings, and tables were written to the text file. All formatting elements such as bold, italics, subscript, and superscript were discarded. Bibliographic details and XML formatting were also discarded. There was no restriction on the number of lines in a document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WEAVE patents corpus",
"sec_num": "2"
},
{
"text": "\u2022 Document inclusion and exclusion: Patents covering inorganic compounds, organometallics, polymers, natural products, proteins, DNA/RNA, and polymorphic crystal forms were excluded. The overriding criterion for inclusion was at least one synthetic organic procedure in the Examples section, and this was manually checked in each document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WEAVE patents corpus",
"sec_num": "2"
},
{
"text": "\u2022 Final document sets: After applying the above selection criteria and preprocessing, we were left with 180 documents. A summary of these sets is given in Table 1 . The documents were randomly assigned to training, development, and test sets, and 45 documents from these sets were used for the inter-annotator agreement (IAA) study. For display performance in BRAT, all patents were split into files of 100 lines each before annotation and concatenated back into a single document after annotation. Table 1 : Document sets (documents / synthetic procedures): Evaluation 45 / 438; Training 60 / 1311; Development 60 / 2020; Test 60 / 1857; Overall 180 / 5188. The evaluation set is a subset of the overall 180 documents. 3 Corpus annotation",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 1",
"ref_id": null
},
{
"start": 491,
"end": 599,
"text": "Evaluation 45 438 Training 60 1311 Development 60 2020 Test 60 1857 Overall 180 5188 Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The WEAVE patents corpus",
"sec_num": "2"
},
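The 100-line splitting and later re-concatenation described above can be sketched as follows; this is an illustrative round-trip in Python under the stated chunk size, not the authors' actual tooling:

```python
def split_for_brat(lines, chunk_size=100):
    """Split a patent's lines into chunks of at most chunk_size lines,
    so each BRAT annotation file stays small enough to render quickly."""
    return [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]

def concatenate(chunks):
    """Rejoin the annotated chunks back into a single document."""
    return [line for chunk in chunks for line in chunk]
```

The key property is that the round trip is lossless: concatenating the chunks reproduces the original document line for line.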
{
"text": "Neves and Leser (2012) surveyed the annotation tools available for biomedical literature and determined that, among the tools reviewed, BRAT was easy to use and customizable to an annotation scheme. Hence we used the BRAT Rapid Annotation Tool (Stenetorp et al., 2012) for the entire annotation process and the BRAT standoff format for storing the annotations.",
"cite_spans": [
{
"start": 261,
"end": 285,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation tools",
"sec_num": "3.1"
},
{
"text": "We used the CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003) evaluation script to compute the macro-averaged F-measure on named entity annotations. The annotation output in BRAT standoff format was converted to the CoNLL 2003 shared task format with BIO tagging before computing the F-measure. We used F-measure as the evaluation metric for IAA, as suggested by Corbett et al. (2007) and Kolarik et al. (2008) . The CoNLL 2003 evaluation script counts an entity as correct only when both the chemical mention and its class label match. Using F-measure has the advantage of allowing a direct comparison between system performance and inter-annotator agreement (Grouin and N\u00e9v\u00e9ol, 2014) .",
"cite_spans": [
{
"start": 31,
"end": 68,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF18"
},
{
"start": 384,
"end": 405,
"text": "Corbett et al. (2007)",
"ref_id": "BIBREF4"
},
{
"start": 410,
"end": 431,
"text": "Kolarik et al. (2008)",
"ref_id": "BIBREF8"
},
{
"start": 679,
"end": 704,
"text": "(Grouin and N\u00e9v\u00e9ol, 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metric",
"sec_num": "3.2"
},
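The entity-level matching above (an entity counts as correct only when both its span and its class label match) can be sketched in Python. This is an illustrative re-implementation of CoNLL-2003-style scoring, not the authors' actual evaluation script:

```python
def spans(tags):
    """Extract (start, end, label) entity spans from a BIO tag sequence."""
    out, start, label = set(), None, None
    for i, t in enumerate(tags + ["O"]):  # sentinel "O" closes a trailing entity
        if t.startswith("B-") or t == "O" or (t.startswith("I-") and t[2:] != label):
            if label is not None:          # close the entity that just ended
                out.add((start, i, label))
                start, label = None, None
        if t.startswith("B-"):
            start, label = i, t[2:]
        elif t.startswith("I-") and label is None:
            start, label = i, t[2:]        # lenient: stray I- starts an entity
    return out

def f1(gold, pred):
    """Entity-level F-measure: exact match on span and class label."""
    g, p = spans(gold), spans(pred)
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

For example, a prediction that misses one of two gold entities but matches the other exactly scores precision 1.0, recall 0.5, and F-measure 2/3.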
{
"text": "We had to choose between designing our own annotation scheme and utilizing an existing one. Based on publicly available guidelines and corpora, we had a choice between the Chapati corpus by Chemical Entities of Biological Interest (ChEBI) and the European Patent Office (EPO) (Grego et al., 2009) , the BioSemantics corpus (Akhondi et al., 2014), the CHEMDNER corpus (Krallinger et al., 2015a) , the CHEMDNER-patents corpus (Krallinger et al., 2015c) , and the ChEMU Labs NER corpus (Nguyen et al., 2020). In the Chapati corpus, 40 patents were manually annotated with 11,162 annotations (Grego et al., 2009) . The number of annotated patents and the corresponding number of annotations were small.",
"cite_spans": [
{
"start": 259,
"end": 279,
"text": "(Grego et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 341,
"end": 367,
"text": "(Krallinger et al., 2015a)",
"ref_id": "BIBREF9"
},
{
"start": 393,
"end": 419,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
},
{
"start": 547,
"end": 567,
"text": "(Grego et al., 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation scheme",
"sec_num": "3.3"
},
{
"text": "We were left with a choice between the BioSemantics, CHEMDNER, and CHEMDNER-patents corpora. On a closer look at the BioSemantics corpus, whose annotation was based on the 15 rules published in their article (Akhondi et al., 2014), we noticed that the IAA (F-score), when considering only the chemical mentions in the corpus, varies from 0.94 to 0.38 depending on the entity type and the agreement between the four annotator groups on the harmonized patent set (47 patents) (Akhondi et al., 2014). The wide variation in IAA indicates a lack of consistency in the guidelines and the need for multiple disambiguation steps, which could potentially mislead annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation scheme",
"sec_num": "3.3"
},
{
"text": "The near-simultaneous publication of the ChEMU Labs NER dataset 8 (Nguyen et al., 2020) with this publication precluded a full evaluation of that dataset. After reviewing its guidelines 9 , we determined that it is not suitable for chemical named entity recognition over the full discourse of reaction text in the Examples section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation scheme",
"sec_num": "3.3"
},
{
"text": "The extensive guidelines documentation (30 pages), illustrated with examples, led us to choose the annotation scheme developed for the BioCreative IV (BC-IV) CHEMDNER task (Krallinger et al., 2015a) , as modified in the BioCreative V (BC-V) CHEMDNER-patents task (Krallinger et al., 2015c) , for the WEAVE corpus annotation process. The CHEMDNER-patents task annotated titles and abstracts from 21,000 patents with 99,625 annotations (Krallinger et al., 2015c) . The SYSTEMATIC, IDENTIFIER, FORMULA, TRIVIAL, ABBREVIATION (ABBV), FAMILY, and MULTIPLE entity mention classes as reported by Krallinger et al. (2015c) were utilized. In contrast to the CHEMDNER-patents task, which annotated only titles and abstracts (Krallinger et al., 2015c) , we chose to annotate the Examples section of the patents containing synthetic organic procedures. This is illustrated in Figure 1 .",
"cite_spans": [
{
"start": 163,
"end": 189,
"text": "(Krallinger et al., 2015a)",
"ref_id": "BIBREF9"
},
{
"start": 251,
"end": 277,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
},
{
"start": 431,
"end": 457,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
},
{
"start": 582,
"end": 607,
"text": "Krallinger et al. (2015c)",
"ref_id": "BIBREF11"
},
{
"start": 774,
"end": 800,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 826,
"end": 834,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Annotation scheme",
"sec_num": "3.3"
},
{
"text": "The entire annotation process was done in two stages. The first stage established the inter-annotator agreement on the evaluation set of 45 documents. These documents were annotated by nine chemistry domain experts with no formal linguistics experience and were divided equally among them (five each).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation process",
"sec_num": "3.4"
},
{
"text": "These 45 documents were independently double annotated by another chemistry domain expert with formal linguistics experience, designated as the lead annotator. The lead annotator's annotations were taken as the gold standard for evaluating the quality of annotation by the nine annotators, and the 45 documents were compared to the gold standard using F-measure. Once annotation consistency was established, the second stage covered the remaining 135 documents, with each annotator receiving 15 documents. Following the concept of annotator-reviser (or adjudicator) agreement (Campillos et al., 2018; Bada et al., 2012) , annotators were free to consult the lead annotator throughout the annotation process regarding the guidelines. Table 2 presents the IAA statistics for the 45-document set. The average F-measure was 98.73%. Bada et al. (2012) reported 90+% IAA levels following the annotator-reviser (or adjudicator) agreement concept; hence the F-measure we report is consistent with published results. This IAA value is the highest reported to date on a chemical entity mention dataset. The F-measure at the micro level was lowest for IDENTIFIER (76.19%) and MULTIPLE (85.71%), which can be attributed to the data sparsity in the corpus for these two classes. Tables 4, 5 and 6 demonstrate that the data sparsity for these two classes can also be seen in the BC-IV CHEMDNER (Krallinger et al., 2015a) and BC-V CHEMDNER-patents tasks (Krallinger et al., 2015c) .",
"cite_spans": [
{
"start": 588,
"end": 612,
"text": "(Campillos et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 613,
"end": 631,
"text": "Bada et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 829,
"end": 847,
"text": "Bada et al. (2012)",
"ref_id": "BIBREF1"
},
{
"start": 1390,
"end": 1416,
"text": "(Krallinger et al., 2015a)",
"ref_id": "BIBREF9"
},
{
"start": 1448,
"end": 1474,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 741,
"end": 748,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Annotation process",
"sec_num": "3.4"
},
{
"text": "Akhondi et al. (2014) reported an annotated chemical patent corpus which, besides chemical mentions, also annotates diseases, protein targets, and MOAs in the patents. The best IAA value reported among a set of values was 78% (F-score). Krallinger et al. (2015b) in the BC-IV CHEMDNER task reported an IAA value of 91% (F-score) when matching the chemical mention while ignoring the class label; when the class label was also considered, the IAA value was 85.26% (F-score). Krallinger et al. (2015c) in the BC-V CHEMDNER-patents task did not report any IAA value and proposed an IAA study based on a blind annotation of 200 patent abstracts for chemical entity mentions. To the best of our knowledge, this has not yet been published.",
"cite_spans": [
{
"start": 239,
"end": 264,
"text": "Krallinger et al. (2015b)",
"ref_id": "BIBREF10"
},
{
"start": 472,
"end": 497,
"text": "Krallinger et al. (2015c)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation process",
"sec_num": "3.4"
},
{
"text": "Despite the lack of a published IAA study for the CHEMDNER-patents corpus, we relied on the extensive guidelines published as part of that corpus. Table 3 presents the error analysis of the doubly annotated 45 documents. In the table, rows represent the gold-standard labels, and columns represent the annotator's labels. Of the 7503 gold labels, 90 labels (1.2%) were assigned outside the reaction discourse; these should have been assigned to the OTHER class. For 78 labels (1.0%), one of the seven class labels should have been assigned, but the OTHER class was assigned instead. Only 4 (0.05%) were assigned an incorrect label within the seven class labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Annotation process",
"sec_num": "3.4"
},
{
"text": "The error analysis demonstrates that the annotators were able to assign the correct class labels to the chemical entities; the majority of the errors occurred at the boundary of the reaction discourse. These errors were communicated to the annotators, who were trained to identify the reaction discourse boundaries and the chemical entities present. They were also encouraged to consult the lead annotator in case of any doubt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.6"
},
{
"text": "Tables 4, 5 and 6 present the counts of chemical entity mention class labels in the WEAVE corpus (180 documents). These were randomly divided into training, development, and test sets and compared with the corresponding counts from the BC-IV CHEMDNER (Krallinger et al., 2015a) and BC-V CHEMDNER-patents tasks (Krallinger et al., 2015c) . Table 7 presents statistics for the counts of annotations in the WEAVE corpus and the CHEMDNER-patents corpus. There are a total of 100,129 annotations, with an average of 556 annotations per document. As shown in the table, there is a wide variation between the average and median counts per document. This skew is due to a small number of documents having a large number of annotations (Bada et al., 2012) , an assertion supported by the minimum and maximum counts across the 180 documents.",
"cite_spans": [
{
"start": 237,
"end": 263,
"text": "(Krallinger et al., 2015a)",
"ref_id": "BIBREF9"
},
{
"start": 294,
"end": 320,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
},
{
"start": 706,
"end": 725,
"text": "(Bada et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 323,
"end": 330,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Corpus statistics",
"sec_num": "3.7"
},
{
"text": "The top three entity mention classes as a percentage of total annotations in the WEAVE corpus were SYSTEMATIC (49.73%), FORMULA (26.58%), and ABBREVIATION (11.25%). The corresponding distribution of the top three classes in the BC-IV CHEMDNER task was SYSTEMATIC (30.36%), TRIVIAL (22.69%), and ABBREVIATION (15.55%), and in the BC-V CHEMDNER-patents task it was FAMILY (36.49%), SYSTEMATIC (28.79%), and TRIVIAL (26.11%). The statistical distribution of entity mention classes thus differs between the WEAVE corpus and the CHEMDNER-patents corpus, underscoring the need for annotation of the Examples section of patents. This would significantly help develop machine learning models tailored for the Examples section and the downstream processing of synthetic organic reactions in patents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus statistics",
"sec_num": "3.7"
},
{
"text": "To establish baseline performance parameters for the evaluation of the WEAVE corpus, we applied an existing NER model 10 that has been successfully applied in multilingual, clinical, and drug NER. Morphological features have been successfully applied in named entity recognition; in submissions to the BC-IV CHEMDNER task (Krallinger et al., 2015a) and the BC-V CHEMDNER-patents task (Krallinger et al., 2015c) they feature prominently in the top-performing models.",
"cite_spans": [
{
"start": 330,
"end": 356,
"text": "(Krallinger et al., 2015a)",
"ref_id": "BIBREF9"
},
{
"start": 388,
"end": 414,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We used 200-dimension GloVe embeddings (Pennington et al., 2014) . Figure 2 presents the architecture of the NER model, which features a character Bi-LSTM layer, word features, a word Bi-LSTM layer, and a word CRF layer for generating BIO tags for the named entities. The model was used as is, with minor modifications to the hyperparameters: the word embedding size was set to 200, train embeddings was set to false, and the batch size was set to 25. All other parameters were kept at the model's default values.",
"cite_spans": [
{
"start": 31,
"end": 56,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 57,
"end": 65,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word embeddings",
"sec_num": "4.1"
},
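The hyperparameter overrides stated above can be summarized as a small configuration sketch; only the three overridden values come from the text, while the key names and the baseline defaults are illustrative assumptions, not the underlying model's actual option names:

```python
# Hedged sketch: the default values below are hypothetical placeholders;
# only the three overrides are stated in the paper.
DEFAULTS = {
    "word_embedding_dim": 100,   # assumed default
    "train_embeddings": True,    # assumed default
    "batch_size": 20,            # assumed default
}

OVERRIDES = {
    "word_embedding_dim": 200,   # 200-d GloVe vectors (stated)
    "train_embeddings": False,   # embeddings kept frozen (stated)
    "batch_size": 25,            # stated
}

config = {**DEFAULTS, **OVERRIDES}  # overrides win over defaults
```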
{
"text": "The WEAVE corpus of the present study was randomly split into training, development, and test sets with 60 documents in each. The official training, development, and test sets of the CHEMDNER-patents task (Krallinger et al., 2015c) were used without modification.",
"cite_spans": [
{
"start": 203,
"end": 229,
"text": "(Krallinger et al., 2015c)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER datasets",
"sec_num": "4.3"
},
{
"text": "The WEAVE corpus in the BRAT standoff format was converted into CoNLL 2003 BIO format and truncated to the Examples section. The resulting WEAVE corpus had 73,522 sentences, 3,453,525 tokens, and 15,782 unique tokens. The CHEMDNER-patents corpus in a tab-separated format was converted into CoNLL 2003 BIO format before being used in training and evaluation of the model. The resulting CHEMDNER-patents corpus had 73,383 sentences, 2,511,006 tokens, and 51,570 unique tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.4"
},
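The BRAT-standoff-to-CoNLL-BIO conversion can be illustrated with a minimal sketch. It assumes contiguous entity spans (no discontinuous `;`-separated BRAT spans) and whitespace tokenization aligned with entity boundaries, and it is not the authors' actual conversion tool:

```python
import re

def standoff_to_bio(text, ann_lines):
    """Convert BRAT standoff entity lines ('T1<TAB>LABEL start end<TAB>surface')
    into (token, BIO-tag) pairs for whitespace-delimited tokens."""
    ents = []
    for line in ann_lines:
        if not line.startswith("T"):           # skip relation/event/note lines
            continue
        label_span = line.split("\t")[1]       # e.g. "TRIVIAL 4 11"
        label, start, end = label_span.split() # assumes a contiguous span
        ents.append((int(start), int(end), label))
    rows = []
    for m in re.finditer(r"\S+", text):        # whitespace tokenization with offsets
        tag = "O"
        for a, b, lab in ents:
            if m.start() >= a and m.end() <= b:
                # B- at the entity's first token, I- inside it
                tag = ("B-" if m.start() == a else "I-") + lab
                break
        rows.append((m.group(), tag))
    return rows
```

Using character offsets from `re.finditer` keeps the token/entity alignment explicit, which is the crux of any standoff-to-BIO conversion.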
{
"text": "To better understand the WEAVE corpus's baseline performance, we conducted several experiments involving the BC-V corpus and its combinations with the WEAVE corpus. Tables 8 and 9 present the results of experiments on various combinations of the WEAVE and BC-V datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 178,
"text": "Tables 8 and 9",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Using the simple NER model, the best result in terms of macro-averaged F-measure, 91.37%, was obtained by the model trained on the standalone WEAVE corpus and tested on the WEAVE test set, followed by the model trained on the BC-V + WEAVE corpus and tested on the WEAVE test set with 91.34%. In comparison, the top-performing team in the BC-V CHEMDNER-patents task had an F-score of 89.37% (Krallinger et al., 2015c ), whereas the model trained on the standalone BC-V corpus and tested on the BC-V test set had an F-measure of 80.89%. The model's worst performance, an F-measure of 29.93%, occurred when it was trained on the WEAVE corpus and tested on the BC-V test set.",
"cite_spans": [
{
"start": 362,
"end": 387,
"text": "(Krallinger et al., 2015c",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "The results confirm that the linguistic structure of the title and abstract of a patent is very different from that of the Examples section. Hence, when combined with the CHEMDNER-patents corpus, the WEAVE corpus is complementary: without losing precision, we obtain an increase in the recall of the NER model. This also supports our assertion of the need for a focused dataset covering the Examples section of patents. The combined corpus can perform very close to the state-of-the-art results in chemical NER. This combination also gives us 199,763 high-quality annotations (100,129 WEAVE + 99,634 BC-V) for developing better chemical NER models. The gap between the IAA value of 98.73% on the 45-document subset and the best NER model's F-measure of 91.37% reflects the NER model's simple nature; there is good scope for research on better NER models that can reduce this difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Our results show that a focused annotated NER dataset with a simple NER model can achieve near state-of-the-art results. Complementary datasets can achieve high recall without sacrificing the precision of the chemical NER model. This is illustrated by the rows highlighted as bold in Table 9 . The reuse of the existing manually annotated dataset results in substantial savings in manual annotation effort. Chemical NER models with high precision and recall can be used for downstream processing and analysis of chemical reactions in patents. The present annotated dataset would help better temporal modeling of the synthetic procedures given in the Examples section of patents.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 9",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We propose to explore more complex NER models. These models can better account for the high IAA values reported by us. In the future, we would explore the possibility of extending this dataset to chemical reaction role labeling for the identified chemical entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The WEAVE corpus described in this paper is available at Github repository: https://github.com/ nv-ravindra/the-weave-corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting Information",
"sec_num": "7"
},
{
"text": "http://chemu.eng.unimelb.edu.au/ 3 USPTO, https://www.uspto.gov",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://chemu.eng.unimelb.edu.au/ 9 https://github.com/chemu-patent-ie/ chemu-patent-ie.github.io/tree/master/ resources/Annotation_Guidelines_ CLEF2020_ChEMU_task1.pdf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/vikas95/Pref_Suff_ Span_NN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.wipo.int/classifications/ ipc/en/ITsupport/Version0170101/ transformations/ipc/20170101/en/htm/ A61K.htm12 https://www.wipo.int/classifications/ ipc/en/ITsupport/Version0170101/ transformations/ipc/20170101/en/htm/ C07D.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Vincatis Technologies Private Limited, Hyderabad and anonymous reviewers for their help with this publication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Annotated Chemical Patent Corpus: A Gold Standard for Text Mining",
"authors": [
{
"first": "A",
"middle": [],
"last": "Saber",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"G"
],
"last": "Akhondi",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Klenner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tyrchan",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Anil",
"suffix": ""
},
{
"first": "Kiran",
"middle": [],
"last": "Manchala",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Boppana",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zimmermann",
"suffix": ""
},
{
"first": "Arp",
"middle": [],
"last": "Sarma",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Jagarlapudi",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Sayle",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kors",
"suffix": ""
}
],
"year": 2014,
"venue": "PloS One",
"volume": "9",
"issue": "9",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0107477"
]
},
"num": null,
"urls": [],
"raw_text": "Saber A Akhondi, Alexander G Klenner, Christian Tyr- chan, Anil K Manchala, Kiran Boppana, Daniel Lowe, Marc Zimmermann, Sarma ARP Jagarlapudi, Roger Sayle, Jan A Kors, et al. 2014. Annotated Chemical Patent Corpus: A Gold Standard for Text Mining. PloS One, 9(9):e107477.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Concept annotation in the CRAFT corpus",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Bada",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Kristin",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Shipley",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Sitnikov",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baumgartner",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Bretonnel Cohen",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"A"
],
"last": "Verspoor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blake",
"suffix": ""
}
],
"year": 2012,
"venue": "BMC Bioinformatics",
"volume": "13",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/1471-2105-13-161"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Bada, Miriam Eckert, Donald Evans, Kristin Garcia, Krista Shipley, Dmitry Sitnikov, William A Baumgartner, K Bretonnel Cohen, Karin Verspoor, Judith A Blake, et al. 2012. Concept annota- tion in the CRAFT corpus. BMC Bioinformatics, 13(1):161.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A French clinical corpus with comprehensive semantic annotations: development of the Medical Entity and Relation LIMSI Annotated Text corpus (MERLOT). Language Resources and Evaluation",
"authors": [
{
"first": "Leonardo",
"middle": [],
"last": "Campillos",
"suffix": ""
},
{
"first": "Louise",
"middle": [],
"last": "Del\u00e9ger",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Grouin",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Hamon",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "52",
"issue": "",
"pages": "571--601",
"other_ids": {
"DOI": [
"10.1007/s10579-017-9382-y"
]
},
"num": null,
"urls": [],
"raw_text": "Leonardo Campillos, Louise Del\u00e9ger, Cyril Grouin, Thierry Hamon, Anne-Laure Ligozat, and Aur\u00e9lie N\u00e9v\u00e9ol. 2018. A French clinical corpus with com- prehensive semantic annotations: development of the Medical Entity and Relation LIMSI Annotated Text corpus (MERLOT). Language Resources and Evaluation, 52(2):571-601.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Prediction of Organic Reaction Outcomes using Machine Learning",
"authors": [
{
"first": "W",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Coley",
"suffix": ""
},
{
"first": "Tommi",
"middle": [
"S"
],
"last": "Barzilay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Klavs F",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jensen",
"suffix": ""
}
],
"year": 2017,
"venue": "ACS Central Science",
"volume": "3",
"issue": "5",
"pages": "434--443",
"other_ids": {
"DOI": [
"10.1021/acscentsci.7b00064"
]
},
"num": null,
"urls": [],
"raw_text": "Connor W Coley, Regina Barzilay, Tommi S Jaakkola, William H Green, and Klavs F Jensen. 2017. Predic- tion of Organic Reaction Outcomes using Machine Learning. ACS Central Science, 3(5):434-443.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Corbett",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Batchelor",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2007,
"venue": "Biological, translational, and clinical language processing",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Corbett, Colin Batchelor, and Simone Teufel. 2007. Annotation of Chemical Named Entities. In Biological, translational, and clinical language pro- cessing, pages 57-64, Prague, Czech Republic. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Identification of Chemical Entities in Patent Documents",
"authors": [
{
"first": "Tiago",
"middle": [],
"last": "Grego",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Pezik",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Francisco",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Couto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rebholz-Schuhmann",
"suffix": ""
}
],
"year": 2009,
"venue": "International Work-Conference on Artificial Neural Networks",
"volume": "",
"issue": "",
"pages": "942--949",
"other_ids": {
"DOI": [
"10.1007/978-3-642-02481-8_144"
]
},
"num": null,
"urls": [],
"raw_text": "Tiago Grego, Piotr Pezik, Francisco M Couto, and Di- etrich Rebholz-Schuhmann. 2009. Identification of Chemical Entities in Patent Documents. In Inter- national Work-Conference on Artificial Neural Net- works, pages 942-949. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deidentification of clinical notes in French: towards a protocol for reference corpus development",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Grouin",
"suffix": ""
},
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "N\u00e9v\u00e9ol",
"suffix": ""
}
],
"year": 2014,
"venue": "Special Issue on Informatics Methods in Medical Privacy",
"volume": "50",
"issue": "",
"pages": "151--161",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2013.12.014"
]
},
"num": null,
"urls": [],
"raw_text": "Cyril Grouin and Aur\u00e9lie N\u00e9v\u00e9ol. 2014. De- identification of clinical notes in French: towards a protocol for reference corpus development. Journal of Biomedical Informatics, 50:151 -161. Special Issue on Informatics Methods in Medical Privacy.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Biomedical named entity recognition and linking datasets: survey and our recent development",
"authors": [
{
"first": "Ming-Siang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Po-Ting",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Pei-Yen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yu-Ting",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tzong-Han Tsai",
"suffix": ""
},
{
"first": "Wen-Lian",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2020,
"venue": "Briefings in Bioinformatics. Bbaa054",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/bib/bbaa054"
]
},
"num": null,
"urls": [],
"raw_text": "Ming-Siang Huang, Po-Ting Lai, Pei-Yen Lin, Yu-Ting You, Richard Tzong-Han Tsai, and Wen-Lian Hsu. 2020. Biomedical named entity recognition and linking datasets: survey and our recent development. Briefings in Bioinformatics. Bbaa054.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Chemical Names: Terminological Resources and Corpora Annotation",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Kolarik",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Christoph",
"middle": [
"M"
],
"last": "Friedrich",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Hofmann-Apitius",
"suffix": ""
},
{
"first": "Juliane",
"middle": [],
"last": "Fluck",
"suffix": ""
}
],
"year": 2008,
"venue": "Workshop on Building and evaluating resources for biomedical text mining",
"volume": "",
"issue": "",
"pages": "51--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Kolarik, Roman Klinger, Christoph M. Friedrich, Martin Hofmann-Apitius, and Juliane Fluck. 2008. Chemical Names: Terminological Re- sources and Corpora Annotation. In Workshop on Building and evaluating resources for biomedical text mining (6th edition of the Language Resources and Evaluation Conference), pages 51-58.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "CHEMDNER: The drugs and chemical names extraction challenge",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Leitner",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Rabal",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vazquez",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Cheminformatics",
"volume": "7",
"issue": "S1",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/1758-2946-7-S1-S1"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Krallinger, Florian Leitner, Obdulia Rabal, Miguel Vazquez, Julen Oyarzabal, and Alfonso Va- lencia. 2015a. CHEMDNER: The drugs and chemi- cal names extraction challenge. Journal of Chemin- formatics, 7(S1):S1.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The CHEMDNER corpus of chemicals and drugs and its annotation principles",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Rabal",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Leitner",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vazquez",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Salgado",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Yanan",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Cheminformatics",
"volume": "7",
"issue": "1",
"pages": "1--17",
"other_ids": {
"DOI": [
"10.1186/1758-2946-7-S1-S2"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Krallinger, Obdulia Rabal, Florian Leitner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M Lowe, et al. 2015b. The CHEMDNER corpus of chemicals and drugs and its annotation principles. Journal of Cheminformatics, 7(1):1-17.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of the CHEMD-NER patents task",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Rabal",
"suffix": ""
},
{
"first": "Analia",
"middle": [],
"last": "Louren\u00e7o",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"Perez"
],
"last": "Perez",
"suffix": ""
},
{
"first": "Gael",
"middle": [
"Perez"
],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vazquez",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Leitner",
"suffix": ""
},
{
"first": "Julen",
"middle": [],
"last": "Oyarzabal",
"suffix": ""
},
{
"first": "Alfonso",
"middle": [],
"last": "Valencia",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the fifth BioCreative challenge evaluation workshop",
"volume": "",
"issue": "",
"pages": "63--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Krallinger, Obdulia Rabal, Analia Louren\u00e7o, Martin Perez Perez, Gael Perez Rodriguez, Miguel Vazquez, Florian Leitner, Julen Oyarzabal, and Al- fonso Valencia. 2015c. Overview of the CHEMD- NER patents task. In Proceedings of the fifth BioCre- ative challenge evaluation workshop, pages 63-75.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Materials Science Procedural Text Corpus: Annotating Materials Synthesis Procedures with Shallow Semantic Structures",
"authors": [
{
"first": "Zach",
"middle": [],
"last": "Sheshera Mysore",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Jensen",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Haw-Shiuan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Emma",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Elsa",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Olivetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheshera Mysore, Zach Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jef- frey Flanigan, Andrew McCallum, and Elsa Olivetti. 2019. The Materials Science Procedural Text Cor- pus: Annotating Materials Synthesis Procedures with Shallow Semantic Structures. In Proceedings of the 13th Linguistic Annotation Workshop, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A survey on annotation tools for the biomedical literature",
"authors": [
{
"first": "Mariana",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Leser",
"suffix": ""
}
],
"year": 2012,
"venue": "Briefings in Bioinformatics",
"volume": "15",
"issue": "2",
"pages": "327--340",
"other_ids": {
"DOI": [
"10.1093/bib/bbs084"
]
},
"num": null,
"urls": [],
"raw_text": "Mariana Neves and Ulf Leser. 2012. A survey on anno- tation tools for the biomedical literature. Briefings in Bioinformatics, 15(2):327-340.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "ChEMU: Named Entity Recognition and Event Extraction of Chemical Reactions from Patents",
"authors": [
{
"first": "Zenan",
"middle": [],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "Hiyori",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Biaoyan",
"middle": [],
"last": "Yoshikawa",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Camilo",
"middle": [],
"last": "Druckenbrodt",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hoessel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saber",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Akhondi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Verspoor",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "572--579",
"other_ids": {
"DOI": [
"10.1007/978-3-030-45442-5_74"
]
},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Zenan Zhai, Hiyori Yoshikawa, Biaoyan Fang, Christian Druckenbrodt, Camilo Thorne, Ralph Hoessel, Saber A. Akhondi, Trevor Cohn, Timothy Baldwin, and Karin Verspoor. 2020. ChEMU: Named Entity Recognition and Event Ex- traction of Chemical Reactions from Patents. In Advances in Information Retrieval, pages 572-579, Cham. Springer International Publishing.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Planning chemical syntheses with deep neural networks and symbolic AI",
"authors": [
{
"first": "H",
"middle": [
"S"
],
"last": "Marwin",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Segler",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"P"
],
"last": "Preuss",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Waller",
"suffix": ""
}
],
"year": 2018,
"venue": "Nature",
"volume": "555",
"issue": "7698",
"pages": "604--610",
"other_ids": {
"DOI": [
"10.1038/nature25978"
]
},
"num": null,
"urls": [],
"raw_text": "Marwin HS Segler, Mike Preuss, and Mark P Waller. 2018. Planning chemical syntheses with deep neural networks and symbolic AI. Nature, 555(7698):604- 610.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "BRAT: a Web-based Tool for NLP-Assisted Text Annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsu- jii. 2012. BRAT: a Web-based Tool for NLP- Assisted Text Annotation. In Proceedings of the Demonstrations at the 13th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, pages 102-107, Avignon, France. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recog- nition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Survey on Recent Advances in Named Entity Recognition from Deep Learning models",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2145--2158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikas Yadav and Steven Bethard. 2018. A Survey on Recent Advances in Named Entity Recognition from Deep Learning models. In Proceedings of the 27th International Conference on Computational Linguis- tics, pages 2145-2158, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deep Affix Features Improve Neural Named Entity Recognizers",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Sharp",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "167--172",
"other_ids": {
"DOI": [
"10.18653/v1/S18-2021"
]
},
"num": null,
"urls": [],
"raw_text": "Vikas Yadav, Rebecca Sharp, and Steven Bethard. 2018. Deep Affix Features Improve Neural Named Entity Recognizers. In Proceedings of the Seventh Joint Conference on Lexical and Computational Seman- tics, pages 167-172, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An example an annotated organic reaction, within the Examples section of patent.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "ABBV. FAMILY FORMULA IDENTIFIER MULTIPLE SYSTEMATIC",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "IAA statistics.",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">Figure 2: Architecture of NER model proposed by Yadav et al. (2018)</td></tr><tr><td>CLASS</td><td colspan=\"3\">BC-IV BC-V WEAVE</td></tr><tr><td>ABBV.</td><td>4538</td><td>588</td><td>2520</td></tr><tr><td>FAMILY</td><td colspan=\"2\">4090 12209</td><td>783</td></tr><tr><td>FORMULA</td><td>4448</td><td>2239</td><td>6709</td></tr><tr><td>IDENTIFIER</td><td>672</td><td>99</td><td>47</td></tr><tr><td>MULTIPLE</td><td>202</td><td>140</td><td>6</td></tr><tr><td>NO CLASS</td><td>40</td><td>-</td><td>-</td></tr><tr><td>SYSTEMATIC</td><td>6656</td><td>9570</td><td>14547</td></tr><tr><td>TRIVIAL</td><td>8832</td><td>8698</td><td>2756</td></tr><tr><td>Total</td><td colspan=\"2\">29478 33543</td><td>27368</td></tr></table>",
"text": "Error analysis of annotations.",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "Training set.",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>: Development set.</td></tr><tr><td>100,000 US patents belonging to IPC code A61K 11</td></tr><tr><td>and C07D 12 . A window of word co-occurrence of</td></tr></table>",
"text": "",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>Type</td><td colspan=\"2\">WEAVE BC-V</td></tr><tr><td>Total annotations</td><td colspan=\"2\">100,129 99,634</td></tr><tr><td>Average per document</td><td>522</td><td>5</td></tr><tr><td>Median per document</td><td>366</td><td>3</td></tr><tr><td>Minimum per document</td><td>10</td><td>0</td></tr><tr><td>Maximum per document</td><td>3640</td><td>233</td></tr></table>",
"text": "Test set.",
"html": null,
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td>: Statistics for counts of annotations</td></tr><tr><td>8 and word frequency of 1 was used to train the</td></tr><tr><td>uncased text. The resulting embeddings had a dic-</td></tr><tr><td>tionary size of 6,828,514 and were used for all</td></tr><tr><td>experiments.</td></tr></table>",
"text": "",
"html": null,
"num": null
},
"TABREF9": {
"type_str": "table",
"content": "<table><tr><td>Training</td><td>Development</td><td colspan=\"3\">Test Precision Recall</td><td>F1</td></tr><tr><td>BC-V</td><td colspan=\"2\">BC-V WEAVE</td><td>67.08</td><td>50.80 57.82</td></tr><tr><td>BC-V</td><td colspan=\"2\">WEAVE WEAVE</td><td>73.32</td><td>48.38 58.29</td></tr><tr><td>WEAVE</td><td colspan=\"2\">BC-V WEAVE</td><td>93.24</td><td>89.11 91.13</td></tr><tr><td>WEAVE</td><td colspan=\"2\">WEAVE WEAVE</td><td>93.55</td><td>89.29 91.37</td></tr><tr><td>BC-V + WEAVE</td><td colspan=\"2\">BC-V WEAVE</td><td>92.91</td><td>88.76 90.79</td></tr><tr><td>BC-V + WEAVE</td><td colspan=\"2\">WEAVE WEAVE</td><td>92.54</td><td>88.74 90.60</td></tr><tr><td colspan=\"3\">BC-V + WEAVE BC-V + WEAVE WEAVE</td><td>93.43</td><td>89.34 91.34</td></tr></table>",
"text": "Experimental results with BC-V Test corpus",
"html": null,
"num": null
},
"TABREF10": {
"type_str": "table",
"content": "<table/>",
"text": "Experimental results with WEAVE Test corpus.",
"html": null,
"num": null
}
}
}
}