{
"paper_id": "D19-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:58:51.134417Z"
},
"title": "Doc2EDAG: An End-to-End Document-level Framework for Chinese Financial Event Extraction",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Zheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {}
},
"email": "zhengs14@mails.tsinghua.edu.cn"
},
{
"first": "Wei",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {},
"email": "wei.cao@microsoft.com"
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {}
},
"email": "weixu@mail.tsinghua.edu.cn"
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": "",
"affiliation": {},
"email": "jiang.bian@microsoft.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most existing event extraction (EE) methods merely extract event arguments within the sentence scope. However, such sentence-level EE methods struggle to handle soaring amounts of documents from emerging applications, such as finance, legislation, health, etc., where event arguments always scatter across different sentences, and even multiple such event mentions frequently co-exist in the same document. To address these challenges, we propose a novel end-to-end model, Doc2EDAG, which can generate an entity-based directed acyclic graph to fulfill the document-level EE (DEE) effectively. Moreover, we reformalize a DEE task with the no-trigger-words design to ease document-level event labeling. To demonstrate the effectiveness of Doc2EDAG, we build a large-scale real-world dataset consisting of Chinese financial announcements with the challenges mentioned above. Extensive experiments with comprehensive analyses illustrate the superiority of Doc2EDAG over state-of-the-art methods. Data and codes can be found at https://github.com/ dolphin-zs/Doc2EDAG.",
"pdf_parse": {
"paper_id": "D19-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "Most existing event extraction (EE) methods merely extract event arguments within the sentence scope. However, such sentence-level EE methods struggle to handle soaring amounts of documents from emerging applications, such as finance, legislation, health, etc., where event arguments always scatter across different sentences, and even multiple such event mentions frequently co-exist in the same document. To address these challenges, we propose a novel end-to-end model, Doc2EDAG, which can generate an entity-based directed acyclic graph to fulfill the document-level EE (DEE) effectively. Moreover, we reformalize a DEE task with the no-trigger-words design to ease document-level event labeling. To demonstrate the effectiveness of Doc2EDAG, we build a large-scale real-world dataset consisting of Chinese financial announcements with the challenges mentioned above. Extensive experiments with comprehensive analyses illustrate the superiority of Doc2EDAG over state-of-the-art methods. Data and codes can be found at https://github.com/ dolphin-zs/Doc2EDAG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Event extraction (EE), traditionally modeled as detecting trigger words and extracting corresponding arguments from plain text, plays a vital role in natural language processing since it can produce valuable structured information to facilitate a variety of tasks, such as knowledge base construction, question answering, language understanding, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, with the rising trend of digitalization within various domains, such as finance, legislation, health, etc., EE has become an increasingly important accelerator to the development of business in those domains. Take the financial domain as an example, continuous economic growth has witnessed exploding volumes of digital financial documents, such as financial announcements in a specific stock market as Figure 1 shows, specified as Chinese financial announcements (ChFi-nAnn). While forming up a gold mine, such large amounts of announcements call EE for assisting people in extracting valuable structured information to sense emerging risks and find profitable opportunities timely.",
"cite_spans": [],
"ref_spans": [
{
"start": 420,
"end": 428,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given the necessity of applying EE on the financial domain, the specific characteristics of financial documents as well as those within many other business fields, however, raise two critical challenges to EE, particularly arguments-scattering and multi-event. Specifically, the first challenge indicates that arguments of one event record may scatter across multiple sentences of the document, while the other one reflects that a document is likely to contain multiple such event records. To intuitively illustrate these challenges, we show a typical ChFinAnn document with two Equity Pledge event records in Figure 2 . For the first event, the entity 1 \"[SHARE1]\" is the correct Pledged Shares at the sentence level (ID 5). However, due to the capital stock increment (ID 7), Figure 2 : A document example with two Equity Pledge event records whose arguments scatter across multiple sentences, where we use ID to denote the sentence index, substitute entity mentions with corresponding marks, and color event arguments outside the scope of key-event sentences as red.",
"cite_spans": [],
"ref_spans": [
{
"start": 610,
"end": 618,
"text": "Figure 2",
"ref_id": null
},
{
"start": 778,
"end": 786,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "the correct Pledged Shares at the document level should be \"[SHARE2]\". Similarly, \"[DATE3]\" is the correct End Date at the sentence level (ID 9) but incorrect at the document level (ID 10). Moreover, some summative arguments, such as \"[SHARE5]\" and \"[RATIO]\", are often stated at the end of the document. Although a great number of efforts (Ahn, 2006; Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011; Riedel and McCallum, 2011; Li et al., 2013 Li et al., , 2014 Chen et al., 2015; Yang and Mitchell, 2016; Nguyen et al., 2016; Sha et al., 2018; Zhang and Ji, 2018; Nguyen and Nguyen, 2019; Wang et al., 2019) have been put on EE, most of them are based on ACE 2005 2 , an expert-annotated benchmark, which only tagged event arguments within the sentence scope. We refer to such task as the sentence-level EE (SEE), which obviously overlooks the arguments-scattering challenge. In contrast, EE on financial documents, such as ChFi-nAn, requires document-level EE (DEE) when facing arguments-scattering, and this challenge gets much harder when coupled with multi-event.",
"cite_spans": [
{
"start": 340,
"end": 351,
"text": "(Ahn, 2006;",
"ref_id": "BIBREF2"
},
{
"start": 352,
"end": 374,
"text": "Ji and Grishman, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 375,
"end": 399,
"text": "Liao and Grishman, 2010;",
"ref_id": "BIBREF15"
},
{
"start": 400,
"end": 418,
"text": "Hong et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 419,
"end": 445,
"text": "Riedel and McCallum, 2011;",
"ref_id": "BIBREF22"
},
{
"start": 446,
"end": 461,
"text": "Li et al., 2013",
"ref_id": "BIBREF13"
},
{
"start": 462,
"end": 479,
"text": "Li et al., , 2014",
"ref_id": "BIBREF14"
},
{
"start": 480,
"end": 498,
"text": "Chen et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 499,
"end": 523,
"text": "Yang and Mitchell, 2016;",
"ref_id": "BIBREF29"
},
{
"start": 524,
"end": 544,
"text": "Nguyen et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 545,
"end": 562,
"text": "Sha et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 563,
"end": 582,
"text": "Zhang and Ji, 2018;",
"ref_id": "BIBREF33"
},
{
"start": 583,
"end": 607,
"text": "Nguyen and Nguyen, 2019;",
"ref_id": "BIBREF20"
},
{
"start": 608,
"end": 626,
"text": "Wang et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most recent work, DCFEE , attempted to explore DEE on ChFinAnn, by employing distant supervision (DS) (Mintz et al., 2009) to generate EE data and performing a two-stage extraction: 1) a sequence tagging model for SEE, and 2) a key-event-sentence detection model to detect the key-event sentence, coupled with a heuristic strategy that padded missing arguments from surrounding sentences, for DEE.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the sequence tagging model for SEE cannot handle multi-event sentences elegantly, and even worse, the context-agnostic argumentscompletion strategy fails to address the argumentsscattering challenge effectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel end-to-end model, Doc2EDAG, to address the unique challenges of DEE. The key idea of Doc2EDAG is to transform the event table into an entity-based directed acyclic graph (EDAG). The EDAG format can transform the hard table-filling task into several sequential path-expanding sub-tasks that are more tractable. To support the EDAG generation efficiently, Doc2EDAG encodes entities with document-level contexts and designs a memory mechanism for path expanding. Moreover, to ease the DS-based document-level event labeling, we propose a novel DEE formalization that removes the trigger-words labeling and regards DEE as directly filling event tables based on a document. This no-trigger-words design does not rely on any predefined trigger-words set or heuristic to filter multiple trigger candidates, and still perfectly matches the ultimate goal of DEE, mapping a document to underlying event tables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate the effectiveness of our proposed Doc2EDAG, we conduct experiments on a realworld dataset, consisting of large scales of financial announcements. In contrast to the dataset used by DCFEE where 97% 3 documents just contained one event record, our data collection is ten times larger where about 30% documents include multiple event records. Extensive experiments demonstrate that Doc2EDAG can significantly outper-form state-of-the-art methods when facing DEEspecific challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our contributions include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel model, Doc2EDAG, which can directly generate event tables based on a document, to address unique challenges of DEE effectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We reformalize a DEE task without trigger words to ease the DS-based document-level event labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We build a large-scale real-world dataset for DEE with the unique challenges of arguments-scattering and multi-event, the extensive experiments on which demonstrate the superiority of Doc2EDAG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Note that though we focus on ChFinAnn data in this work, we tackle those DEE-specific challenges without any domain-specific assumption. Therefore, our general labeling and modeling strategies can directly benefit many other business domains with similar challenges, such as criminal facts and judgments extraction from legal documents, disease symptoms and doctor instructions identification from medical reports, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent development on information extraction has been advancing in building the joint model that can extract entities and identify structures (relations or events) among them simultaneously. For instance, (Ren et al., 2017; Zheng et al., 2017; Zeng et al., 2018a; Wang et al., 2018) focused on jointly extracting entities and inter-entity relations. In the meantime, the same to the focus of this paper, a few studies aimed at designing joint models for the entity and event extraction, such as handcrafted-feature-based (Li et al., 2014; Yang and Mitchell, 2016; Judea and Strube, 2016) and neural-network-based (Zhang and Ji, 2018 ; Nguyen and Nguyen, 2019) models. Nevertheless, these models did not present how to handle argument candidates beyond the sentence scope. (Yang and Mitchell, 2016) claimed to handle event-argument relations across sentences with the prerequisite of well-defined features, which, unfortunately, is nontrivial.",
"cite_spans": [
{
"start": 205,
"end": 223,
"text": "(Ren et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 224,
"end": 243,
"text": "Zheng et al., 2017;",
"ref_id": "BIBREF35"
},
{
"start": 244,
"end": 263,
"text": "Zeng et al., 2018a;",
"ref_id": "BIBREF31"
},
{
"start": 264,
"end": 282,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 521,
"end": 538,
"text": "(Li et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 539,
"end": 563,
"text": "Yang and Mitchell, 2016;",
"ref_id": "BIBREF29"
},
{
"start": 564,
"end": 587,
"text": "Judea and Strube, 2016)",
"ref_id": "BIBREF11"
},
{
"start": 613,
"end": 632,
"text": "(Zhang and Ji, 2018",
"ref_id": "BIBREF33"
},
{
"start": 772,
"end": 797,
"text": "(Yang and Mitchell, 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In addition to the modeling challenge, another big obstacle for democratizing EE is the lack of training data due to the enormous cost to obtain expert annotations. To address this problem, some researches attempted to adapt distant supervision (DS) to the EE setting, since DS has shown promising results by employing knowledge bases to automatically generate training data for relation extraction (Mintz et al., 2009) . However, the vanilla EE required the trigger words that were absent on factual knowledge bases. Therefore, employed either linguistic resources or predefined dictionaries for trigger-words labeling. On the other hand, another recent work (Zeng et al., 2018b) showed that directly labeling event arguments without trigger words was also feasible. However, they only considered the SEE setting and their methods cannot be directly extended to the DEE setting, which is the main focus of this work.",
"cite_spans": [
{
"start": 399,
"end": 419,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF18"
},
{
"start": 660,
"end": 680,
"text": "(Zeng et al., 2018b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Traditionally, when applying DS to relation extraction, researchers put huge efforts into alleviating labeling noises (Riedel et al., 2010; Lin et al., 2016; Zheng et al., 2019) . In contrast, this work shows that combining DS with some simple constraints can obtain pretty good labeling quality for DEE, where the reasons are two folds: 1) both the knowledge base and text documents are from the same domain; 2) an event record usually contains multiple arguments, while a common relational fact only covers two entities.",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF23"
},
{
"start": 140,
"end": 157,
"text": "Lin et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 158,
"end": 177,
"text": "Zheng et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We first clarify several key notions: 1) entity mention: an entity mention is a text span that refers to an entity object; 2) event role: an event role corresponds to a predefined field of the event table; 3) event argument: an event argument is an entity that plays a specific event role; 4) event record: an event record corresponds to an entry of the event table and contains several arguments with required roles. For example, Figure 2 shows two event records, where the entity \"[PER]\" is an event argument with the Pledger role.",
"cite_spans": [],
"ref_spans": [
{
"start": 431,
"end": 439,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "To better elaborate and evaluate our proposed approach, we leverage the ChFinAnn data in this paper. ChFinAnn documents contain firsthand official disclosures of listed companies in the Chinese stock market and have hundreds of types, such as annual reports and earnings estimates. While in this work, we focus on those eventrelated ones that are frequent, influential, and mainly expressed by the natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "As a prerequisite to DEE, we first conduct the DSbased event labeling at the document level. More specifically, we map tabular records from an event knowledge base to document text and regard wellmatched records as events expressed by that document. Moreover, we adopt a no-trigger-words design and reformalize a novel DEE task accordingly to enable end-to-end model designs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Event Labeling",
"sec_num": "4"
},
{
"text": "Event Labeling. To ensure the labeling quality, we set two constraints for matched records: 1) arguments of predefined key event roles must exist (non-key ones can be empty) and 2) the number of matched arguments should be higher than a certain threshold. Configurations of these constraints are event-specific, and in practice, we can tune them to directly ensure the labeling quality at the document level. We regard records that meet these two constraints as the well-matched ones, which serve as distantly supervised ground truths. In addition to labeling event records, we assign roles of arguments to matched tokens as token-level entity tags. Note that we do not label trigger words explicitly. Besides not affecting the DEE functionality, an extra benefit of such no-trigger-words design is a much easier DS-based labeling that does not rely on predefined trigger-words dictionaries or manually curated heuristics to filter multiple potential trigger words. DEE Task Without Trigger Words. We reformalize a novel task for DEE as directly filling event tables based on a document, which generally requires three sub-tasks: 1) entity extraction, extracting entity mentions as argument candidates, 2) event detection, judging a document to be triggered or not for each event type, and 3) event table filling, filling arguments into the table of triggered events. This novel DEE task is much different from the vanilla SEE with trigger words but is consistent with the above simplified DS-based event labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Event Labeling",
"sec_num": "4"
},
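As a concrete illustration of the two labeling constraints above, here is a minimal Python sketch of the matching step; the record layout and the `key_roles`/`min_match` parameters are illustrative assumptions, not the authors' exact implementation:

```python
# Sketch of DS-based document-level event labeling (illustrative only).
# A KB record is kept as a ground-truth event for a document if
# (1) all key roles are matched and (2) the number of matched
# arguments reaches an event-specific threshold.

def label_document(doc_text, records, key_roles, min_match):
    """Return (matched_records, token_level_tags) for one document."""
    matched = []
    for record in records:  # record: dict role -> argument string or None
        args = {r: a for r, a in record.items() if a and a in doc_text}
        if not all(r in args for r in key_roles):
            continue  # constraint 1: every key role must be matched
        if len(args) < min_match:
            continue  # constraint 2: enough arguments must be matched
        matched.append(args)
    # Assign roles of matched arguments as token-level entity tags;
    # no trigger words are labeled.
    tags = {}
    for args in matched:
        for role, arg in args.items():
            start = doc_text.find(arg)
            tags[(start, start + len(arg))] = role
    return matched, tags

# Toy usage:
doc = "[PER] pledged [SHARE1] shares to [ORG] on [DATE1]."
records = [{"Pledger": "[PER]", "PledgedShares": "[SHARE1]",
            "Pledgee": "[ORG]", "StartDate": "[DATE1]", "EndDate": None}]
print(label_document(doc, records, key_roles=["Pledger", "Pledgee"], min_match=3))
```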
{
"text": "The key idea of Doc2EDAG is to transform tabular event records into an EDAG and let the model learn to generate this EDAG based on documentlevel contexts. Following the example in Figure 2 , Figure 3 typically depicts an EDAG generation process and Figure 4 presents the overall workflow of Doc2EDAG, which consists of two key stages: Figure 3 : An EDAG generation example that starts from event triggering and expands sequentially following the predefined order of event roles. document-level entity encoding (Section 5.1) and EDAG generation (Section 5.2). Before elaborating each of them in this section, we first describe two preconditioned modules: input representation and entity recognition.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Figure 2",
"ref_id": null
},
{
"start": 191,
"end": 199,
"text": "Figure 3",
"ref_id": null
},
{
"start": 249,
"end": 257,
"text": "Figure 4",
"ref_id": null
},
{
"start": 335,
"end": 343,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Doc2EDAG",
"sec_num": "5"
},
{
"text": "Input Representation. In this paper, we denote a document as a sequence of sentences. Formally, after looking up the token embedding ta-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Doc2EDAG",
"sec_num": "5"
},
{
"text": "ble V \u2208 R dw\u00d7|V | , we denote a document d as a sentence sequence [s 1 ; s 2 ; \u2022 \u2022 \u2022 ; s Ns ] and each sentence s i \u2208 R dw\u00d7Nw is composed of a sequence of token embeddings as [w i,1 , w i,2 , \u2022 \u2022 \u2022 , w i,Nw ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Doc2EDAG",
"sec_num": "5"
},
{
"text": "where |V | is the vocabulary size, N s and N w are the maximum lengths of the sentence sequence and the token sequence, respectively, and w i,j \u2208 R dw is the embedding of j th token in i th sentence with the embedding size d w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Doc2EDAG",
"sec_num": "5"
},
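For concreteness, a minimal PyTorch sketch of this input representation; the vocabulary size and the randomly initialized embedding table are illustrative:

```python
import torch
import torch.nn as nn

# Sketch: a document is an (N_s, N_w) grid of token ids; looking up the
# embedding table V yields an (N_s, N_w, d_w) tensor of token embeddings.
vocab_size, d_w, N_s, N_w = 10000, 768, 64, 128  # sizes are illustrative
V = nn.Embedding(vocab_size, d_w)                # token embedding table
doc_token_ids = torch.randint(0, vocab_size, (N_s, N_w))
doc_embeddings = V(doc_token_ids)                # shape: (N_s, N_w, d_w)
print(doc_embeddings.shape)
```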
{
"text": "Entity Recognition. Entity recognition is a typical sequence tagging task. We conduct this task at the sentence level and follow a classic method, BI-LSTM-CRF (Huang et al., 2015) , that first encodes the token sequence and then adds a conditional random field (CRF) layer to facilitate the sequence tagging. The only difference is that we employ the Transformer (Vaswani et al., 2017) instead of the original encoder, LSTM (Hochreiter and Schmidhuber, 1997). Transformer encodes a sequence of embeddings by the multiheaded self-attention mechanism to exchange contextual information among them. Due to the superior performance of the Transformer, we employ it as a primary context encoder in this work and name the Transformer module used in this stage as Transformer-1. Formally, for each sentence tensor s i \u2208 R dw\u00d7Nw , we get the encoded one as h i = Transformer-1(s i ), where h i \u2208 R dw\u00d7Nw shares the same embedding size d w and sequence length N w . During training, we employ roles of matched arguments as entity labels with the classic BIO (Begin, Inside, Other) scheme and wrap h i with a CRF layer to get the entity-recognition loss L er . As for the inference, we use the Viterbi decoding to get the best tagging sequence.",
"cite_spans": [
{
"start": 159,
"end": 179,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 363,
"end": 385,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Doc2EDAG",
"sec_num": "5"
},
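A simplified sketch of this tagging stage, assuming the third-party pytorch-crf package for the CRF layer; the layer counts, sizes, and tag count below are illustrative, not the paper's settings:

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf (assumed here)

class SentenceTagger(nn.Module):
    """Sketch of the sentence-level tagger: Transformer encoder + CRF,
    a simplification of the stage the paper calls Transformer-1."""
    def __init__(self, d_w=768, num_tags=9, nhead=8, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_w, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.emission = nn.Linear(d_w, num_tags)   # token -> BIO tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, sent_emb, tags):
        h = self.encoder(sent_emb)                 # contextualized tokens
        return -self.crf(self.emission(h), tags)   # negative log-likelihood

    def decode(self, sent_emb):
        h = self.encoder(sent_emb)
        return self.crf.decode(self.emission(h))   # Viterbi best paths

# Toy usage:
tagger = SentenceTagger()
sent = torch.randn(2, 128, 768)                    # (batch, N_w, d_w)
tags = torch.zeros(2, 128, dtype=torch.long)
print(tagger.loss(sent, tags), tagger.decode(sent)[0][:5])
```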
{
"text": "To address the arguments-scattering challenge efficiently, it is indispensable to leverage global contexts to better identify whether an entity plays a specific event role. Consequently, we utilize document-level entity encoding to encode extracted entity mentions with such contexts and produce an embedding of size d w for each entity mention with a distinct surface name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Entity Encoding",
"sec_num": "5.1"
},
{
"text": "Entity & Sentence Embedding. Since an entity mention usually covers multiple tokens with a variable length, we first obtain a fixed-sized embedding for each entity mention by conducting a max-pooling operation over its covered token embeddings. For example, given l th entity mention covering j th to k th tokens of i th sentence, we conduct the max-pooling over",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Entity Encoding",
"sec_num": "5.1"
},
{
"text": "[h i,j , \u2022 \u2022 \u2022 , h i,k ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Entity Encoding",
"sec_num": "5.1"
},
{
"text": "to get the entity mention embedding e l \u2208 R dw . For each sentence s i , we also take the maxpooling operation over the encoded token se-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Entity Encoding",
"sec_num": "5.1"
},
{
"text": "quence [h i,1 , \u2022 \u2022 \u2022 , h i,Nw",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Entity Encoding",
"sec_num": "5.1"
},
{
"text": "] to obtain a single sentence embedding c i \u2208 R dw . After these operations, both the mention and the sentence embeddings share the same embedding size d w . Document-level Encoding. Though we get embeddings for all sentences and entity mentions, these embeddings only encode local contexts within the sentence scope. To enable the awareness of document-level contexts, we employ the second Transformer module, Transformer-2, to facilitate the information exchange between all entity mentions and sentences. Before feeding them into Transformer-2, we add them with sentence position embeddings to inform the sentence order. After the Transformer encoding, we utilize the max-pooling operation again to merge multiple mention embeddings with the same entity surface name into a single embedding. Formally, after this stage, we obtain document-level contextaware entity mention and sentence embeddings as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Entity Encoding",
"sec_num": "5.1"
},
{
"text": "e d = [e d 1 , \u2022 \u2022 \u2022 , e d Ne ] and c d = [c d 1 , \u2022 \u2022 \u2022 , c d Ns ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Entity Encoding",
"sec_num": "5.1"
},
{
"text": ", respectively, where N e is the number of distinct entity surface names. These aggregated embeddings serve the next stage to fill event tables directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Entity Encoding",
"sec_num": "5.1"
},
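A toy PyTorch sketch of the document-level entity encoding stage; shapes, mention spans, surface names, and layer counts are illustrative, and batching/masking are omitted:

```python
import torch
import torch.nn as nn

# h: (N_s, N_w, d_w) token embeddings from the sentence-level encoder.
d_w, N_s, N_w = 768, 4, 16
h = torch.randn(N_s, N_w, d_w)

# Mention embedding: max-pool the tokens a mention covers (tokens j..k of
# sentence i).  Sentence embedding: max-pool the whole token sequence.
def mention_emb(i, j, k):
    return h[i, j:k + 1].max(dim=0).values        # (d_w,)

sent_emb = h.max(dim=1).values                     # (N_s, d_w)

# Transformer-2 exchanges document-level context among all mentions and
# sentences; sentence position embeddings inform the sentence order.
pos = nn.Embedding(N_s, d_w)
mentions = [(0, 2, 4, "[PER]"), (2, 1, 3, "[PER]"), (3, 0, 2, "[ORG]")]
m = torch.stack([mention_emb(i, j, k) + pos(torch.tensor(i))
                 for i, j, k, _ in mentions])
s = sent_emb + pos(torch.arange(N_s))
layer = nn.TransformerEncoderLayer(d_model=d_w, nhead=8, batch_first=True)
transformer2 = nn.TransformerEncoder(layer, num_layers=2)
out = transformer2(torch.cat([m, s]).unsqueeze(0)).squeeze(0)

# Merge mention embeddings sharing a surface name by max-pooling.
names = [name for *_, name in mentions]
e_d = {n: out[[i for i, x in enumerate(names) if x == n]].max(0).values
       for n in dict.fromkeys(names)}
c_d = out[len(mentions):]                          # (N_s, d_w)
print(len(e_d), c_d.shape)
```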
{
"text": "After the document-level entity encoding stage, we can obtain the document embedding t \u2208 R dw by operating the max-pooling over the sentence tensor c d \u2208 R dw\u00d7Ns and stack a linear classifier over t to conduct the event-triggering classification for each event type. Next, for each triggered event type, we learn to generate an EDAG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EDAG Generation",
"sec_num": "5.2"
},
{
"text": "EDAG Building. Before the model training, we need to build the EDAG from tabular event records. For each event type, we first manually define an event role order. Then, we transform each event record into a linked list of arguments following this order, where each argument node is either an entity or a special empty argument NA. Finally, we merge these linked lists into an EDAG by sharing the same prefix path. Since every complete path of the EDAG corresponds to one row of the event table, recovering the table format from a given EDAG is simple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EDAG Generation",
"sec_num": "5.2"
},
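A minimal sketch of this EDAG-building step; representing nodes as nested dicts (a prefix-sharing trie) is an illustrative simplification:

```python
# Sketch: build an EDAG from tabular event records by merging per-record
# argument lists (ordered by a predefined role order) on shared prefixes;
# structurally this is a trie whose nodes are entities or the special NA.

NA = "NA"

def build_edag(records, role_order):
    root = {}
    for record in records:            # record: dict role -> entity or None
        node = root
        for role in role_order:
            arg = record.get(role) or NA
            node = node.setdefault(arg, {})   # shared prefixes merge here
    return root

# Every complete root-to-leaf path corresponds to one event table row.
def recover_table(edag, path=()):
    if not edag:
        return [path]
    return [row for arg, child in edag.items()
            for row in recover_table(child, path + (arg,))]

roles = ["Pledger", "PledgedShares", "Pledgee"]
records = [
    {"Pledger": "[PER]", "PledgedShares": "[SHARE2]", "Pledgee": "[ORG1]"},
    {"Pledger": "[PER]", "PledgedShares": "[SHARE4]", "Pledgee": "[ORG2]"},
]
print(recover_table(build_edag(records, roles)))  # two rows share "[PER]"
```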
{
"text": "Task Decomposition. The EDAG format aims to simplify the hard table-filling task into several tractable path-expanding sub-tasks. Then, a natural question is how the task decomposition works, which can be answered by the following EDAG recovering procedure. Assume the event triggering as the starting node (the initial EDAG), there comes a series of path-expanding sub-tasks following a predefined event role order. When considering a certain role, for every leaf node of the current EDAG, there is a path-expanding sub-task that decides which entities to be expanded. For each entity to be expanded, we create a new node of that entity for the current role and expand the path by connecting the current leaf node to the new entity node. If no entity is valid for expanding, we create a special NA node. When all sub-tasks for the current role finish, we move to the next role and repeat until the last. In this work, we leverage the above logic to recover the EDAG from pathexpanding predictions at inference and to set associated labels for each sub-task when training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EDAG Generation",
"sec_num": "5.2"
},
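The decomposition can be sketched as follows, with a hypothetical `predict_expand` (here a fixed lookup) standing in for the model's per-sub-task classifier:

```python
# Sketch of the path-expanding decomposition: starting from the event-
# triggering node (the initial EDAG), every leaf spawns one sub-task per
# event role that decides which entities to expand; NA marks the absence
# of a valid entity.

NA = "NA"

def generate_paths(role_order, predict_expand):
    paths = [()]                         # the initial EDAG: one empty path
    for role in role_order:
        new_paths = []
        for path in paths:               # one sub-task per current leaf
            expanded = predict_expand(path, role) or [NA]
            new_paths.extend(path + (ent,) for ent in expanded)
        paths = new_paths
    return paths                         # each complete path = one record

# Toy predictor backed by fixed decisions (in the model, a classifier).
decisions = {((), "Pledger"): ["[PER]"],
             (("[PER]",), "PledgedShares"): ["[SHARE2]", "[SHARE4]"]}
predict = lambda path, role: decisions.get((path, role), [])
print(generate_paths(["Pledger", "PledgedShares"], predict))
# -> [('[PER]', '[SHARE2]'), ('[PER]', '[SHARE4]')]
```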
{
"text": "Memory. To better fulfill each path-expanding sub-task, it is crucial to know entities already contained by the path. Hence, we design a memory mechanism that initializes a memory tensor m with the sentence tensor c d at the beginning and updates m when expanding the path by appending either the associated entity embedding or the zero-padded one for the NA argument. With this design, each sub-task can own a distinct memory tensor, corresponding to the unique path history. e r Figure 4 : The overall workflow of Doc2EDAG, where we follow the example in Figure 2 and the EDAG structure in Figure 3 , and use stripes to differentiate different entities (note that the number of input tokens and entity positions are imaginary, which do not match previous ones strictly, and here we only include the first three event roles and associated entities for brevity).",
"cite_spans": [],
"ref_spans": [
{
"start": 481,
"end": 489,
"text": "Figure 4",
"ref_id": null
},
{
"start": 557,
"end": 565,
"text": "Figure 2",
"ref_id": null
},
{
"start": 592,
"end": 600,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "EDAG Generation",
"sec_num": "5.2"
},
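A minimal sketch of the memory mechanism; sizes are illustrative:

```python
import torch

# Each path owns a memory tensor that starts as the sentence tensor c_d
# and grows by one row per expanded node: the entity's embedding, or a
# zero row for the NA argument.
d_w = 768
c_d = torch.randn(4, d_w)               # document-level sentence tensor

def update_memory(m, entity_emb=None):
    row = entity_emb if entity_emb is not None else torch.zeros(d_w)
    return torch.cat([m, row.unsqueeze(0)], dim=0)

m = c_d                                  # initial memory
m = update_memory(m, torch.randn(d_w))   # expanded with an entity
m = update_memory(m)                     # expanded with NA (zero-padded)
print(m.shape)                           # (6, d_w): 4 sentences + 2 nodes
```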
{
"text": "Path Expanding. For each path-expanding subtask, we formalize it as a collection of multiple binary classification problems, that is predicting expanding (1) or not (0) for all entities. To enable the awareness of the current path state, history contexts and the current event role, we first concatenate the memory tensor m and the entity tensor e d , then add them with a trainable eventrole-indicator embedding, and encode them with the third Transformer module, Transformer-3, to facilitate the context-aware reasoning. Finally, we extract the enriched entity tensor e r from outputs of Transformer-3 and stack a linear classifier over e r to conduct the path-expanding classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EDAG Generation",
"sec_num": "5.2"
},
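A toy sketch of one path-expanding sub-task; the module sizes and layer counts are illustrative, and batching/masking are omitted:

```python
import torch
import torch.nn as nn

# The memory tensor m and entity tensor e_d are concatenated, shifted by
# a trainable event-role-indicator embedding, encoded by Transformer-3,
# and the enriched entity rows e_r feed a binary expand/not classifier.
d_w, num_roles, N_e = 768, 8, 5
m = torch.randn(6, d_w)                      # path-specific memory tensor
e_d = torch.randn(N_e, d_w)                  # document-level entity tensor

role_indicator = nn.Embedding(num_roles, d_w)
layer = nn.TransformerEncoderLayer(d_model=d_w, nhead=8, batch_first=True)
transformer3 = nn.TransformerEncoder(layer, num_layers=2)
classifier = nn.Linear(d_w, 2)               # expand (1) or not (0)

role_id = torch.tensor(2)                    # current event role
x = torch.cat([m, e_d]) + role_indicator(role_id)
out = transformer3(x.unsqueeze(0)).squeeze(0)
e_r = out[-N_e:]                             # enriched entity tensor
logits = classifier(e_r)                     # (N_e, 2) per-entity decision
print(logits.argmax(dim=-1))
```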
{
"text": "Optimization. For the event-triggering classification, we calculate the cross-entropy loss L tr . During the EDAG generation, we calculate a cross-entropy loss for each path-expanding subtask, and sum these losses as the final EDAGgeneration loss L dag . Finally, we sum L tr , L dag and the entity-recognition loss L er together as the final loss, L all = \u03bb 1 L er + \u03bb 2 L tr + \u03bb 3 L dag , where \u03bb 1 , \u03bb 2 and \u03bb 3 are hyper-parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EDAG Generation",
"sec_num": "5.2"
},
{
"text": "Inference. Given a document, Doc2EDAG first recognizes entity mentions from sentences, then encodes them with document-level contexts, and finally generates an EDAG for each triggered event type by conducting a series of pathexpanding sub-tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EDAG Generation",
"sec_num": "5.2"
},
{
"text": "Practical Tips. During training, we can utilize both ground-truth entity tokens and the given EDAG structure. While at inference, we need to first identify entities and then expand paths sequentially based on embeddings of those entities to recover the EDAG. This gap between training and inference can cause severe error-propagation problems. To mitigate such problems, we utilize the scheduled sampling (Bengio et al., 2015) to gradually switch the inputs of document-level entity encoding from ground-truth entity mentions to model recognized ones. Moreover, for pathexpanding classifications, false positives are more harmful than false negatives, because the former can cause a completely wrong path. Accordingly, we can set \u03b3(> 1) as the negative class weight of the associated cross-entropy loss.",
"cite_spans": [
{
"start": 405,
"end": 426,
"text": "(Bengio et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EDAG Generation",
"sec_num": "5.2"
},
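A sketch of these two tips; the linear switching schedule and the example value of gamma are illustrative assumptions:

```python
import random
import torch
import torch.nn as nn

# 1) Scheduled sampling: with a probability that grows over training,
#    feed model-recognized entities instead of ground-truth ones.
def pick_entities(epoch, gold_entities, predicted_entities, ramp=10):
    p_model = min(1.0, epoch / ramp)     # gradually switch the inputs
    return predicted_entities if random.random() < p_model else gold_entities

# 2) Penalize false positives in path expanding more than false negatives
#    by giving the negative class (label 0, "do not expand") a weight
#    gamma > 1 in the cross-entropy loss.
gamma = 3.0
path_expand_loss = nn.CrossEntropyLoss(weight=torch.tensor([gamma, 1.0]))
logits = torch.randn(5, 2)               # 5 entities, expand-or-not logits
labels = torch.tensor([0, 1, 0, 0, 1])
print(path_expand_loss(logits, labels))
```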
{
"text": "In this section, we present thorough empirical studies to answer the following questions: 1) to what extent can Doc2EDAG improve over stateof-the-art methods when facing DEE-specific challenges? 2) how do different models behave when facing both arguments-scattering and multievent challenges? 3) how important are various components of Doc2EDAG?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Data Collection with Event Labeling. We utilize ten years (2008-2018) ChFinAnn 4 documents and human-summarized event knowledge bases to conduct the DS-based event labeling. We focus on five event types: Equity Freeze (EF), Equity Repurchase (ER), Equity Underweight (EU), Equity Overweight (EO) and Equity Pledge (EP), which belong to major events required to be disclosed by the regulator and may have a huge impact on the company value. To ensure the labeling quality, we set constraints for matched document-record pairs as Section 4 describes. Moreover, we directly use the character tokenization to avoid error propagations from Chinese word segmentation tools. Finally, we obtain 32, 040 documents in total, and this number is ten times larger than 2, 976 of DCFEE and about 53 times larger than 599 of ACE 2005. We divide these documents into train, development, and test set with the proportion of 8 : 1 : 1 based on the time order. In Table 1 , we show the number of documents and the multi-event ratio (MER) for each event type on this dataset. Note that a few documents may contain multiple event types at the same time.",
"cite_spans": [],
"ref_spans": [
{
"start": 945,
"end": 952,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "Data Quality. To verify the quality of DS-based event labeling, we randomly select 100 documents and manually annotate them. By regarding DS-generated event tables as the prediction and human-annotated ones as the ground-truth, we evaluate the labeling quality based on the metric introduced below. Table 2 shows this approximate evaluation, and we can observe that DS-generated data are pretty good, achieving high precision and acceptable recall. In later experiments, we directly employ the automatically generated test set for evaluation due to its much broad coverage.",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "Evaluation Metric. The ultimate goal of DEE is to fill event tables with correct arguments for each role. Therefore, we evaluate DEE by directly comparing the predicted event table with the groundtruth one for each event type. Specifically, for each document and each event type, we pick one predicted record and one most similar ground-truth record (at least one of them is non-empty) from associated event tables without replacement to calculate event-role-specific true positive, false positive and false negative statistics until no record left. After aggregating these statistics among all evaluated documents, we can calculate role-level precision, recall, and F1 scores (all reported in percentage format). As an event type often includes multiple roles, we calculate micro-averaged rolelevel scores as the final event-level metric that reflects the ability of end-to-end DEE directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
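A simplified sketch of this matching procedure; the similarity function and the counting of a mismatched argument as both a false positive and a false negative are our assumptions about details not fully specified here:

```python
# For each document and event type, repeatedly pair a predicted record
# with the most similar remaining ground-truth record (an empty stand-in
# when one side runs out), accumulate role-level TP/FP/FN, then
# micro-average into event-level precision/recall/F1.

def similarity(pred, gold):
    return sum(pred.get(r) == a for r, a in gold.items() if a)

def score(pred_records, gold_records, roles):
    tp = fp = fn = 0
    preds, golds = list(pred_records), list(gold_records)
    empty = {r: None for r in roles}
    while preds or golds:
        p = preds.pop(0) if preds else empty
        g = max(golds, key=lambda g: similarity(p, g)) if golds else empty
        if golds:
            golds.remove(g)
        for r in roles:
            p_arg, g_arg = p.get(r), g.get(r)
            if p_arg and p_arg == g_arg:
                tp += 1
            else:
                if p_arg:
                    fp += 1      # wrong or spurious predicted argument
                if g_arg:
                    fn += 1      # missed ground-truth argument
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

roles = ["Pledger", "PledgedShares", "Pledgee"]
pred = [{"Pledger": "[PER]", "PledgedShares": "[SHARE1]", "Pledgee": "[ORG]"}]
gold = [{"Pledger": "[PER]", "PledgedShares": "[SHARE2]", "Pledgee": "[ORG]"}]
print(score(pred, gold, roles))  # 2 TP, 1 FP, 1 FN -> (0.667, 0.667, 0.667)
```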
{
"text": "Hyper-parameter Setting. For the input, we set the maximum number of sentences and the maximum sentence length as 64 and 128, respectively. During training, we set \u03bb 1 = 0.05, \u03bb 2 = \u03bb 3 = 0.95 and \u03b3 = 3. We employ the Adam (Kingma and Ba, 2015) optimizer with the learning rate 1e \u22124 , train for at most 100 epochs and pick the best epoch by the validation score on the development set. Besides, we leverage the decreasing order of the non-empty argument ratio as the event role order required by Doc2EDAG, because more informative entities in the path history can better facilitate later path-expanding classifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
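A minimal sketch of deriving this event role order from training records (toy data):

```python
# Roles are sorted by decreasing ratio of non-empty arguments in the
# training records, so that more informative entities appear earlier in
# the path history.

def role_order(records, roles):
    def non_empty_ratio(role):
        return sum(1 for rec in records if rec.get(role)) / len(records)
    return sorted(roles, key=non_empty_ratio, reverse=True)

records = [
    {"Pledger": "[PER]", "Pledgee": "[ORG]", "EndDate": None},
    {"Pledger": "[PER]", "Pledgee": None, "EndDate": None},
]
print(role_order(records, ["EndDate", "Pledger", "Pledgee"]))
# -> ['Pledger', 'Pledgee', 'EndDate']
```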
{
"text": "Note that, due to the space limit, we leave other detailed hyper-parameters, model structures, data preprocessing configurations, event type specifications and pseudo codes for EDAG generation to the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "Baselines. As discussed in the related work, the state-of-the-art method applicable to our setting is DCFEE. We follow the implementation described in , but they did not illustrate how to handle multi-event sentences with just a sequence tagging model. Thus, we develop two versions, DCFEE-O and DCFEE-M, where DCFEE-O only produces one event record from one keyevent sentence, while DCFEE-M tries to get multiple possible argument combinations by the closest relative distance from the key-event sentence. To be fair, the SEE stages of both versions share the same neural architecture as the entity recognition part of Doc2EDAG. Besides, we further employ a simple decoding baseline of Doc2EDAG, Greedy-Dec, that only fills one event table entry greedily by using recognized entity roles to verify the necessity of end-to-end modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparisons",
"sec_num": "6.2"
},
{
"text": "Main Results. As Table 3 shows, Doc2EDAG achieves significant improvements over all base- -PathMem -11.2 -0.2 -10.1 -16.3 -10.9 -9.7 -SchSamp -5.3 -4.8 -5.3 -6.6 -3.0 -5.0 -DocEnc -4.7 -1.5 -1.6 -1.1 -1.5 -2.1 -NegCW -1.4 -0.4 -0.7 -1.3 -0.4 -0.8 Table 5 : Performance differences of Doc2EDAG variants for all event types and the averaged ones (Avg.).",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 24,
"text": "Table 3",
"ref_id": null
},
{
"start": 247,
"end": 254,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparisons",
"sec_num": "6.2"
},
{
"text": "lines for all event types. Specifically, Doc2EDAG improves 19.1, 4.2, 26.5, 28.4 and 13.4 F1 scores over DCFEE-O, the best baseline, on EF, ER, EU, EO and EP events, respectively. These vast improvements mainly owe to the document-level end-to-end modeling of Doc2EDAG. Moreover, since we work on automatically generated data, the direct document-level supervision can be more robust than the extra sentence-level supervision used in DCFEE, which assumes the sentence containing most event arguments as the key-event one. This assumption does not work well on some event types, such as EF, EU and EO, on which DCFEE-O is even inferior to the most straightforward baseline, GreedyDec. Besides, DCFEE-O achieves better results than DCFEE-M, which demonstrates that naively guessing multiple events from the keyevent sentence cannot work well. By comparing Doc2EDAG with GreedyDec that owns high precision but low recall, we can clearly see the benefit of document-level end-to-end modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparisons",
"sec_num": "6.2"
},
{
"text": "Single-Event vs. Multi-Event. We divide the test set into a single-event set, containing documents with just one event record, and a multi-event set, containing others, to show the extreme difficulty when arguments-scattering meets multievent. Table 4 shows F1 scores for different scenarios. Although Doc2EDAG still maintains the highest extraction performance for all cases, the multi-event set is extremely challenging as the extraction performance of all models drops significantly. Especially, GreedyDec, with no mechanism for the multi-event challenge, decreases most drastically. DCFEE-O decreases less, but is still far away from Doc2EDAG. On the multi-event set, Doc2EDAG increases by 17.7 F1 scores over DCFEE-O, the best baseline, on average.",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 251,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Performance Comparisons",
"sec_num": "6.2"
},
{
"text": "Ablation Tests. To demonstrate key designs of Doc2EDAG, we conduct ablation tests by evaluating four variants: 1) -PathMem, removing the memory mechanism used during the EDAG generation, 2) -SchSamp, dropping the scheduled sampling strategy during training, 3) -DocEnc, removing the Transformer module used for documentlevel entity encoding, and 4) -NegCW, keeping the negative class weight as 1 when doing pathexpanding classifications. From Table 5 , we can observe that 1) the memory mechanism is of prime importance, as removing it can result in the most drastic performance declines, over 10 F1 scores on four event types except for the ER type whose MER is very low on the test set; 2) the scheduled sampling strategy that alleviates the mismatch of entity candidates for event table filling between training and inference also contributes greatly, improving by 5 F1 scores on average; 3) the document-level entity encoding that enhances global entity representations contributes 2.1 F1 scores on average; 4) the larger negative class weight to penalize false positive path expanding can also make slight but stable contributions for all event types.",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 450,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparisons",
"sec_num": "6.2"
},
{
"text": "Case Studies. Let us follow the example in Figure 2, Doc2EDAG can successfully recover the correct EDAG, while DCFEE inevitably makes many mistakes even with a perfect SEE model, as discussed in the introduction. Due to the space limit, we leave another three fine-grained case studies to the appendix.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 49,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparisons",
"sec_num": "6.2"
},
{
"text": "Towards the end-to-end modeling for DEE, we propose a novel model, Doc2EDAG, associated with a novel task formalization without trigger words to ease DS-based labeling. To validate the effectiveness of the proposed approach, we build a large-scale real-world dataset in the financial domain and conduct extensive empirical studies. Notably, without any domain-specific assumption, our general labeling and modeling strategies can benefit practitioners in other domains directly. As this work shows promising results for the end-to-end DEE, expanding the inputs of Doc2EDAG from pure text sequences to richly formatted ones (Wu et al., 2018) is appealing, and we leave it as future work to explore.",
"cite_spans": [
{
"start": 623,
"end": 640,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "In this paper, we use \"entity\" as a general notion that includes named entities, numbers, percentages, etc., for brevity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.ldc.upenn.edu/ collaborations/past-projects/ace",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Estimated by their Table 1 as 2 * NO.ANN\u2212NO.POS NO.ANN .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Crawling from http://www.cninfo.com.cn/ new/index",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported in part by the National Natural Science Foundation of China (NSFC) Grant 61532001 and the Zhongguancun Haihua Institute for Frontier Information Technology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "GreedyDec 79",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "GreedyDec 79",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Overall event-level precision (P.), recall (R.) and F1 scores evaluated on the test set",
"authors": [],
"year": null,
"venue": "Table",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 3: Overall event-level precision (P.), recall (R.) and F1 scores evaluated on the test set. Model EF ER EU EO EP Avg. S. M. S. M. S. M. S. M. S. M. S. M. S. & M.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The stages of event extraction",
"authors": [
{
"first": "References",
"middle": [],
"last": "David Ahn",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Annotating and Reasoning about Time and Events",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Scheduled sampling for sequence prediction with recurrent neural networks",
"authors": [
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for se- quence prediction with recurrent neural networks. In NIPS.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatically labeled data generation for large scale event extraction",
"authors": [
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data genera- tion for large scale event extraction. In ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Event extraction via dynamic multi-pooling convolutional neural networks",
"authors": [
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reinforcement learning for relation classification from noisy data",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xi- aoyan Zhu. 2018. Reinforcement learning for rela- tion classification from noisy data. In AAAI.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using cross-entity inference to improve event extraction",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Refining event extraction through cross-document inference",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Ralph Grishman. 2008. Refining event ex- traction through cross-document inference. In ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Incremental global event extraction",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Judea",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Judea and Michael Strube. 2016. Incremental global event extraction. In COLING.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederick",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Joint event extraction via structured prediction with global features",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global fea- tures. In ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Constructing information networks using one single model",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Heng Ji, HONG Yu, and Sujian Li. 2014. Constructing information networks using one single model. In EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using document level cross-event inference to improve event extraction",
"authors": [
{
"first": "Shasha",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shasha Liao and Ralph Grishman. 2010. Using doc- ument level cross-event inference to improve event extraction. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploiting argument information to improve event detection via supervised attention mechanisms",
"authors": [
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Joint event extraction via recurrent neural networks",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Thien Huu Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen, Kyunghyun Cho, and Ralph Gr- ishman. 2016. Joint event extraction via recurrent neural networks. In NAACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "One for all: Neural joint modeling of entities and events",
"authors": [
{
"first": "Minh",
"middle": [],
"last": "Trung",
"suffix": ""
},
{
"first": "Thien",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huu Nguyen",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trung Minh Nguyen and Thien Huu Nguyen. 2019. One for all: Neural joint modeling of entities and events. In AAAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "CoType: Joint extraction of typed entities and relations with knowledge bases",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Zeqiu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Clare",
"middle": [
"R"
],
"last": "Voss",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Tarek",
"middle": [
"F"
],
"last": "Abdelzaher",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2017,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, Tarek F Abdelzaher, and Jiawei Han. 2017. CoType: Joint extraction of typed entities and relations with knowledge bases. In WWW.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fast and robust joint models for biomedical event extraction",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel and Andrew McCallum. 2011. Fast and robust joint models for biomedical event extrac- tion. In EMNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "ECML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In ECML.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Jointly extracting event triggers and arguments by dependency-bridge rnn and tensor-based argument interaction",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Sha, Feng Qian, Baobao Chang, and Zhifang Sui. 2018. Jointly extracting event triggers and argu- ments by dependency-bridge rnn and tensor-based argument interaction. In AAAI.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Joint extraction of entities and relations based on a novel graph scheme",
"authors": [
{
"first": "Shaolei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaolei Wang, Yue Zhang, Wanxiang Che, and Ting Liu. 2018. Joint extraction of entities and relations based on a novel graph scheme. In IJCAI.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Adversarial training for weakly supervised event detection",
"authors": [
{
"first": "Xiaozhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, and Peng Li. 2019. Adversarial training for weakly supervised event detection. In NAACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Fonduer: Knowledge base construction from richly formatted data",
"authors": [
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Hsiao",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Theodoros",
"middle": [],
"last": "Rekatsinas",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Levis",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2018,
"venue": "SIGMOD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sen Wu, Luke Hsiao, Xiao Cheng, Braden Hancock, Theodoros Rekatsinas, Philip Levis, and Christo- pher R\u00e9. 2018. Fonduer: Knowledge base construc- tion from richly formatted data. In SIGMOD.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Joint extraction of events and entities within a document context",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Tom M. Mitchell. 2016. Joint extrac- tion of events and entities within a document con- text. In NAACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "DCFEE: A document-level chinese financial event extraction system based on automatically labeled training data",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Yang, Yubo Chen, Kang Liu, Yang Xiao, and Jun Zhao. 2018. DCFEE: A document-level chinese fi- nancial event extraction system based on automati- cally labeled training data. In Proceedings of ACL 2018, System Demonstrations.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Extracting relational facts by an end-to-end neural model with copy mechanism",
"authors": [
{
"first": "Xiangrong",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018a. Extracting relational facts by an end-to-end neural model with copy mechanism. In ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Scale up event extraction learning via automatic training data generation",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Chongde",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Zeng, Yansong Feng, Rong Ma, Zheng Wang, Rui Yan, Chongde Shi, and Dongyan Zhao. 2018b. Scale up event extraction learning via automatic training data generation. In AAAI.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Event extraction with generative adversarial imitation learning",
"authors": [
{
"first": "Tongtao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07881"
]
},
"num": null,
"urls": [],
"raw_text": "Tongtao Zhang and Heng Ji. 2018. Event extraction with generative adversarial imitation learning. arXiv preprint arXiv:1804.07881.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "DIAG-NRE: A neural pattern diagnosis framework for distantly supervised neural relation extraction",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Peilin",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shun Zheng, Xu Han, Yankai Lin, Peilin Yu, Lu Chen, Ling Huang, Zhiyuan Liu, and Wei Xu. 2019. DIAG-NRE: A neural pattern diagnosis framework for distantly supervised neural relation extraction. In ACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Joint extraction of entities and relations based on a novel tagging scheme",
"authors": [
{
"first": "Suncong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongyun",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Yuexing",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extrac- tion of entities and relations based on a novel tagging scheme. In ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The rapid growth of event-related announcements considered in this paper.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "After the company carried out the transferring of the capital accumulation fund to the capital stock, his pledged shares became [SHARE2]. The aforementioned pledged and supplementary pledged shares added up to [SHARE4], and the original repurchase date was [DATE3]. As of the date of this announcement, [PER] hold [SHARE5] of the company, accounting for [RATIO] of the total share capital of the company.",
"num": null,
"content": "<table><tr><td/><td colspan=\"3\">Entity Mark Table</td><td>Event</td><td/><td/><td/><td/><td colspan=\"2\">Event Table of Equity Pledge</td></tr><tr><td>Mark</td><td colspan=\"2\">Entity</td><td>Entity (English)</td><td>Record Role Event</td><td colspan=\"3\">Pledger Pledged Shares [PER] [SHARE2]</td><td>Pledgee [ORG]</td><td>Begin Date [DATE1]</td><td>End Date [DATE4]</td><td>Total Holding Shares [SHARE5]</td><td>Total Holding Ratio [RATIO]</td></tr><tr><td>[PER]</td><td/><td>5</td><td>Weiqun Liu</td><td/><td colspan=\"2\">[PER]</td><td>[SHARE3]</td><td>[ORG]</td><td>[DATE2]</td><td>[DATE4]</td><td>[SHARE5]</td><td>[RATIO]</td></tr><tr><td>[ORG]</td><td colspan=\"2\">7 6 38 .</td><td>Guosen Securities Co., ltd.</td><td>Event Argument</td><td>ID</td><td/><td/><td/><td/><td>Sentence</td></tr><tr><td>[DATE1]</td><td colspan=\"3\">0 2 1 Sept. 22nd, 2017</td><td>Entity</td><td>5</td><td colspan=\"2\">[DATE1] [PER]</td><td>[SHARE1]</td><td>[ORG]</td></tr><tr><td>[DATE2] [DATE3]</td><td colspan=\"2\">0 2%1 0 2 1</td><td>Sept. 6th, 2018 Sept. 20th, 2018</td><td>Mention</td><td>7</td><td/><td/><td/><td>[SHARE2]</td></tr><tr><td>[DATE4] [SHARE1]</td><td colspan=\"2\">0 2 1 6</td><td>Mar. 20th, 2019 750000 shares</td><td/><td>8</td><td colspan=\"5\">[DATE2] [PER] In [DATE2], [PER] pledged [SHARE3] to [ORG], as a supplementary pledge to the above pledged shares. [SHARE3] [ORG]</td></tr><tr><td>[SHARE2]</td><td/><td>6</td><td>975000 shares</td><td/><td>9</td><td/><td/><td>[SHARE4]</td><td colspan=\"2\">[DATE3]</td></tr><tr><td>[SHARE3]</td><td/><td>6</td><td>525000 shares</td><td/><td/><td/><td/><td/><td/></tr><tr><td>[SHARE4]</td><td/><td>6</td><td>1500000 shares</td><td/><td>10</td><td colspan=\"5\">[DATE3] [PER] In [DATE3], [PER] extended the repurchase date to [DATE4] for [SHARE4] he pledged. [SHARE4] [DATE4]</td></tr><tr><td>[SHARE5] [RATIO]</td><td>% %</td><td colspan=\"2\">6 16768903 shares 1.0858%</td><td/><td>12</td><td/><td>[PER]</td><td>[SHARE5]</td><td/><td>[RATIO]</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "",
"num": null,
"content": "<table><tr><td colspan=\"3\">: Dataset statistics about the number of docu-</td></tr><tr><td colspan=\"3\">ments for the train (#Train), development (#Dev) and</td></tr><tr><td colspan=\"3\">test (#Test), the number (#Total) and the multi-event</td></tr><tr><td colspan=\"2\">ratio (MER) of all documents.</td><td/></tr><tr><td colspan=\"3\">Precision Recall F1 MER (%)</td></tr><tr><td>98.8</td><td>89.7 94.0</td><td>31.0</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "The quality of the DS-based event labeling evaluated on 100 manually annotated documents (randomly select 20 for each event type).",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "F1 scores for all event types and the averaged ones (Avg.) on single-event (S.) and multi-event (M.) sets.",
"num": null,
"content": "<table><tr><td>Model</td><td>EF ER</td><td>EU</td><td>EO</td><td>EP Avg.</td></tr><tr><td colspan=\"5\">Doc2EDAG 70.2 87.3 71.8 75.0 77.3 76.3</td></tr></table>",
"type_str": "table",
"html": null
}
}
}
}