{ "paper_id": "D19-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:02:19.109229Z" }, "title": "Open Event Extraction from Online Text using a Generative Adversarial Network", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of Computer Network and Information Integration", "institution": "Southeast University", "location": { "country": "China" } }, "email": "ruiwang@seu.edu.cn" }, { "first": "Deyu", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of Computer Network and Information Integration", "institution": "Southeast University", "location": { "country": "China" } }, "email": "d.zhou@seu.edu.cn" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "", "affiliation": {}, "email": "yulan.he@warwick.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "To extract the structured representations of open-domain events, Bayesian graphical models have made some progress. However, these approaches typically assume that all words in a document are generated from a single event. While this may be true for short text such as tweets, such an assumption does not generally hold for long text such as news articles. Moreover, Bayesian graphical models often rely on Gibbs sampling for parameter inference which may take long time to converge. To address these limitations, we propose an event extraction model based on Generative Adversarial Nets, called Adversarial-neural Event Model (AEM). AEM models an event with a Dirichlet prior and uses a generator network to capture the patterns underlying latent events. A discriminator is used to distinguish documents reconstructed from the latent events and the original documents. A byproduct of the discriminator is that the features generated by the learned discriminator network allow the visualization of the extracted events. Our model has been evaluated on two Twitter datasets and a news article dataset. Experimental results show that our model outperforms the baseline approaches on all the datasets, with more significant improvements observed on the news article dataset where an increase of 15% is observed in F-measure.", "pdf_parse": { "paper_id": "D19-1027", "_pdf_hash": "", "abstract": [ { "text": "To extract the structured representations of open-domain events, Bayesian graphical models have made some progress. However, these approaches typically assume that all words in a document are generated from a single event. While this may be true for short text such as tweets, such an assumption does not generally hold for long text such as news articles. Moreover, Bayesian graphical models often rely on Gibbs sampling for parameter inference which may take long time to converge. To address these limitations, we propose an event extraction model based on Generative Adversarial Nets, called Adversarial-neural Event Model (AEM). AEM models an event with a Dirichlet prior and uses a generator network to capture the patterns underlying latent events. A discriminator is used to distinguish documents reconstructed from the latent events and the original documents. A byproduct of the discriminator is that the features generated by the learned discriminator network allow the visualization of the extracted events. Our model has been evaluated on two Twitter datasets and a news article dataset. 
Experimental results show that our model outperforms the baseline approaches on all the datasets, with more significant improvements observed on the news article dataset where an increase of 15% is observed in F-measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the increasing popularity of the Internet, online texts provided by social media platforms (e.g. Twitter) and news media sites (e.g. Google news) have become important sources of real-world events. Therefore, it is crucial to automatically extract events from online texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Due to the high variety of events discussed online and the difficulty in obtaining annotated data for training, traditional template-based or supervised learning approaches for event extraction are no longer applicable in dealing with online texts. Nevertheless, newsworthy events are often discussed by many tweets or online news articles. Therefore, the same event could be mentioned by a high volume of redundant tweets or news articles. This property inspires the research community to devise clustering-based models (Popescu et al., 2011; Abdelhaq et al., 2013; Xia et al., 2015) to discover new or previously unidentified events without extracting structured representations.", "cite_spans": [ { "start": 544, "end": 566, "text": "(Popescu et al., 2011;", "ref_id": "BIBREF17" }, { "start": 567, "end": 589, "text": "Abdelhaq et al., 2013;", "ref_id": "BIBREF0" }, { "start": 590, "end": 607, "text": "Xia et al., 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To extract structured representations of events such as who did what, when, where and why, Bayesian approaches have made some progress. Assuming that each document is assigned to a single event, which is modeled as a joint distribution over the named entities, the date and the location of the event, and the event-related keywords, Zhou et al. (2014) proposed an unsupervised Latent Event Model (LEM) for open-domain event extraction. To address the limitation that LEM requires the number of events to be pre-set, Zhou et al. (2017) further proposed the Dirichlet Process Event Mixture Model (DPEMM), in which the number of events can be learned automatically from data. However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event, which can be represented by a quadruple \u27e8e, l, k, d\u27e9. However, long texts such as news articles often describe multiple events, which clearly violates this assumption; (2) during the inference process of both approaches, the Gibbs sampler needs to compute the conditional posterior distribution and assign an event to each document. This is time consuming and takes a long time to converge.", "cite_spans": [ { "start": 333, "end": 351, "text": "Zhou et al. (2014)", "ref_id": "BIBREF27" }, { "start": 516, "end": 534, "text": "Zhou et al. (2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. 
The principal idea is to use a generator network to learn the projection function between the document-event distribution and four event-related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the documents reconstructed from the latent events and the original input documents. This essentially helps the generator to construct a more realistic document from random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions, and the supervision signal provided by the discriminator helps the generator capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of the paper are summarized below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a novel Adversarial-neural Event Model (AEM), which is, to the best of our knowledge, the first attempt at using adversarial training for open-domain event extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Unlike existing Bayesian graphical modeling approaches, AEM is able to extract events from different text sources (short and long), and a significant improvement in computational efficiency is also observed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Experimental results on three datasets show that AEM outperforms the baselines in terms of precision, recall and F-measure. In addition, the results show the strength of AEM in visualizing events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work is related to two lines of research, event extraction and Generative Adversarial Nets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently there has been much interest in event extraction from online texts, and approaches can be categorized as domain-specific and open-domain event extraction. Domain-specific event extraction often focuses on specific types of events (e.g. sports events or city events). Panem et al. (2014) devised a novel algorithm to extract attribute-value pairs and mapped them to manually generated schemes for extracting natural disaster events. Similarly, to extract city traffic-related events, Anantharam et al. (2015) viewed the task as a sequential tagging problem and proposed an approach based on conditional random fields. Zhang and Ji (2018) proposed an event extraction approach based on imitation learning, especially inverse reinforcement learning.", "cite_spans": [ { "start": 281, "end": 300, "text": "Panem et al. (2014)", "ref_id": "BIBREF15" }, { "start": 504, "end": 528, "text": "Anantharam et al. 
(2015)", "ref_id": "BIBREF1" }, { "start": 642, "end": 654, "text": "Zhang (2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Event Extraction", "sec_num": null }, { "text": "Open-domain event extraction aims to extract events without limiting the specific types of events. To analyze individual messages and induce a canonical value for each event, Benson et al. (2011) proposed an approach based on a structured graphical model. By representing an event with a binary tuple which is constituted by a named entity and a date, Ritter et al. (2012) employed some statistic to measure the strength of associations between a named entity and a date. The proposed system relies on a supervised labeler trained on annotated data. In (Abdelhaq et al., 2013) , Abdelhaq et al. developed a realtime event extraction system called EvenTweet, and each event is represented as a triple constituted by time, location and keywords. To extract more information, Wang el al. (2015) developed a system employing the links in tweets and combing tweets with linked articles to identify events. Xia el al. (2015) combined texts with the location information to detect the events with low spatial and temporal deviations. Zhou et al. (2014; represented event as a quadruple and proposed two Bayesian models to extract events from tweets.", "cite_spans": [ { "start": 175, "end": 195, "text": "Benson et al. (2011)", "ref_id": "BIBREF3" }, { "start": 352, "end": 372, "text": "Ritter et al. (2012)", "ref_id": "BIBREF19" }, { "start": 553, "end": 576, "text": "(Abdelhaq et al., 2013)", "ref_id": "BIBREF0" }, { "start": 901, "end": 918, "text": "Xia el al. (2015)", "ref_id": null }, { "start": 1027, "end": 1045, "text": "Zhou et al. (2014;", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Event Extraction", "sec_num": null }, { "text": "As a neural-based generative model, Generative Adversarial Nets (Goodfellow et al., 2014) have been extensively researched in natural language processing (NLP) community.", "cite_spans": [ { "start": 64, "end": 89, "text": "(Goodfellow et al., 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Generative Adversarial Nets", "sec_num": null }, { "text": "For text generation, the sequence generative adversarial network (SeqGAN) proposed in (Yu et al., 2017 ) incorporated a policy gradient strategy to optimize the generation process. Based on the policy gradient, Lin et al. (2017) proposed RankGAN to capture the rich structures of language by ranking and analyzing a collection of human-written and machine-written sentences. To overcome mode collapse when dealing with discrete data, Fedus et al. (2018) posed MaskGAN which used an actor-critic conditional GAN to fill in missing text conditioned on the surrounding context. Along this line, proposed SentiGAN to generate texts of different sentiment labels. Besides, improved the performance of semi-supervised text classification using adversarial training, (Zeng et al., 2018; Qin et al., 2018) designed GAN-based models for distance supervision relation extraction.", "cite_spans": [ { "start": 86, "end": 102, "text": "(Yu et al., 2017", "ref_id": "BIBREF24" }, { "start": 211, "end": 228, "text": "Lin et al. (2017)", "ref_id": "BIBREF12" }, { "start": 434, "end": 453, "text": "Fedus et al. 
(2018)", "ref_id": "BIBREF4" }, { "start": 760, "end": 779, "text": "(Zeng et al., 2018;", "ref_id": "BIBREF25" }, { "start": 780, "end": 797, "text": "Qin et al., 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Generative Adversarial Nets", "sec_num": null }, { "text": "Although various GAN based approaches have been explored for many applications, none of these approaches tackles open-domain event extraction from online texts. We propose a novel GANbased event extraction model called AEM. Compared with the previous models, AEM has the following differences: (1) Unlike most GAN-based text generation approaches, a generator network is employed in AEM to learn the projection function between an event distribution and the eventrelated word distributions (entity, location, keyword, date). The learned generator captures eventrelated patterns rather than generating text sequence; (2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration; (3) The discriminative features learned by the discriminator of AEM provide a straightforward way to visualize the extracted events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Adversarial Nets", "sec_num": null }, { "text": "We describe Adversarial-neural Event Model (AEM) in this section. An event is represented as a quadruple , where e stands for non-location named entities, l for a location, k for event-related keywords, d for a date, and each component in the tuple is represented by component-specific representative words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "AEM is constituted by three components: (1) The document representation module, as shown at the top of Figure 1 , defines a document representation approach which converts an input document from the online text corpus into \u20d7 d r \u2208 R V which captures the key event elements; (2) The generator G, as shown in the lower-left part of Figure1, generates a fake document \u20d7 d f which is constituted by four multinomial distributions using an event distribution \u20d7 \u03b8 drawn from a Dirichlet distribution as input; (3) The discriminator D, as shown in the lower-right part of Figure1, distinguishes the real documents from the fake ones and its output is subsequently employed as a learning signal to update the G and D. The details of each component are presented below.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Each document doc in a given corpus C is represented as a concatenation of 4 multinomial distributions which are entity distribution (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "3.1" }, { "text": "\u20d7 d e r ), lo- cation distribution ( \u20d7 d l r ), keyword distribution ( \u20d7 d k r ) and date distribution ( \u20d7 d d r )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "3.1" }, { "text": "of the document. 
As the four distributions are calculated in a similar way, we only describe the computation of the entity distribution below as an example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "3.1" }, { "text": "The entity distribution \u20d7 d e r is represented by a normalized V e -dimensional vector weighted by TF-IDF, and it is calculated as: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "3.1" }, { "text": "tf e i,doc = n e i,doc / \u2211 j n e j,doc , where n e i,doc denotes the number of occurrences of entity i in document doc. Each entry of \u20d7 d e r is the TF-IDF weight of the corresponding entity, normalized so that the entries sum to one. The location, keyword and date distributions are computed analogously over vocabularies of sizes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "3.1" }, { "text": "V l , V k and V d , respectively. Finally, each document doc in the corpus is represented by a V -dimensional (V = V e +V l +V k +V d ) vector \u20d7 d r by concatenating the four computed distributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Representation", "sec_num": "3.1" }, { "text": "The generator network G is designed to learn the projection function between the document-event distribution \u20d7 \u03b8 and the four document-level word distributions (entity distribution, location distribution, keyword distribution and date distribution).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "More concretely, G consists of an E-dimensional document-event distribution layer, an H-dimensional hidden layer and a V -dimensional event-related word distribution layer. Here, E denotes the event number, H is the number of units in the hidden layer, and V is the vocabulary size, which equals V e +V l +V k +V d . As shown in Figure 1 , G first takes a random document-event distribution \u20d7 \u03b8 as input. To model the multinomial property of the document-event distribution, \u20d7 \u03b8 is drawn from a Dirichlet distribution parameterized with \u20d7 \u03b1, which is formulated as:", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 327, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "p( \u20d7 \u03b8 | \u20d7 \u03b1) = Dir( \u20d7 \u03b8 | \u20d7 \u03b1) \u225c (1/\u25b3( \u20d7 \u03b1)) \u220f_{t=1}^{E} \u03b8_t^{\u03b1_t \u2212 1} (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "where \u20d7 \u03b1 is the hyper-parameter of the Dirichlet distribution, E is the number of events, which must be set in AEM,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "\u25b3( \u20d7 \u03b1) = \u220f_{t=1}^{E} \u0393(\u03b1_t) / \u0393(\u2211_{t=1}^{E} \u03b1_t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "\u03b8_t \u2208 [0, 1] represents the proportion of event t in the document, and \u2211_{t=1}^{E} \u03b8_t = 1. 
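For concreteness, a minimal sketch of this sampling step is given below. It is only an illustration, assuming PyTorch; the batch size and the concentration value are hypothetical choices, not taken from the paper.

```python
import torch
from torch.distributions import Dirichlet

E = 25                                  # number of events (the value used for FSD in Section 4.2)
alpha = torch.full((E,), 0.1)           # Dirichlet hyper-parameter (assumed value)
theta = Dirichlet(alpha).sample((32,))  # a batch of 32 document-event distributions
# Each row of theta lies on the simplex: entries are in [0, 1] and sum to 1.
```

Each sampled row plays the role of the document-event distribution that G maps to the four word distributions. 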
Subsequently, G transforms \u20d7 \u03b8 into an H-dimensional hidden space using a linear layer followed by layer normalization; the transformation is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u20d7 s_h = LN(W_h \u20d7 \u03b8 + \u20d7 b_h) (2) \u20d7 o_h = max( \u20d7 s_h , l_p \u00d7 \u20d7 s_h )", "eq_num": "(3)" } ], "section": "Generator", "sec_num": "3.2.1" }, { "text": "where W h \u2208 R H\u00d7E represents the weight matrix of the hidden layer, \u20d7 b h denotes the bias term, l p is the parameter of the LeakyReLU activation and is set to 0.1, \u20d7 s h and \u20d7 o h denote the normalized hidden states and the outputs of the hidden layer, and LN represents layer normalization. Then, to project \u20d7 o h into the four document-level event-related word distributions ( \u20d7 d e f , \u20d7 d l f , \u20d7 d k f and \u20d7 d d f shown in Figure 1 ), four subnets (each containing a linear layer, a batch normalization layer and a softmax layer) are employed in G. The exact transformations are given by the formulas below:", "cite_spans": [], "ref_spans": [ { "start": 376, "end": 384, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u20d7 h e w = W e w \u20d7 o h + \u20d7 b e w , \u20d7 d e f = SM(BN( \u20d7 h e w )) (4) \u20d7 h l w = W l w \u20d7 o h + \u20d7 b l w , \u20d7 d l f = SM(BN( \u20d7 h l w )) (5) \u20d7 h k w = W k w \u20d7 o h + \u20d7 b k w , \u20d7 d k f = SM(BN( \u20d7 h k w )) (6) \u20d7 h d w = W d w \u20d7 o h + \u20d7 b d w , \u20d7 d d f = SM(BN( \u20d7 h d w ))", "eq_num": "(7)" } ], "section": "Generator", "sec_num": "3.2.1" }, { "text": "where SM denotes the softmax layer, BN denotes batch normalization, W e w \u2208 R Ve\u00d7H , W l w \u2208 R Vl\u00d7H , W k w \u2208 R Vk\u00d7H and W d w \u2208 R Vd\u00d7H denote the weight matrices of the linear layers in the subnets, \u20d7 b e w , \u20d7 b l w , \u20d7 b k w and \u20d7 b d w represent the corresponding bias terms, and \u20d7 h e w , \u20d7 h l w , \u20d7 h k w and \u20d7 h d w are state vectors. \u20d7 d e f , \u20d7 d l f , \u20d7 d k f and \u20d7 d d f denote the generated entity distribution, location distribution, keyword distribution and date distribution, respectively, that correspond to the given event distribution \u20d7 \u03b8. Each dimension represents the relevance between the corresponding entity/location/keyword/date term and the input event distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "Finally, the four generated distributions are concatenated to represent the generated document \u20d7 d f corresponding to the input \u20d7 \u03b8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generator", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u20d7 d f = [ \u20d7 d e f ; \u20d7 d l f ; \u20d7 d k f ; \u20d7 d d f ]", "eq_num": "(8)" } ], "section": "Generator", "sec_num": "3.2.1" }, { "text": "The discriminator network D is designed as a fully-connected network which contains an input layer, a discriminative feature layer (discriminative features are employed for event visualization) and an output layer. 
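A minimal sketch of this architecture, assuming PyTorch, is shown below; the feature dimension and the activation are hypothetical choices, and the single output unit produces the score discussed next.

```python
import torch.nn as nn

class Discriminator(nn.Module):
    # Input layer -> discriminative feature layer -> output layer.
    # V is the vocabulary size V_e + V_l + V_k + V_d; feat_dim is assumed.
    def __init__(self, V, feat_dim=100):
        super().__init__()
        self.feature_layer = nn.Sequential(nn.Linear(V, feat_dim),
                                           nn.LeakyReLU(0.1))
        self.output_layer = nn.Linear(feat_dim, 1)

    def forward(self, d):
        f = self.feature_layer(d)  # discriminative features, reused later for t-SNE visualization
        return self.output_layer(f)
```
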
In AEM, D takes the fake document \u20d7 d f and the real document \u20d7 d r as input and outputs a signal D out indicating the source of the input data (a lower value denotes that D is more inclined to predict the input as a fake document, and vice versa).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminator", "sec_num": "3.2.2" }, { "text": "As previously discussed in (Gulrajani et al., 2017) , the Lipschitz continuity of the D network is crucial to the training of GAN-based approaches. To ensure the Lipschitz continuity of D, we employ the spectral normalization technique (Miyato et al., 2018) . More concretely, for each linear layer l d ( \u20d7 h) = W \u20d7 h (the bias term is omitted for simplicity) in D, the weight matrix W is normalized by \u03c3(W ). Here, \u03c3(W ) is the spectral norm of the weight matrix W, with the definition below:", "cite_spans": [ { "start": 37, "end": 60, "text": "Gulrajani et al., 2017)", "ref_id": "BIBREF8" }, { "start": 241, "end": 262, "text": "(Miyato et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Discriminator", "sec_num": "3.2.2" }, { "text": "\u03c3(W) := max_{ \u20d7 h : \u20d7 h \u2260 \u20d7 0} \u2225W \u20d7 h\u2225_2 / \u2225 \u20d7 h\u2225_2 = max_{\u2225 \u20d7 h\u2225_2 \u2264 1} \u2225W \u20d7 h\u2225_2 (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminator", "sec_num": "3.2.2" }, { "text": "which is equivalent to the largest singular value of W . The weight matrix W is then normalized using:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminator", "sec_num": "3.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u0174_SN := W/\u03c3(W)", "eq_num": "(10)" } ], "section": "Discriminator", "sec_num": "3.2.2" }, { "text": "Obviously, the normalized weight matrix \u0174 SN satisfies \u03c3(\u0174 SN ) = 1, which further ensures the Lipschitz continuity of the D network (Miyato et al., 2018) . To reduce the high cost of computing the spectral norm \u03c3(W ) using singular value decomposition at each iteration, we follow (Yoshida and Miyato, 2017) and employ the power iteration method to estimate \u03c3(W ) instead. With this substitution, the spectral norm can be estimated with very little additional computational time.", "cite_spans": [ { "start": 134, "end": 155, "text": "(Miyato et al., 2018)", "ref_id": "BIBREF14" }, { "start": 279, "end": 305, "text": "(Yoshida and Miyato, 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Discriminator", "sec_num": "3.2.2" }, { "text": "The real document \u20d7 d r and the fake document \u20d7 d f shown in Figure 1 can be viewed as random samples from two distributions, P r and P g , each of which is a joint distribution constituted by four Dirichlet distributions (corresponding to the entity, location, keyword and date distributions). 
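As an implementation note, the spectral normalization described in Section 3.2.2 is available as a built-in PyTorch utility which runs the power iteration internally; a minimal sketch with hypothetical layer sizes:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# The wrapped layer divides its weight matrix by an estimate of sigma(W);
# by default one power-iteration step is performed per forward pass,
# matching the cheap estimation strategy described above.
layer = spectral_norm(nn.Linear(200, 100), n_power_iterations=1)
```
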
The training objective of AEM is to make the distribution P g (produced by the G network) approximate the real data distribution P r as closely as possible.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 65, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Objective and Training Procedure", "sec_num": "3.3" }, { "text": "Comparing different GAN losses, Kurach et al. (2018) take a sober view of the current state of GANs and suggest that the Jensen-Shannon divergence used in (Goodfellow et al., 2014 ) performs more stably than variant objectives. They also advocate that the gradient penalty (GP) regularization devised in (Gulrajani et al., 2017) further improves the stability of the model. Thus, the objective function of the proposed AEM is defined as:", "cite_spans": [ { "start": 154, "end": 178, "text": "(Goodfellow et al., 2014", "ref_id": "BIBREF7" }, { "start": 315, "end": 339, "text": "(Gulrajani et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Objective and Training Procedure", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_d = \u2212E_{ \u20d7 d_r \u223c P_r}[log(D( \u20d7 d_r))] \u2212 E_{ \u20d7 d_f \u223c P_g}[log(1 \u2212 D( \u20d7 d_f))]", "eq_num": "(11)" } ], "section": "Objective and Training Procedure", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_gp = E_{ \u20d7 d* \u223c P_{d*}}[(\u2225\u2207_{ \u20d7 d*} D( \u20d7 d*)\u2225_2 \u2212 1)^2] (12) L = L_d + \u03bbL_gp", "eq_num": "(13)" } ], "section": "Objective and Training Procedure", "sec_num": "3.3" }, { "text": "where L d denotes the discriminator loss, L gp represents the gradient penalty regularization loss, \u03bb is the gradient penalty coefficient which trades off the two components of the objective, \u20d7 d * is obtained by sampling uniformly along a straight line between \u20d7 d r and \u20d7 d f , and P d * denotes the corresponding distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective and Training Procedure", "sec_num": "3.3" }, { "text": "The training procedure of AEM is presented in Algorithm 1, where E is the event number, n d denotes the number of discriminator iterations per generator iteration, m is the batch size, \u03b1 \u2032 represents the learning rate, \u03b2 1 and \u03b2 2 are hyper-parameters of Adam (Kingma and Ba, 2014), and p a denotes {\u03b1 \u2032 , \u03b2 1 , \u03b2 2 }. In this paper, we set \u03bb = 10, n d = 5 and m = 32. Moreover, \u03b1 \u2032 , \u03b2 1 and \u03b2 2 are set to 0.0002, 0.5 and 0.999, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective and Training Procedure", "sec_num": "3.3" }, { "text": "After model training, the generator G has learned the mapping function between the document-event distribution and the document-level event-related word distributions (entity, location, keyword and date). 
In other words, with an event distribution \u20d7 \u03b8 \u2032 as input, G can generate the corresponding entity distribution, location distribution, keyword distribution and date distribution. Algorithm 1 Training procedure for AEM ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "Input: E, \u03bb, n d , m, \u03b1 \u2032 , \u03b2 1 , \u03b2 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "8: \u20d7 d f \u2190 G( \u20d7 \u03b8) 9: \u20d7 d * \u2190 \u03f5 \u20d7 d r + (1 \u2212 \u03f5) \u20d7 d f", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "L (j) d = \u2212 log[D( \u20d7 d r )] \u2212 log[1 \u2212 D( \u20d7 d f )]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "L (j) gp = (\u2225 \u2207 \u20d7 d * D( \u20d7 d * ) \u2225 2 \u22121) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "L (j) \u2190 L (j) d + \u03bbL (j) gp 13:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "end for 14:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "\u03c9 d \u2190 Adam(\u2207 \u03c9 d (1/m) \u2211_{j=1}^{m} L (j) , \u03c9 d , p a )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Generation", "sec_num": "3.4" }, { "text": "Sample m noise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16:", "sec_num": null }, { "text": "{ \u20d7 \u03b8 (j) \u223c Dir( \u20d7 \u03b8| \u20d7 \u03b1) } 17: \u03c9 g \u2190 Adam(\u2207 \u03c9 g (1/m) \u2211_{j=1}^{m} log[1 \u2212 D(G( \u20d7 \u03b8 (j) ))], \u03c9 g , p a ) 18:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16:", "sec_num": null }, { "text": "end while", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16:", "sec_num": null }, { "text": "In AEM, we employ an event seed \u20d7 s t , t \u2208 {1, . . . , E}, an E-dimensional vector with one-hot encoding, to generate the event-related word distributions. For example, in a ten-event setting, \u20d7 s 1 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] \u22ba represents the event seed of the first event. 
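For reference, the per-sample loss computation in Algorithm 1 (the interpolation in line 9 and the losses in lines 10-12) can be sketched as follows. This is a non-authoritative sketch assuming PyTorch; the logit-based loss is an equivalent, numerically stabler form of Eq. (11).

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, d_r, d_f, lam=10.0):
    # L_d, Eq. (11): push D towards 1 on real and 0 on fake documents
    # (D is assumed to return a logit, so the sigmoid is folded into the loss).
    l_d = F.binary_cross_entropy_with_logits(D(d_r), torch.ones(d_r.size(0), 1)) \
        + F.binary_cross_entropy_with_logits(D(d_f), torch.zeros(d_f.size(0), 1))
    # L_gp, Eq. (12): gradient penalty at points sampled uniformly on the
    # straight lines between real and fake documents.
    eps = torch.rand(d_r.size(0), 1)
    d_star = (eps * d_r + (1 - eps) * d_f).requires_grad_(True)
    grads = torch.autograd.grad(D(d_star).sum(), d_star, create_graph=True)[0]
    l_gp = ((grads.norm(2, dim=1) - 1) ** 2).mean()
    return l_d + lam * l_gp  # Eq. (13)
```
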
With the event seed \u20d7 s 1 as input, the corresponding distributions can be generated by G based on the equation below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[ \u20d7 \u03d5 1 e ; \u20d7 \u03d5 1 l ; \u20d7 \u03d5 1 k ; \u20d7 \u03d5 1 d ] = G( \u20d7 s 1 )", "eq_num": "(14)" } ], "section": "16:", "sec_num": null }, { "text": "where \u20d7 \u03d5 1 e , \u20d7 \u03d5 1 l , \u20d7 \u03d5 1 k and \u20d7 \u03d5 1 d denote the entity distribution, location distribution, keyword distribution and date distribution of the first event, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16:", "sec_num": null }, { "text": "In this section, we first describe the datasets and baseline approaches used in our experiments and then present the experimental results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "To validate the effectiveness of AEM in extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (the FSD (Petrovic et al., 2013) , Twitter, and Google datasets 1 ) are employed. Details are summarized below:", "cite_spans": [ { "start": 153, "end": 176, "text": "(Petrovic et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "\u2022 FSD dataset (social media) is the first story detection dataset containing 2,499 tweets. We filter out events mentioned in fewer than 15 tweets, since events mentioned in very few tweets are less likely to be significant. The final dataset contains 2,453 tweets annotated with 20 events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "\u2022 Twitter dataset (social media) is collected from tweets published in December 2010 using the Twitter streaming API. It contains 1,000 tweets annotated with 20 events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "\u2022 Google dataset (news articles) is a subset of the GDELT Event Database 1 ; documents are retrieved by event-related words. For example, documents which contain 'malaysia', 'airline', 'search' and 'plane' are retrieved for event MH370. Combining the documents related to 30 events, the dataset contains 11,909 news articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We choose the following three models as the baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "\u2022 K-means is a well-known data clustering algorithm. We implement the algorithm using the sklearn 2 toolbox and represent documents using bag-of-words weighted by TF-IDF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "\u2022 LEM (Zhou et al., 2014 ) is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. 
We implement the algorithm with the default configuration.", "cite_spans": [ { "start": 6, "end": 24, "text": "(Zhou et al., 2014", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "\u2022 DPEMM (Zhou et al., 2017 ) is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration.", "cite_spans": [ { "start": 8, "end": 26, "text": "(Zhou et al., 2017", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "For the social media text corpora (FSD and Twitter), a named entity tagger 3 specifically built for Twitter is used to extract named entities, including locations, from tweets. A Twitter Part-of-Speech (POS) tagger (Gimpel et al., 2010) is used for POS tagging, and only words tagged as nouns, verbs and adjectives are retained as keywords. For the Google dataset, we use the Stanford Named Entity Recognizer 4 to identify the named entities (organization, location and person). As the 'date' information is not provided in the Google dataset, we further divide the non-location named entities into two categories ('person' and 'organization') and employ a quadruple \u27e8organization, location, person, keyword\u27e9 to denote an event in news articles. We also remove common stopwords and only keep the recognized named entities and the tokens which are verbs, nouns or adjectives.", "cite_spans": [ { "start": 208, "end": 229, "text": "(Gimpel et al., 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "To evaluate the performance of the proposed approach, we use precision, recall and F-measure as evaluation metrics. Precision is defined as the proportion of correctly identified events out of the model-generated events. Recall is defined as the proportion of correctly identified true events. For calculating the precision of the 4-tuple, we use the following criteria:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "\u2022 (1) Do the entity/organization, location, date/person and keyword that we have extracted refer to the same event?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "Table 1 shows the event extraction results on the three datasets. The statistics are obtained with the default parameter setting: n d is set to 5, the number of hidden units H is set to 200, and G contains three fully-connected layers. The event number E for the three datasets is set to 25, 25 and 35, respectively. Examples of extracted events are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 1", "ref_id": "TABREF4" }, { "start": 364, "end": 370, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outperforms both LEM and DPEMM by 6.5% and 1.7%, respectively, in F-measure on the FSD dataset, and by 4.4% and 3.7% in F-measure on the Twitter dataset. We can also observe that, apart from K-means, all the approaches perform worse on the Twitter dataset compared to FSD, possibly due to the limited size of the Twitter dataset. 
Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM. It improves upon LEM by 15.5% and upon DPEMM by more than 30% in F-measure. This is because: (1) the assumption made by LEM and DPEMM that all words in a document are generated from a single event is not suitable for long text such as news articles;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "(2) DPEMM generates too many irrelevant events, which leads to a very low precision score. Overall, we see the superior performance of AEM across all datasets, with a more significant improvement on the Google dataset (long text).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "We next visualize the detected events based on the discriminative features learned by the trained D network in AEM. The t-SNE (Maaten and Hinton, 2008) visualization results on the datasets are shown in Figure 2 . For clarity, each subplot is plotted on a subset of the dataset containing ten randomly selected events. It can be observed that documents describing the same event have been grouped into the same cluster.", "cite_spans": [ { "start": 126, "end": 151, "text": "(Maaten and Hinton, 2008)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 203, "end": 211, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "To further evaluate whether variations of the parameters n d (the number of discriminator iterations per generator iteration), H (the number of units in the hidden layer) and the structure of the generator G impact the extraction performance, additional experiments have been conducted on the Google dataset, with n d set to 5, 7 and 10, H set to 100, 150 and 200, and three G structures (3, 4 and 5 layers). The comparison results on precision, recall and F-measure are shown in Figure 3 . From the results, it can be observed that AEM with the 5-layer generator performs the best and achieves 96.7% in F-measure, while the worst F-measure obtained by AEM is 85.7%. Overall, AEM outperforms all compared approaches across various parameter settings, showing relatively stable performance.", "cite_spans": [], "ref_spans": [ { "start": 472, "end": 480, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "Finally, we compare in Figure 4 the training time required for each model, excluding the constant time required by each model to load the data. We observe that K-means runs the fastest among all four approaches. Both LEM and DPEMM need to sample the event allocation for each document and update the relevant counts during Gibbs sampling, which is time consuming. AEM only requires a fraction of the training time compared to LEM and DPEMM. Moreover, on a larger dataset such as the Google dataset, AEM appears to be far more efficient compared to LEM and DPEMM. ", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 31, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "In this paper, we have proposed a novel approach based on adversarial training to extract the structured representation of events from online text. 
The experimental comparison with the state-of-the-art methods shows that AEM achieves improved extraction performance, especially on long text corpora, with an improvement of 15% observed in F-measure. AEM only requires a fraction of the training time compared to existing Bayesian graphical modeling approaches. In future work, we will explore incorporating external knowledge (e.g. word relatedness contained in word embeddings) into the learning framework for event extraction. Besides, exploring nonparametric neural event extraction approaches and detecting the evolution of events over time from news articles are other promising future directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "[Table 2: Examples of events extracted by AEM from the Google dataset, e.g. 'Coach Urban Meyer steps down', 'Boxer Floyd Mayweather is arrested', 'Christian violence in Nigeria' and 'Liu Xiaobo awarded the Nobel Prize', each represented by its top entity (e), location (l), keyword (k) and date (d) terms.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "http://data.gdeltproject.org/events/index.html 2 https://scikit-learn.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://github.com/aritter/twitter-nlp 4 https://nlp.stanford.edu/software/CRF-NER.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. This work was funded by the National Key Research and Development Program of China (2017YFB1002801), the National Natural Science Foundation of China (61772132), the Natural Science Foundation of Jiangsu Province of China (BK20161430) and Innovate UK (grant no. 
103652).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Eventweet: Online localized event detection from twitter", "authors": [ { "first": "Hamed", "middle": [], "last": "Abdelhaq", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Sengstock", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gertz", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the VLDB Endowment", "volume": "6", "issue": "", "pages": "1326--1329", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hamed Abdelhaq, Christian Sengstock, and Michael Gertz. 2013. Eventweet: Online localized event detection from twitter. Proceedings of the VLDB Endowment, 6(12):1326-1329.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Extracting city traffic events from social streams", "authors": [ { "first": "Pramod", "middle": [], "last": "Anantharam", "suffix": "" }, { "first": "Payam", "middle": [], "last": "Barnaghi", "suffix": "" }, { "first": "Krishnaprasad", "middle": [], "last": "Thirunarayan", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Sheth", "suffix": "" } ], "year": 2015, "venue": "ACM Transactions on Intelligent Systems and Technology (TIST)", "volume": "6", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pramod Anantharam, Payam Barnaghi, Krishnaprasad Thirunarayan, and Amit Sheth. 2015. Extracting city traffic events from social streams. ACM Transactions on Intelligent Systems and Technology (TIST), 6(4):43.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Event discovery in social media feeds", "authors": [ { "first": "Edward", "middle": [], "last": "Benson", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "389--398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 389-398. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Maskgan: Better text generation via filling in the ______", "authors": [ { "first": "William", "middle": [], "last": "Fedus", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Andrew M", "middle": [], "last": "Dai", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.07736" ] }, "num": null, "urls": [], "raw_text": "William Fedus, Ian Goodfellow, and Andrew M Dai. 2018. Maskgan: Better text generation via filling in the ______. 
arXiv preprint arXiv:1801.07736.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Part-of-speech tagging for twitter: Annotation, features, and experiments", "authors": [ { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Mills", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Flanigan", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. 2010. Part-of-speech tagging for twitter: Annotation, features, and experiments. Technical report, Carnegie-Mellon Univ Pittsburgh Pa School of Computer Science.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Generative adversarial nets", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Pouget-Abadie", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "David", "middle": [], "last": "Warde-Farley", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2672--2680", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improved training of wasserstein gans", "authors": [ { "first": "Ishaan", "middle": [], "last": "Gulrajani", "suffix": "" }, { "first": "Faruk", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Arjovsky", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Dumoulin", "suffix": "" }, { "first": "Aaron C", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5769--5779", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. 2017. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pages 5769-5779.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik P", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The gan landscape: Losses, architectures, regularization, and normalization", "authors": [ { "first": "Karol", "middle": [], "last": "Kurach", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Lucic", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Michalski", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gelly", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1807.04720" ] }, "num": null, "urls": [], "raw_text": "Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. 2018. The gan landscape: Losses, architectures, regularization, and normalization. arXiv preprint arXiv:1807.04720.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning adversarial networks for semi-supervised text classification via policy gradient", "authors": [ { "first": "Yan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jieping", "middle": [], "last": "Ye", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "1715--1723", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Li and Jieping Ye. 2018. Learning adversarial networks for semi-supervised text classification via policy gradient. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1715-1723. ACM.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adversarial ranking for language generation", "authors": [ { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Dianqi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Zhengyou", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ming-Ting", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3155--3165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Advances in Neural Information Processing Systems, pages 3155-3165.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Visualizing data using t-sne", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Journal of machine learning research", "volume": "9", "issue": "", "pages": "2579--2605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. 
Journal of machine learning research, 9(Nov):2579-2605.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Spectral normalization for generative adversarial networks", "authors": [ { "first": "Takeru", "middle": [], "last": "Miyato", "suffix": "" }, { "first": "Toshiki", "middle": [], "last": "Kataoka", "suffix": "" }, { "first": "Masanori", "middle": [], "last": "Koyama", "suffix": "" }, { "first": "Yuichi", "middle": [], "last": "Yoshida", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.05957" ] }, "num": null, "urls": [], "raw_text": "Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 2018. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Structured information extraction from natural disaster events on twitter", "authors": [ { "first": "Sandeep", "middle": [], "last": "Panem", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 5th International Workshop on Web-scale Knowledge Representation Retrieval & Reasoning", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandeep Panem, Manish Gupta, and Vasudeva Varma. 2014. Structured information extraction from natural disaster events on twitter. In Proceedings of the 5th International Workshop on Web-scale Knowledge Representation Retrieval & Reasoning, pages 1-8. ACM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Can twitter replace newswire for breaking news?", "authors": [ { "first": "Sasa", "middle": [], "last": "Petrovic", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Mccreadie", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Macdonald", "suffix": "" }, { "first": "Iadh", "middle": [], "last": "Ounis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Shrimpton", "suffix": "" } ], "year": 2013, "venue": "Seventh international AAAI conference on weblogs and social media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sasa Petrovic, Miles Osborne, Richard McCreadie, Craig Macdonald, Iadh Ounis, and Luke Shrimpton. 2013. Can twitter replace newswire for breaking news? In Seventh international AAAI conference on weblogs and social media.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Extracting events and event descriptions from twitter", "authors": [ { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Deepa", "middle": [], "last": "Paranjpe", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th international conference companion on World wide web", "volume": "", "issue": "", "pages": "105--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ana-Maria Popescu, Marco Pennacchiotti, and Deepa Paranjpe. 2011. Extracting events and event descriptions from twitter. In Proceedings of the 20th international conference companion on World wide web, pages 105-106. 
ACM.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Dsgan: Generative adversarial training for distant supervision relation extraction", "authors": [ { "first": "Pengda", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Weiran", "middle": [], "last": "Xu", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.09929" ] }, "num": null, "urls": [], "raw_text": "Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Dsgan: Generative adversarial training for distant supervision relation extraction. arXiv preprint arXiv:1805.09929.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Open domain event extraction from twitter", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1104--1112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Oren Etzioni, Sam Clark, et al. 2012. Open domain event extraction from twitter. In Proceed- ings of the 18th ACM SIGKDD international con- ference on Knowledge discovery and data mining, pages 1104-1112. ACM.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Sentigan: Generating sentimental texts via mixture adversarial networks", "authors": [ { "first": "Ke", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2018, "venue": "IJCAI", "volume": "", "issue": "", "pages": "4446--4452", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ke Wang and Xiaojun Wan. 2018. Sentigan: Gener- ating sentimental texts via mixture adversarial net- works. In IJCAI, pages 4446-4452.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Seeft: Planned social event discovery and attribute extraction by fusing twitter and web content", "authors": [ { "first": "Yu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "David", "middle": [], "last": "Fink", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" } ], "year": 2015, "venue": "ICWSM", "volume": "", "issue": "", "pages": "483--492", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Wang, David Fink, and Eugene Agichtein. 2015. Seeft: Planned social event discovery and attribute extraction by fusing twitter and web content. In ICWSM, pages 483-492.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "What is new in our city? a framework for event extraction using social media posts", "authors": [ { "first": "Chaolun", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Mor", "middle": [], "last": "Naaman", "suffix": "" } ], "year": 2015, "venue": "Pacific-Asia Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "16--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chaolun Xia, Jun Hu, Yan Zhu, and Mor Naaman. 2015. What is new in our city? a framework for event extraction using social media posts. In Pacific- Asia Conference on Knowledge Discovery and Data Mining, pages 16-32. 
Springer.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Spectral norm regularization for improving the generalizability of deep learning", "authors": [ { "first": "Yuichi", "middle": [], "last": "Yoshida", "suffix": "" }, { "first": "Takeru", "middle": [], "last": "Miyato", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.10941" ] }, "num": null, "urls": [], "raw_text": "Yuichi Yoshida and Takeru Miyato. 2017. Spec- tral norm regularization for improving the gen- eralizability of deep learning. arXiv preprint arXiv:1705.10941.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Seqgan: Sequence generative adversarial nets with policy gradient", "authors": [ { "first": "Lantao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "2852--2858", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI, pages 2852-2858.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Adversarial learning for distant supervised relation extraction. Computers, Materials & Continua", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Sherratt", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "55", "issue": "", "pages": "121--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Yuan Dai, Feng Li, R Simon Sherratt, and Jin Wang. 2018. Adversarial learning for distant supervised relation extraction. Computers, Materi- als & Continua, 55(1):121-136.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Event extraction with generative adversarial imitation learning", "authors": [ { "first": "Tongtao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07881" ] }, "num": null, "urls": [], "raw_text": "Tongtao Zhang and Heng Ji. 2018. Event extraction with generative adversarial imitation learning. arXiv preprint arXiv:1804.07881.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A simple bayesian modelling approach to event extraction from twitter", "authors": [ { "first": "Deyu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Liangyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "700--705", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deyu Zhou, Liangyu Chen, and Yulan He. 2014. A simple bayesian modelling approach to event extrac- tion from twitter. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 700-705.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Event extraction from twitter using non-parametric bayesian mixture model with word embeddings", "authors": [ { "first": "Deyu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "808--817", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deyu Zhou, Xuan Zhang, and Yulan He. 2017. Event extraction from twitter using non-parametric bayesian mixture model with word embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 808-817.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Visualization of ten randomly selected events on each dataset. Each point denotes a document, and different colors denote different events.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Comparison of methods and parameter settings. 'n' and 'h' denote the parameters n_d and H; all other parameters follow the default setting. The vertical axis lists the methods/parameter settings, and the horizontal axis gives the corresponding performance value. All blue bars of different intensities are those obtained by AEM.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Comparison of the training time of the models.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Event examples extracted by AEM, each represented by organization (o:), location (l:), person (p:) and keyword (k:) slots. Partially recovered event (title missing): l: obama princeton ohio columbia harvard; p: mccaskill rose catherine brown duncan; k: sexual assault campus title colleges. Partially recovered event (title missing): o: state department cohen robert; l: lockett oklahoma states texas ohio; p: lockett clayton patton stephanie charles; k: execution death penalty lethal minutes. Apple & Samsung patent jury: o: apple samsung google inc motorola; l: california south santa us calif; p: judge steve dunham schmidt mueller; k: patent jury smartphone verdict trial. MH370: o: airlines air transport boeing najib; l: malaysia australia beijing malacca houston; p: najib hishammuddin hussein clark dolan; k: search plane flight aircraft ocean. Afghanistan landslide: o: afghanistan united taliban kabul un; l: afghanistan badakhshan kabul tajikistan pakistan; p: karzai shah hill mark angela; k: landslide village rescue mud province. South Africa election: o: anc national mandela congress eff; l: zuma africa south africans nkandla; p: zuma jacob president nelson malema; k: election apartheid elections voters economic.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "type_str": "table", "html": null, "num": null, "content": "
tf^{e}_{i,doc} = \frac{n^{e}_{i,doc}}{\sum_{v_e} n^{e}_{v_e,doc}}
idf^{e}_{i} = \log \frac{|C^{e}|}{|C^{e}_{i}|}
tf\text{-}idf^{e}_{i,doc} = tf^{e}_{i,doc} \times idf^{e}_{i}
d^{e}_{r,i} = \frac{tf\text{-}idf^{e}_{i,doc}}{\sum_{v_e} tf\text{-}idf^{e}_{v_e,doc}}
d^{e}_{r,i} denotes the relevance between the i-th entity and document doc. Similarly, the location distribution \vec{d}^{l}_{r}, keyword distribution \vec{d}^{k}_{r} and date distribution \vec{d}^{d}_{r} of doc can be calculated in the same way, and the dimensions of these distributions are denoted as V^{l}, V^{k} and V^{d}, respectively.
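As a concrete illustration of this tf-idf weighting, here is a minimal Python sketch. It is our own illustration rather than code from the paper; the function name and the corpus layout (one list of entity tokens per document, using the notation defined below) are assumptions:

```python
import math
from collections import Counter

def entity_distribution(corpus_entities, doc_entities):
    # corpus_entities: the pseudo corpus C^e, one list of entity tokens per document
    # doc_entities: the entity tokens of the document doc of interest
    n_docs = len(corpus_entities)                     # |C^e|
    df = Counter()                                    # document frequency |C^e_i|
    for doc in corpus_entities:
        df.update(set(doc))

    counts = Counter(doc_entities)                    # n^e_{i,doc}
    total = sum(counts.values())                      # sum over v_e of n^e_{v_e,doc}

    # tf-idf^e_{i,doc} = tf^e_{i,doc} * log(|C^e| / |C^e_i|)
    tfidf = {e: (c / total) * math.log(n_docs / df[e]) for e, c in counts.items()}

    # normalize so that the entity distribution d^e_r sums to one
    z = sum(tfidf.values()) or 1.0
    return {e: v / z for e, v in tfidf.items()}
```

The location, keyword and date distributions would be computed by the same routine applied to the corresponding pseudo corpora.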
", "text": "where C e is the pseudo corpus constructed by removing all non-entity words from C, V e is the total number of distinct entities in a corpus, n e i,doc denotes the number of i-th entity appeared in document doc, |C e | represents the number of documents in the corpus, and |C e i | is the number of documents that contain i-th entity, and the obtained" }, "TABREF2": { "type_str": "table", "html": null, "num": null, "content": "
3:  for t = 1, ..., n_d do
4:    for j = 1, ..., m do
5:      Sample \vec{d}_r \sim P_r
6:      Sample a random \vec{\theta} \sim Dir(\vec{\theta}|\vec{\alpha})
7:      Sample a random number \epsilon \sim U[0, 1]
8:
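The sampled \epsilon \sim U[0, 1] above is characteristic of the WGAN-GP recipe, in which the next step interpolates between real and generated representations to compute a gradient penalty. The following PyTorch sketch of such a training loop is our reading of the algorithm; the interpolation step, the optimizer choice and all hyper-parameter names are assumptions rather than details confirmed by the paper:

```python
import torch
from torch.distributions import Dirichlet

def train_aem(G, D, real_docs, alpha, n_epochs=100, n_d=5, m=32, gp_weight=10.0):
    # real_docs: (N, V) tensor of tf-idf document representations (P_r)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)  # line 1: initialize omega_d
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)  # line 1: initialize omega_g
    prior = Dirichlet(alpha)                           # Dirichlet event prior

    for _ in range(n_epochs):                          # line 2: until convergence
        for _ in range(n_d):                           # line 3: n_d critic steps
            idx = torch.randint(len(real_docs), (m,))  # line 4: minibatch of size m
            d_r = real_docs[idx]                       # line 5: d_r ~ P_r
            theta = prior.sample((m,))                 # line 6: theta ~ Dir(alpha)
            d_f = G(theta).detach()                    # generated representation
            eps = torch.rand(m, 1)                     # line 7: eps ~ U[0, 1]
            d_hat = eps * d_r + (1 - eps) * d_f        # line 8 (assumed): interpolate
            d_hat.requires_grad_(True)
            grad = torch.autograd.grad(D(d_hat).sum(), d_hat, create_graph=True)[0]
            gp = gp_weight * ((grad.norm(2, dim=1) - 1) ** 2).mean()
            loss_d = D(d_f).mean() - D(d_r).mean() + gp
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        loss_g = -D(G(prior.sample((m,)))).mean()      # generator update
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Vectorizing the inner loop over j into a single minibatch of size m, as done here, is equivalent in effect to sampling the m examples one at a time.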
", "text": "Output: the trained G and D. 1: Initial D parameters \u03c9 d and G parameter \u03c9 g 2: while \u03c9 g has not converged do" }, "TABREF4": { "type_str": "table", "html": null, "num": null, "content": "", "text": "Comparison of the performance of event extraction on the three datasets." }, "TABREF6": { "type_str": "table", "html": null, "num": null, "content": "
", "text": "The event examples extracted by AEM." } } } }