{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:06:04.186459Z" }, "title": "Automatic extraction of personal events from dialogue", "authors": [ { "first": "Joshua", "middle": [ "D" ], "last": "Eisenberg", "suffix": "", "affiliation": {}, "email": "joshua.eisenberg@artie.com" }, { "first": "Michael", "middle": [], "last": "Sheriff", "suffix": "", "affiliation": { "laboratory": "", "institution": "Florida International University", "location": { "addrLine": "11101 S.W. 13 ST", "postCode": "33199", "settlement": "Miami", "region": "FL" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we introduce the problem of extracting events from dialogue. Previous work on event extraction focused on newswire, however we are interested in extracting events from spoken dialogue. To ground this study, we annotated dialogue transcripts from fourteen episodes of the podcast This American Life. This corpus contains 1,038 utterances, made up of 16,962 tokens, of which 3,664 represent events. The agreement for this corpus has a Cohen's \u03ba of 0.83. We have open sourced this corpus for the NLP community. With this corpus in hand, we trained support vector machines (SVM) to correctly classify these phenomena with 0.68 F1, when using episodefold cross-validation. This is nearly 100% higher F1 than the baseline classifier. The SVM models achieved performance of over 0.75 F1 on some testing folds. We report the results for SVM classifiers trained with four different types of features (verb classes, part of speech tags, named entities, and semantic role labels), and different machine learning protocols (under-sampling and trigram context). This work is grounded in narratology and computational models of narrative. It is useful for extracting events, plot, and story content from spoken dialogue.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this paper we introduce the problem of extracting events from dialogue. Previous work on event extraction focused on newswire, however we are interested in extracting events from spoken dialogue. To ground this study, we annotated dialogue transcripts from fourteen episodes of the podcast This American Life. This corpus contains 1,038 utterances, made up of 16,962 tokens, of which 3,664 represent events. The agreement for this corpus has a Cohen's \u03ba of 0.83. We have open sourced this corpus for the NLP community. With this corpus in hand, we trained support vector machines (SVM) to correctly classify these phenomena with 0.68 F1, when using episodefold cross-validation. This is nearly 100% higher F1 than the baseline classifier. The SVM models achieved performance of over 0.75 F1 on some testing folds. We report the results for SVM classifiers trained with four different types of features (verb classes, part of speech tags, named entities, and semantic role labels), and different machine learning protocols (under-sampling and trigram context). This work is grounded in narratology and computational models of narrative. It is useful for extracting events, plot, and story content from spoken dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "People communicate using stories. A simple definition of story is a series of events arranged over time. A typical story has at least one plot and at least one character. 
When people speak to one another, we tell stories and reference events using unique discourse. The purpose of this research is to gain better understanding of the events people reference when they speak, effectively enabling further knowledge of how people tell stories and communicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "There has been no work, to our knowledge, about event extraction from transcripts of spoken language. The most popular corpora annotated for events all come from the domain of newswire (Pustejovsky et al., 2003b; Minard et al., 2016) . Our work begins to fill that gap. We have open sourced the gold-standard annotated corpus of events from dialogue. 1 For brevity, we will hearby refer to this corpus as the Personal Events in Dialogue Corpus (PEDC). We detailed the feature extraction pipelines, and the support vector machine (SVM) learning protocols for the automatic extraction of events from dialogue. Using this information, as well as the corpus we have released, anyone interested in extracting events from dialogue can proceed where we have left off.", "cite_spans": [ { "start": 185, "end": 212, "text": "(Pustejovsky et al., 2003b;", "ref_id": "BIBREF13" }, { "start": 213, "end": 233, "text": "Minard et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "One may ask: why is it important to annotate a corpus of dialogue for events? It is necessary because dialogue is distinct from other types of discourse. We claim that spoken dialogue, as a type of discourse, is especially different than newswire. We justify this claim by evaluating the distribution of narrative point of view (POV) and diegesis in the PEDC and a common newswire corpus. POV distinguishes whether a narrator tells a story in a personal or impersonal manner, and diegesis is whether the narrator is involved in the events of the story they tell. We use POV and diegesis to make our comparisions because they give information about the narrator, and their relationship to the story they tell.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "We back our claim (that dialogue is different than newswire) by comparing the distributions of narrative point of view (POV) and diegesis of the narrators in PEDC with the Reuters-21578 newswire corpus. 2 Eisenberg and Finlayson (2016) found that narrators in newswire texts from the Reuters-21,578 corpus use the first-person POV less than 1% of the time, and are homodiegetic less than 1% of the time. However, in the 14 episodes (1,028 utterances) of This American Life, we found that 56% narrators were first-person, and 32% narrators were homodiegetic.", "cite_spans": [ { "start": 203, "end": 204, "text": "2", "ref_id": null }, { "start": 205, "end": 235, "text": "Eisenberg and Finlayson (2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "We found these distributions in PEDC by using the automatic POV and diegesis extractors from Eisenberg and Finlayson (2016) , which were open sourced. 3 Comparing the distributions of POV and diegesis for the PEDC to that of newswire demonstrates how different spoken dialogue is. 
This shows why building an annotated corpus specifically for event extraction of dialogue was necessary.", "cite_spans": [ { "start": 93, "end": 123, "text": "Eisenberg and Finlayson (2016)", "ref_id": "BIBREF2" }, { "start": 151, "end": 152, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "It is substantial that so many of the utterances in the PEDC are first-person narrators and homodiegetic. This means that people are speaking about their lives. They are retelling stories. They are speaking in more personal ways than narrators do in newswire. This is where the Personal in the Personal Events in Dialogue Corpus comes from. Additionally, using the modifier personal aligns this work with Gordon and Swanson (2009) who extracted personal stories from blog posts. We want our work to help researchers studying computation models of narrative.", "cite_spans": [ { "start": 405, "end": 430, "text": "Gordon and Swanson (2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "We define event as: an action or state of being depicted in text span. Actions are things that happen, most typically processes that can be observed visually. A state of being portrays the details of a situation, like the emotional and physical states of a character. For our work, we are only concerned with the state of being for animate objects. We use the concept of animacy from Jahan et al. (2018) , which is defined as:", "cite_spans": [ { "start": 384, "end": 403, "text": "Jahan et al. (2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "What are personal events?", "sec_num": "1.1" }, { "text": "Animacy is the characteristic of being able to independently carry out actions (e.g., movement, communication, etc.). For example, a person or a bird is animate because they move or communicate under their own power. On the other hand, a chair or a book is inanimate because they do not perform any kind of independent action. We only annotated states of being for animate objects (i.e. beings) because we are interested in extracting the information most closely coupled with people or characters. We were less concerned In the prior section we showed the PEDC contains a significant amount of personal events by running the POV and diegesis extractors from Eisenberg and Finlayson (2016) . We found that the PEDC contains 56% first-person narrators, and 32% homodiegetic narrators. Our corpus has a significant amount of narrators telling personal stories.", "cite_spans": [ { "start": 659, "end": 689, "text": "Eisenberg and Finlayson (2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "What are personal events?", "sec_num": "1.1" }, { "text": "First, in \u00a72 we discuss the annotation study we conducted on fourteen episodes of This American Life. Next, in \u00a73 we discuss the design of the event extractor. In \u00a73.2 we discuss the different sets of features extracted from utterances. In \u00a73.2 we talk about the protocols followed for training of support vector machine (SVM) models to extract events from utterances. In \u00a74 we discuss the types of experiments we ran, and present a table containing the results of 57 experiments. The goal of these experiments is to determine the best set of features and learning protocols for training a SVM to extract events from dialogue. In \u00a75 we discuss the results. 
In \u00a76 we summarize our contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline", "sec_num": "1.2" }, { "text": "When beginning to think about extracting events from dialogue, we realized there is no corpus of transcribed dialogue annotated for events. There are many corpora of other text types with event annotations. TimeBank contains newswire text (Pustejovsky et al., 2003b) . MEANTIME is made up of Wikinews articles (Minard et al., 2016) . Additionally there are many event annotation schema; one of the more prominent ones is TimeML (Pustejovsky et al., 2003a) . We decided to develop our own annotation scheme due to the complexity of TimeML; it's an extremely fine-grained annotation scheme, with specific tags for different types of events, temporal expressions and links. We decided it would be too difficult to use TimeML while maintaining high inter-annotator agreement and finishing the annotation study in a short amount of time (three months), and within a modest budget. Given that our goal was to understand spoken conversational dialogue, we decided to create a corpus from transcribed audio. This matches the nature of the data we intend to use for our event extractor: audio recordings of dialogue that have been transcribed as a text file.", "cite_spans": [ { "start": 239, "end": 266, "text": "(Pustejovsky et al., 2003b)", "ref_id": "BIBREF13" }, { "start": 310, "end": 331, "text": "(Minard et al., 2016)", "ref_id": "BIBREF11" }, { "start": 428, "end": 455, "text": "(Pustejovsky et al., 2003a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Personal events in dialogue annotation study", "sec_num": "2" }, { "text": "We weighed a number of different sources for the text transcripts, but we ultimately decided to use transcripts from the podcast This American Life 4 . We chose this podcast because: 1) The transcripts are freely available online. 2) A significant portion of these podcasts are made up of conversations, as opposed to narration. Additionally, This American Life formats their transcripts so that the conversations are indented as block quotations. This made it easy to separate conversations from typical podcast narration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personal events in dialogue annotation study", "sec_num": "2" }, { "text": "3) The subject matter of This American Life is typically stories from people's lives. We wanted our corpus to be made up of unscripted, contemporary, everyday conversations, so that the extractors we train from this data are better suited to understanding people talking about their lives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personal events in dialogue annotation study", "sec_num": "2" }, { "text": "The two authors of this paper were the annotators for this study. The first author wrote the annotation guide 5 . We trained by reading the first version of the guide, discussing shortcomings, and then compiling a new version of the guide. Next, we both annotated episode 685 6 . Since we were training, we were allowed to discuss questions regarding annotation decisions. 
After we both finished, we ran the annotations through a program that found all the utterances with disagreements, and we discussed the mistakes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation study procedures", "sec_num": "2.1" }, { "text": "After adjudicating the training episode, the first author updated the annotation guide to address inconistencies we found during adjudication. Next, we began the actual annotation study. While annotating each episode, we could not discuss specifics about the utterances. We independently annotated each episode.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation study procedures", "sec_num": "2.1" }, { "text": "Once both annotators finished their annotations for an episode, we used a program we made that compared the annotations for each utterance. If there was any disagreement between the two annotators, both sets of markings from the annotators were added to an adjudication list. Then, we went through each utterance with disagreements, and discussed how the markings should be corrected for the gold-standard. We adjudicated each episode before annotating the next so that we, as annotators, could learn from each other's mistakes. Once the correction lists were created, they were used along with the original markings to create the goldstandard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation study procedures", "sec_num": "2.1" }, { "text": "Before we discuss the annotation syntax, please take a look at an annotated utterance from episode 650 7 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation syntax", "sec_num": "2.2" }, { "text": "Alan: Due to safety {concerns}, safety {purposes}. But I mean, I can {type out} a little bit of, like, whatever you {want} to {tell} them, {tell} the shelter, and I can {make sure} they {get} the {message} if that'll {work} for you.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation syntax", "sec_num": "2.2" }, { "text": "The annotations were marked in text files. Each text file contains an episode transcript formated so that each utterance was on its own line. The spans of text that an annotator considered events were surrounded with brackets. Usually events were single words, but occasiaonally events were multiword expressions, like the phrase type out above. For more information about what we considered an event, and which state-of-beings were considered events, please refer to our annotation guide 8 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation syntax", "sec_num": "2.2" }, { "text": "We used the Cohen's Kappa metric (\u03ba) to compute the inter-annotator agreement Landis and Koch (1977) . According to Viera et al. (2005) \u03ba values above 0.81 are considered almost perfect agreement. The average \u03ba for our annotations is 0.83 so our inter-annotator agreement is almost perfect. This average \u03ba is a weighted average, where the \u03ba for each episode is multiplied by the number of utterances in the episode. Once the sum of weighted averages is obtained, we divide by the total number of utterances in the corpus.", "cite_spans": [ { "start": 78, "end": 100, "text": "Landis and Koch (1977)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "2.3" }, { "text": "The \u03ba for event extraction measures interannotator agreement for a binary classification task for each token across each utterance. 
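Concretely, the corpus-level agreement can be sketched as follows. The Java snippet below computes Cohen's kappa for one episode from the four token-level agreement counts (defined immediately below) and then forms the utterance-weighted average; the class and method names are illustrative and are not the comparison code we actually used.

    /** Illustrative sketch of the agreement computation; not the released comparison code. */
    public class AgreementSketch {

        /**
         * Cohen's kappa for one episode on the binary event/non-event token task.
         * bothEvent    = tokens both annotators marked as events
         * bothNonEvent = tokens both annotators marked as non-events
         * onlyFirst / onlySecond = the two kinds of disagreement
         */
        static double cohensKappa(long bothEvent, long onlyFirst, long onlySecond, long bothNonEvent) {
            double n = bothEvent + onlyFirst + onlySecond + bothNonEvent;
            double observed = (bothEvent + bothNonEvent) / n;
            double expected = ((bothEvent + onlyFirst) * (bothEvent + onlySecond)
                             + (bothNonEvent + onlySecond) * (bothNonEvent + onlyFirst)) / (n * n);
            return (observed - expected) / (1.0 - expected);
        }

        /** Corpus kappa: each episode's kappa weighted by its number of utterances. */
        static double weightedAverageKappa(double[] episodeKappa, int[] utterancesPerEpisode) {
            double weightedSum = 0.0;
            double totalUtterances = 0.0;
            for (int i = 0; i < episodeKappa.length; i++) {
                weightedSum += episodeKappa[i] * utterancesPerEpisode[i];
                totalUtterances += utterancesPerEpisode[i];
            }
            return weightedSum / totalUtterances;
        }
    }
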
If both annotators marked a token as an event, this counted as a true positive, and if both annotators marked a token as a non-event, this is counted as a true negative. All other cases are disagreements; these 3 Developing the extractor Our extractor was implemented in Java. This is due to the availability of high-quality open-sourced NLP libraries. There are two aspects of the extractor's design that we will cover: 1) feature engineering and 2) protocols for training SVM models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "2.3" }, { "text": "First, we will discuss the different types of features that we extracted from each utterance in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature engineering", "sec_num": "3.1" }, { "text": "We used the part of speech (POS) tagger (Toutanova and Manning, 2000; Toutanova et al., 2003) from Stanford CoreNLP (Manning et al., 2014) to extract part of speech tags for each word in each utterance of our corpus. We used the english-bidirectional-distsim model. This model was chosen since it has the most accurate performance, even though it has a slower run-time. For the purpose of these experiments run-time wasn't a limiting factor.", "cite_spans": [ { "start": 40, "end": 69, "text": "(Toutanova and Manning, 2000;", "ref_id": "BIBREF17" }, { "start": 70, "end": 93, "text": "Toutanova et al., 2003)", "ref_id": "BIBREF16" }, { "start": 116, "end": 138, "text": "(Manning et al., 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Part of speech tags", "sec_num": "3.1.1" }, { "text": "Each POS tag was assigned a unqiue integer value between 1 and 36. If a token has no POS tag, then it is assigned the value of -1. The following is the procedure for mapping POS tags into feature vectors: First, use Stanford CoreNLP to find the POS tags for each token in an utterance. Second, produce a vector of length 37 for each token, and fill each element with a -1. Third, for the vector representing each token, change the value of the element with the index cooresponding to the particular POS of the current token to 1. If the token has no POS tag, then the vector is unchanged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part of speech tags", "sec_num": "3.1.1" }, { "text": "We used the Named Enitity Recognizer (NER) (Finkel et al., 2005) from Stanford CoreNLP (Manning et al., 2014) to extract named entity types from utterances. We included named entity tag as a feature type for event extraction because we hypothesized that some named entity types should never be considered as events, like PERSON, ORGANIZATION, and MONEY. However, the DATE and DURATION classes were often classified as events.", "cite_spans": [ { "start": 43, "end": 64, "text": "(Finkel et al., 2005)", "ref_id": "BIBREF5" }, { "start": 87, "end": 109, "text": "(Manning et al., 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Named entity tags", "sec_num": "3.1.2" }, { "text": "The NER tag feature was encoded into a vector of length nine. The first eight elements of this vector each represents one of the eight NER classes in Stanford's NER. The vector's final element represents whether the current token is not a named entity. This is the procedure for extracting NER tag features from an utterance: First use Stanford CoreNLP to find the NER tags for each token in an utterance. 
Second, produce a vector of length nine for each token, and fill each element with a -1. Third, for the vector representing each token in an utterance, change the value of the element with the index corresponding to a particular NER tag of the current token to 1. If there is no NER tag for a given token, then set the final element of the vector to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named entity tags", "sec_num": "3.1.2" }, { "text": "We use a similar pipeline as Eisenberg and Finlayson (2017) for verb class extraction. This pipeline determines which VerbNet (Schuler, 2005) verb classes each token in an utterance is represented by. A verb class is a set of verbs with the same semantics. For example, the verbs slurp, chomp, and crunch all belong to the verb class chew. We hypothesize that knowledge of what verb classes are instantiated by specific words is essential to extracting events from dialogue.", "cite_spans": [ { "start": 126, "end": 141, "text": "(Schuler, 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Verb classes", "sec_num": "3.1.3" }, { "text": "The features for verb classes are encoded into a vector of length 279. The first 278 elements represent which of the 278 verb classes is invoked by the current token. The final element represents if no verb classes are instantiated by the token. For the first 278 elements we use the following bipolar encoding: 1 if the verb class is instantiated in the token, or -1 if not. Note that any token can instantiate more than one verb class. The final element in the vector is assigned a 1 if no verb classes are represented by the current token, or -1 if verb classes are used (which means at least one of the first 278 elements has the value of 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb classes", "sec_num": "3.1.3" }, { "text": "Here is a quick overview of the pipeline for verb class extraction: first, we use It Makes Sense to perform word sense disambiguation on an utterance (Zhong and Ng, 2010) . This produces WordNet sense keys (Fellbaum, 1998) for each token in an utterance. Next we use JVerbnet 9 to map WordNet sense keys to VerbNet classes. This produces a list of VerbNet classes for each token. Finally, each list is mapped to a bipolar feature vector of length 279, as explained in the paragraph above.", "cite_spans": [ { "start": 150, "end": 170, "text": "(Zhong and Ng, 2010)", "ref_id": "BIBREF19" }, { "start": 206, "end": 222, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Verb classes", "sec_num": "3.1.3" }, { "text": "We use the Path LSTM Semantic Role Labeler (SRL) to extract a set of features from utterances (Roth and Lapata, 2016) . We extract two features for each token in an utterance: 1) is the token a predicate? and 2) is the token an argument of a predicate? These features fill a vector of length two, and once again we use bipolar encoding as all the previous features discussed in this section.", "cite_spans": [ { "start": 94, "end": 117, "text": "(Roth and Lapata, 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic role labels", "sec_num": "3.1.4" }, { "text": "There are many more features in Path LSTM, however we didn't have the time to find an intelligent way to use them. One of those features is the ability to parse into semantic frames from FrameNet (Baker and Sato, 2003) . Path LSTM can parse into the over 1,200 semantic frames in FrameNet. 
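Before moving on, the bipolar encodings used for the tag-based features above (\u00a73.1.1-\u00a73.1.3) can be summarized with a short Java sketch. The tag-to-index mappings and names here are illustrative assumptions; only the -1/1 encoding scheme follows our description.

    /** Illustrative sketch of the bipolar feature encodings in Sections 3.1.1-3.1.3. */
    public class BipolarEncodingSketch {

        /**
         * Single-tag encoding used for POS (length 37) and NER (length 9) features:
         * every element starts at -1 and the element for the token's tag is set to 1.
         * For NER, noTagIndex is the final element ("not a named entity"); for POS,
         * pass a negative noTagIndex so an untagged token keeps the all -1 vector.
         */
        static double[] encodeTag(int tagIndex, int vectorLength, int noTagIndex) {
            double[] vector = new double[vectorLength];
            java.util.Arrays.fill(vector, -1.0);
            if (tagIndex >= 0) {
                vector[tagIndex] = 1.0;
            } else if (noTagIndex >= 0) {
                vector[noTagIndex] = 1.0;
            }
            return vector;
        }

        /**
         * Verb-class encoding (length 279): a token may instantiate several classes,
         * and the final element is 1 only when no class is instantiated.
         */
        static double[] encodeVerbClasses(java.util.List<Integer> classIndices) {
            double[] vector = new double[279];
            java.util.Arrays.fill(vector, -1.0);
            if (classIndices.isEmpty()) {
                vector[278] = 1.0;
            } else {
                for (int index : classIndices) {
                    vector[index] = 1.0;
                }
            }
            return vector;
        }
    }
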
We hypothesize that knowing which tokens represent different frame elements for each frame would be a useful feature extracting events from dialogue. This feature would provide even more fine-grained information than the verb class features.", "cite_spans": [ { "start": 196, "end": 218, "text": "(Baker and Sato, 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic role labels", "sec_num": "3.1.4" }, { "text": "Here is the way we extracted SRL features from utterances: first, for each utterance use Path LSTM to extract an SRL parse. Second, produce a feature vector of length two for each token in the utterance, and initialize both elements to -1. Third, get the list of predicates from the SRL parse. For each token, if it is a predicate, set the first element of the feature vector to 1. Otherwise, do nothing. Fourth, for each predicate, get the argument map. For each token, if it is a member of any argument map, set the second element of the feature vector to 1. Otherwise, do nothing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic role labels", "sec_num": "3.1.4" }, { "text": "Second, we discuss the details about how the SVM models were trained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning protocols", "sec_num": "3.2" }, { "text": "We used 14-fold cross-validation, or colloquially speaking episode-fold cross-validation. There are 14 episodes in our corpus. For each fold of crossvalidation, one episode is reserved for testing, and the remaining 13 folds are used for training. This procedure is performed 14 times, so that each of the 14 episodes has the chance to be used as testing data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-validation", "sec_num": "3.2.1" }, { "text": "We incorporated under-sampling into our SVM based experiments. Undersampling is a technique for boosting performance of models when training on unbalanced datasets (Japkowicz et al., 2000) . Our event corpus has about four nonevents for every one event. To mitigate this, during training a SVM model on an episode we add the feature vectors for every event to the training set. Next, we count the number of feature vectors for events in the training set. Then, we randomly select nonevent feature vectors, and add the same number of vectors to the training set as there are event vectors. Hence, for every event feature vector in the training set, there is only one nonevent feature vector. In our experiments ( \u00a74) we saw that undersampling raised the F1 for most feature sets. Our implementation of under-sampling allows us to toggle it on and off for different experiments. Hence, undersampling could be parameterized, along with the types of features used, and other variations on the SVM learning discussed below.", "cite_spans": [ { "start": 164, "end": 188, "text": "(Japkowicz et al., 2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Under-sampling", "sec_num": "3.2.2" }, { "text": "Since there is an element of randomness in our implementation of undersampling, we ran each undersampling experiment 100 times. We report the result for the experiment that had the highest F1 relative to the event class. This is a somewhat crude approach. 
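A minimal Java sketch of this under-sampling step follows, under the assumption that each training token has already been turned into a labeled feature vector; the Example type and method names are ours.

    /** Illustrative sketch of the under-sampling protocol in Section 3.2.2. */
    public class UnderSamplingSketch {

        /** Placeholder for a labeled training instance. */
        static class Example {
            double[] features;
            boolean isEvent;
        }

        /** Keep every event vector and draw an equal number of non-event vectors at random. */
        static java.util.List<Example> underSample(java.util.List<Example> events,
                                                   java.util.List<Example> nonEvents,
                                                   java.util.Random rng) {
            java.util.List<Example> training = new java.util.ArrayList<>(events);
            java.util.List<Example> pool = new java.util.ArrayList<>(nonEvents);
            java.util.Collections.shuffle(pool, rng);
            training.addAll(pool.subList(0, Math.min(events.size(), pool.size())));
            java.util.Collections.shuffle(training, rng);
            return training;
        }
    }

Because the non-events are drawn at random, repeated runs produce different training sets, which is why we repeat each under-sampling experiment 100 times as described above.
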
In the future, we would like to employ an entropy based approach, where we select which majority class feature vectors to use based on the entropy of the set of vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Under-sampling", "sec_num": "3.2.2" }, { "text": "We simulate context by appending feature vectors for neighboring words to the current word's feature vector. Specificially, for each token, get the feature vector for the preceding token and the feature vector for the proceeding token, and append these two vectors to the original vector. If there is no preceding token make a feature vector where each element is -1. The length of this negative vector is that of the original feature vector. Similarly, follow the same procedure if there is no proceeding token. Using trigram context vectors slightly raised the F1 for many SVM models, but it did not have a significant effect. This leads us to hypothesize that there is probably a better way to encode context for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simulating context through trigrams", "sec_num": "3.2.3" }, { "text": "Our implementation of trigram context is modular, along with the other learning protocols: context can be toggled for any experiment. Furthermore, experiments that make use of trigram context, can also take advantage of under-sampling. Each set of features can have four seperate experiments: 1) training with no augmentations, 2) training with under-sampling, 3) training with trigram context vectors, and 4) training with both under-sampling and trigram context vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simulating context through trigrams", "sec_num": "3.2.3" }, { "text": "All our SVM models used a linear kernel. We chose a linear kernel because of bipolar encoding of the feature values, and it produced the best F1 during early experiments. The hyperparameters for all the SVMs were as follows: \u03b3 = 0.5, \u03bd = 0.5, C = 20, and = 0.01.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM hyperparameters", "sec_num": "3.2.4" }, { "text": "We report our results in Table 2 . The table is organized in four vertical columns, from left to right: 1) Features: this section contains the features used for an experiment. The possible types of features are POS, NER, Verb Classes, and SRL. The combination of features used for an experiment are indicated by X's in the column of the corresponding features. There are four possible experiments (for each of the four possible machine learning protocols chosen) run for a given feature type. In rare cases (like for experiments with only NER features) only the basic experiment results are reported because the SVM classifier could not adequately learn and classify everything as a nonevent.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "2) ML Protocols: this section contains the ma-chine learning (ML) protocols used for an experiment. The possible protocols are: undersampling and trigrams. The combination of ML protocols used for an experiment are indicated by X's in the column of the coresponding protocols. For each combination of features, four experiments are run. 
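For completeness, the hyperparameters in \u00a73.2.4 can be written down as a concrete configuration. The sketch below expresses an equivalent setup with the LIBSVM Java bindings; the choice of LIBSVM is illustrative rather than a statement about the exact library behind these experiments, and the reported 0.01 is read here as the optimizer's stopping tolerance.

    import libsvm.*;

    /** Illustrative training setup assuming a LIBSVM-style interface. */
    public class SvmTrainingSketch {

        static svm_model train(double[][] featureVectors, double[] labels) {
            svm_parameter param = new svm_parameter();
            param.svm_type = svm_parameter.C_SVC;      // binary event / non-event task
            param.kernel_type = svm_parameter.LINEAR;  // linear kernel, as in Section 3.2.4
            param.gamma = 0.5;
            param.nu = 0.5;                            // only consulted by the nu-SVM variants
            param.C = 20;
            param.eps = 0.01;                          // the reported 0.01, read as the stopping tolerance
            param.cache_size = 100;                    // assumption: not reported in the paper

            svm_problem problem = new svm_problem();
            problem.l = featureVectors.length;
            problem.y = labels;                        // e.g., +1 = event, -1 = non-event
            problem.x = new svm_node[problem.l][];
            for (int i = 0; i < problem.l; i++) {
                problem.x[i] = new svm_node[featureVectors[i].length];
                for (int j = 0; j < featureVectors[i].length; j++) {
                    svm_node node = new svm_node();
                    node.index = j + 1;                // LIBSVM expects 1-based feature indices
                    node.value = featureVectors[i][j];
                    problem.x[i][j] = node;
                }
            }
            return svm.svm_train(problem, param);
        }
    }
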
Each of the four experiments, for a feature set, represents a unique combination of the two ML protocols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "3) Events: in this section we report the results (F1, precision, and recall) for all tokens that were marked as events in the gold-standard. 4) Nonevents: similarly, in this section we report the results for all tokens that were marked as nonevents in the gold-standard. Table 2 contains all combinations of features and ML protocols. We report all the results to show the fluctuations of performance for different combinations of features and protocols.", "cite_spans": [], "ref_spans": [ { "start": 271, "end": 278, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "We will compare the results in Table 2 to a minority class baseline. For our experiments, the minority class is the event class. We are interested in maximizing the F1 of the event class as opposed to the nonevent class, because we want to accurately extract events. Events are rarer than nonevents, hence this is the phenomenon we are exploring. Our baseline, relative to the event class, is: F1 = 0.3553, precision = 0.2160, and recall = 1.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 38, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Our best performing event extractor uses POS and verb class features, and the ML protocols used were undersampling and trigrams; however, the performance is not significantly better than the extractors that only use either one of the two protocols. Our best performing event extractor with no extra ML protocols was the extractor with POS, NER, and verb class features. The extractor with all four feature types had the same performance as the former, so we can say that the addition of SRL features adds no extra information to the classification process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "It is interesting to see the effect of undersampling on performance. It boosted the event F1 for most feature sets. Not only did it boost the F1, but it flipped the values of precision and recall with respect to the original experiment. Without undersampling, the precision is always higher than the recall. Once undersampling is toggled, the recall becomes larger than the precision. Also, the undersampled recall is typically higher than the non-undersampled precision. This flip is important to note for situations where the event extractor is used in real-world systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "If the situation requires a minimal number of false positives, then precision should be maximized, therefore no undersampling should be used when training the model. However, if minimizing false negatives is a bigger priority, then recall should be maximized, hence undersampling should be used in training. Whether or not undersampling is used depends on the context in which the event extractor is deployed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "In general, undersampling helped boost performance of event classification in most experiments. Trigrams gave an even smaller boost to event classification in most experiments. 
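As a reminder of what the trigram protocol actually adds to the feature space (\u00a73.2.3), a minimal Java sketch of the context construction follows; names are illustrative.

    /** Illustrative sketch of the trigram context protocol in Section 3.2.3. */
    public class TrigramContextSketch {

        /**
         * For every token in an utterance, append the preceding and following tokens'
         * feature vectors to the token's own vector; a missing neighbor is replaced
         * by an all -1 vector of the same length.
         */
        static double[][] addTrigramContext(double[][] tokenVectors) {
            int n = tokenVectors.length;
            int width = tokenVectors[0].length;
            double[] padding = new double[width];
            java.util.Arrays.fill(padding, -1.0);

            double[][] withContext = new double[n][3 * width];
            for (int i = 0; i < n; i++) {
                double[] previous = (i > 0) ? tokenVectors[i - 1] : padding;
                double[] next = (i < n - 1) ? tokenVectors[i + 1] : padding;
                System.arraycopy(tokenVectors[i], 0, withContext[i], 0, width);  // the original vector
                System.arraycopy(previous, 0, withContext[i], width, width);     // preceding token
                System.arraycopy(next, 0, withContext[i], 2 * width, width);     // following token
            }
            return withContext;
        }
    }
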
Experiments that had both undersampling and trigrams had the largest boost when compared to the experiment with no extra ML protocols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "There were two feature sets that trigram context had a significant affect, both POS + SRL and POS + VERB + SRL. These are the only experiments where the trigram context protocol led to the greatest performance for the feature set, and by a significant margin. Overall, trigrams had a much smaller affect on overall performance. We hypothesize that there are better ways to implement this form of context. Either a classifier that's better suited for sequential data should be used, or a different form of encoding the context feature should be explored. Another note about a negative result: the impact of the SRL features was much less influential than we hypothesized. Going forward, we think that the actual semantic frames instantiated should be used as features, as well as different frame elements, and not just occurence of predicates and arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "In this paper we presented two sets of contributions: First, we have open sourced the first corpus of dialogue annotated for events. 10 This corpus can be used by researchers interested in the automatic understanding of dialogue, specifically dialogue that is rich with the personal stories of people. Second, we share the design and evaluate the performance of 57 unique event classifiers for dialogue. These results can be used by researchers to decide which features and machine learning protocols should be implemented for their own event extractors. Our best performing extractor has a 0.68 F1, which is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": "6" }, { "text": "http://www.artie.com/data/personaleventsindialogue/ 2 http://archive.ics.uci.edu/ml/datasets/Reuters-21578+Text+Categorization+Collection", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://dspace.mit.edu/handle/1721.1/105279with extracting details about inanimate objects, like the states of being in this example,\"The mountain was covered with trees,\" and more concerned with extracting states of being describing people, like in this example, \"I was so excited when the dough rose,\" where excited is a state of being describing the speaker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.thisamericanlife.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.artie.com/data/personaleventsindialogue/ 6 https://www.thisamericanlife.org/685/transcript", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.thisamericanlife.org/650/transcript 8 http://www.artie.com/data/personaleventsindialogue/ were adjudicated by both authors. A token can be annotated as an event, or a non-event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://projects.csail.mit.edu/jverbnet/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.artie.com/data/personaleventsindialogue/ over 100% higher than baseline. 
We hope that this work can be used by the community to better understand how people reference events from stories in dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "A huge thanks to This American Life for giving us permission to distribute their transcripts for the open sourced PEDC. Thanks to Frances Swanson for coordinating the permissions. Thanks to Ryan Horrigan and Armando Kirwin, from Artie, Inc., for giving me the time and resources to pursue this research. Thanks to Aimee Rubensteen for being an amazing editor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The FrameNet data and software", "authors": [ { "first": "Collin", "middle": [ "F" ], "last": "Baker", "suffix": "" }, { "first": "Hiroaki", "middle": [], "last": "Sato", "suffix": "" } ], "year": 2003, "venue": "The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "161--164", "other_ids": { "DOI": [ "10.3115/1075178.1075206" ] }, "num": null, "urls": [], "raw_text": "Collin F. Baker and Hiroaki Sato. 2003. The FrameNet data and software. In The Companion Volume to the Proceedings of 41st Annual Meeting of the Associa- tion for Computational Linguistics, pages 161-164, Sapporo, Japan. Association for Computational Lin- guistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dense event ordering with a multi-pass architecture", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Cassidy", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "273--284", "other_ids": { "DOI": [ "10.1162/tacl_a_00182" ] }, "num": null, "urls": [], "raw_text": "Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics, 2:273- 284.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic identification of narrative diegesis and point of view", "authors": [ { "first": "Joshua", "middle": [], "last": "Eisenberg", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Finlayson", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016)", "volume": "", "issue": "", "pages": "36--46", "other_ids": { "DOI": [ "10.18653/v1/W16-5705" ] }, "num": null, "urls": [], "raw_text": "Joshua Eisenberg and Mark Finlayson. 2016. Auto- matic identification of narrative diegesis and point of view. In Proceedings of the 2nd Workshop on Com- puting News Storylines (CNS 2016), pages 36-46, Austin, Texas. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A simpler and more generalizable story detector using verb and character features", "authors": [ { "first": "Joshua", "middle": [], "last": "Eisenberg", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Finlayson", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2708--2715", "other_ids": { "DOI": [ "10.18653/v1/D17-1287" ] }, "num": null, "urls": [], "raw_text": "Joshua Eisenberg and Mark Finlayson. 2017. A sim- pler and more generalizable story detector using verb and character features. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 2708-2715, Copen- hagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "WordNet: An electronic lexical database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An electronic lexical database. MIT Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Incorporating non-local information into information extraction systems by gibbs sampling", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05", "volume": "", "issue": "", "pages": "363--370", "other_ids": { "DOI": [ "10.3115/1219840.1219885" ] }, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meet- ing on Association for Computational Linguistics, ACL '05, page 363-370, USA. Association for Com- putational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Identifying personal stories in millions of weblog entries", "authors": [ { "first": "Andrew", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "Reid", "middle": [], "last": "Swanson", "suffix": "" } ], "year": 2009, "venue": "Third International Conference on Weblogs and Social Media, Data Challenge Workshop", "volume": "46", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Gordon and Reid Swanson. 2009. Identify- ing personal stories in millions of weblog entries. 
In Third International Conference on Weblogs and Social Media, Data Challenge Workshop, San Jose, CA, volume 46, pages 16-23, San Jose, CA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A new approach to Animacy detection", "authors": [ { "first": "Labiba", "middle": [], "last": "Jahan", "suffix": "" }, { "first": "Geeticka", "middle": [], "last": "Chauhan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Finlayson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Labiba Jahan, Geeticka Chauhan, and Mark Finlayson. 2018. A new approach to Animacy detection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1-12, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning from imbalanced data sets: a comparison of various strategies", "authors": [ { "first": "Nathalie", "middle": [], "last": "Japkowicz", "suffix": "" } ], "year": 2000, "venue": "AAAI workshop on learning from imbalanced data sets", "volume": "68", "issue": "", "pages": "10--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathalie Japkowicz et al. 2000. Learning from im- balanced data sets: a comparison of various strate- gies. In AAAI workshop on learning from imbal- anced data sets, volume 68, pages 10-15. Menlo Park, CA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The measurement of observer agreement for categorical data", "authors": [ { "first": "Richard", "middle": [], "last": "Landis", "suffix": "" }, { "first": "Gary G", "middle": [], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "biometrics", "volume": "", "issue": "", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Richard Landis and Gary G Koch. 1977. The mea- surement of observer agreement for categorical data. biometrics, pages 159-174.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": { "DOI": [ "10.3115/v1/P14-5010" ] }, "num": null, "urls": [], "raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60, Bal- timore, Maryland. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "MEANTIME, the NewsReader multilingual event and time corpus", "authors": [ { "first": "Anne-Lyse", "middle": [], "last": "Minard", "suffix": "" }, { "first": "Manuela", "middle": [], "last": "Speranza", "suffix": "" }, { "first": "Ruben", "middle": [], "last": "Urizar", "suffix": "" }, { "first": "Bego\u00f1a", "middle": [], "last": "Altuna", "suffix": "" }, { "first": "Anneleen", "middle": [], "last": "Marieke Van Erp", "suffix": "" }, { "first": "Chantal", "middle": [], "last": "Schoen", "suffix": "" }, { "first": "", "middle": [], "last": "Van Son", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4417--4422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne-Lyse Minard, Manuela Speranza, Ruben Urizar, Bego\u00f1a Altuna, Marieke van Erp, Anneleen Schoen, and Chantal van Son. 2016. MEANTIME, the NewsReader multilingual event and time corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4417-4422, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Timeml: Robust specification of event and temporal expressions in text", "authors": [ { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "M", "middle": [], "last": "Jos\u00e9", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Castano", "suffix": "" }, { "first": "Roser", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "", "middle": [], "last": "Sauri", "suffix": "" }, { "first": "J", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Setzer", "suffix": "" }, { "first": "", "middle": [], "last": "Katz", "suffix": "" }, { "first": "", "middle": [], "last": "Dragomir R Radev", "suffix": "" } ], "year": 2003, "venue": "New directions in question answering", "volume": "3", "issue": "", "pages": "28--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Pustejovsky, Jos\u00e9 M Castano, Robert Ingria, Roser Sauri, Robert J Gaizauskas, Andrea Set- zer, Graham Katz, and Dragomir R Radev. 2003a. Timeml: Robust specification of event and temporal expressions in text. 
New directions in question an- swering, 3:28-34.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The timebank corpus", "authors": [ { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "Roser", "middle": [], "last": "Sauri", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "See", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Setzer", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Beth", "middle": [], "last": "Sundheim", "suffix": "" }, { "first": "David", "middle": [], "last": "Day", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Ferro", "suffix": "" } ], "year": 2003, "venue": "Corpus linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Pustejovsky, Patrick Hanks, Roser Sauri, An- drew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003b. The timebank corpus. In Corpus linguistics, volume 2003, page 40. Lancaster, UK.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural semantic role labeling with dependency path embeddings", "authors": [ { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1192--1202", "other_ids": { "DOI": [ "10.18653/v1/P16-1113" ] }, "num": null, "urls": [], "raw_text": "Michael Roth and Mirella Lapata. 2016. Neural seman- tic role labeling with dependency path embeddings. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1192-1202, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "VerbNet: A broadcoverage, comprehensive verb lexicon", "authors": [ { "first": "Karin Kipper", "middle": [], "last": "Schuler", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karin Kipper Schuler. 2005. VerbNet: A broad- coverage, comprehensive verb lexicon. PhD disser- tation, University of Pennsylvania.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Feature-rich part-ofspeech tagging with a cyclic dependency network", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "252--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. 
In Proceedings of the 2003 Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics, pages 252-259.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Enriching the knowledge sources used in a maximum entropy part-of-speech tagger", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora: Held in Conjunction with the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "13", "issue": "", "pages": "63--70", "other_ids": { "DOI": [ "10.3115/1117794.1117802" ] }, "num": null, "urls": [], "raw_text": "Kristina Toutanova and Christopher D. Manning. 2000. Enriching the knowledge sources used in a maxi- mum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT Conference on Empiri- cal Methods in Natural Language Processing and Very Large Corpora: Held in Conjunction with the 38th Annual Meeting of the Association for Compu- tational Linguistics -Volume 13, EMNLP '00, page 63-70, USA. Association for Computational Lin- guistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Understanding interobserver agreement: the kappa statistic", "authors": [ { "first": "J", "middle": [], "last": "Anthony", "suffix": "" }, { "first": "Joanne", "middle": [ "M" ], "last": "Viera", "suffix": "" }, { "first": "", "middle": [], "last": "Garrett", "suffix": "" } ], "year": 2005, "venue": "Fam med", "volume": "37", "issue": "5", "pages": "360--363", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony J Viera, Joanne M Garrett, et al. 2005. Under- standing interobserver agreement: the kappa statis- tic. Fam med, 37(5):360-363.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "It makes sense: A wide-coverage word sense disambiguation system for free text", "authors": [ { "first": "Zhi", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 System Demonstrations", "volume": "", "issue": "", "pages": "78--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 Sys- tem Demonstrations, pages 78-83, Uppsala, Swe- den. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Statistics for event annotations in dialogue corpus", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF3": { "text": "Classification results across different feature sets and machine learning protocols", "type_str": "table", "html": null, "num": null, "content": "
" } } } }