{
"paper_id": "I11-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:31:30.824082Z"
},
"title": "A Unified Event Coreference Resolution by Integrating Multiple Resolvers",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sinno",
"middle": [
"Jialin"
],
"last": "Pan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chew",
"middle": [],
"last": "Lim Tan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Event coreference is an important and complicated task in cascaded event template extraction and other natural language processing tasks. Despite its importance, it was merely discussed in previous studies. In this paper, we present a globally optimized coreference resolution system dedicated to various sophisticated event coreference phenomena. Seven resolvers for both event and object coreference cases are utilized, which include three new resolvers for event coreference resolution. Three enhancements are further proposed at both mention pair detection and chain formation levels. First, the object coreference resolvers are used to effectively reduce the false positive cases for event coreference. Second, A revised instance selection scheme is proposed to improve link level mention-pair model performances. Last but not least, an efficient and globally optimized graph partitioning model is employed for coreference chain formation using spectral partitioning which allows the incorporation of pronoun coreference information. The three techniques contribute to a significant improvement of 8.54% in B 3 F-score for event coreference resolution on OntoNotes 2.0 corpus.",
"pdf_parse": {
"paper_id": "I11-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "Event coreference is an important and complicated task in cascaded event template extraction and other natural language processing tasks. Despite its importance, it was merely discussed in previous studies. In this paper, we present a globally optimized coreference resolution system dedicated to various sophisticated event coreference phenomena. Seven resolvers for both event and object coreference cases are utilized, which include three new resolvers for event coreference resolution. Three enhancements are further proposed at both mention pair detection and chain formation levels. First, the object coreference resolvers are used to effectively reduce the false positive cases for event coreference. Second, A revised instance selection scheme is proposed to improve link level mention-pair model performances. Last but not least, an efficient and globally optimized graph partitioning model is employed for coreference chain formation using spectral partitioning which allows the incorporation of pronoun coreference information. The three techniques contribute to a significant improvement of 8.54% in B 3 F-score for event coreference resolution on OntoNotes 2.0 corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution, the task of resolving and linking different mentions of the same object/event in a text, is important for an intelligent text processing system. The resolved coreferent mentions form a coreference chain representing a particular object/event. Following the natural order in the texts, any two consecutive mentions in a coreference chain form an anaphoric pair with the latter mention referring back to the prior one. The latter mention is called the anaphor while the prior one is named as the antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of previous works on coreference resolution such as (Soon et al, 2001; Yang et al, 2006) , aimed at object coreference which both the anaphor and its antecedent are mentions of the same real world object such as person, location and organization. In contrast, an event coreference as defined in (Asher, 1993) is an anaphoric reference to an event, fact, and proposition which is representative of eventuality and abstract entities. In the following example:",
"cite_spans": [
{
"start": 57,
"end": 75,
"text": "(Soon et al, 2001;",
"ref_id": "BIBREF13"
},
{
"start": 76,
"end": 93,
"text": "Yang et al, 2006)",
"ref_id": "BIBREF16"
},
{
"start": 300,
"end": 313,
"text": "(Asher, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\" Authority) . The pronouns noun phrases and action verbs are taken as the representation of events which is also in line with OntoNotes 2.0 practices. Event coreference resolution is an important task in natural language processing (NLP) research. According to our corpus study, 68.05% of articles in OntoNotes 2.0 corpus contain at least one event chain while 15.52% of all coreference chains are event chains. In addition to the significant proportion, event coreference resolution allows event extraction system to acquire necessary details. Considering the previous example, resolving the event chain [fired]-[it]-[fired]-[the attack] will provide us all necessary details about the \"air strike\" event mentioned in different sentences. Such details includes \"Israel/Israel helicopter gunships\" as the actuator, \"offices of Palestinian Authority\" as the target, \"7 deaths and many injuries\" as the consequence, \"Gaza Strip\" as the location and \"more than two hours\" as the duration. Without a successful event coreference resolution such separated pieces of information cannot be assembled properly.",
"cite_spans": [
{
"start": 2,
"end": 12,
"text": "Authority)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, event coreference resolution incurs more difficulties comparing to the traditional object coreference from two aspects. In a semantic view, an object (such as a person, location and etc.) is uniquely defined by its name (e.g. Barrack Obama) while an event requires its role 1 information to distinguish it from other events. For example, \"the crash yesterday\" -\"crash in 1968\" shares the same event head phrase \"crash\", but they are distinguished by the time arguments. In a syntactic view, object coreferences only involve mentions from noun category while event coreference involves mentions from different categories. The syntactic differences will cause the tradition coreference features crippled or malfunctioned as reported by (Chen et al, 2010a; for Verb-Pronoun/Verb-NP resolution. In addition to their findings, we further find that even the event NP-Pronoun/NP-NP resolution requires more sophisticated feature engineering than the traditional ones. For example, previous semantic compatibility features only focus on measuring the compatibility between object such as \"person\", \"location\" and etc. Event cases are generally falls in the \"other\" category which provides us no useful information in distinguishing different events. These extra syntactic and semantic difficulties make event coreference resolution a more complicated task comparing to object coreferences.",
"cite_spans": [
{
"start": 753,
"end": 772,
"text": "(Chen et al, 2010a;",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we address the various different event coreference phenomena with seven distinct mention-pair resolvers designed with sophisticated features. We then propose three enhancements to boost up performance at both mention pair detection and chain formation level. First, for the mention-pair resolvers, we have proposed the technique to utilize competitive classifiers' results to further boost mention-pair resolvers' performances. Second, a revised instance selection strategy is proposed to avoid mention-pair resolvers from being misguided by locally preferred instances used previously. Last, on top of coreferent pairs identified by the mention-pair resolvers, we have incorporated the spectral partitioning approach to form the coreference chains in a globally optimized way. Especially, we proposed a technique to enhance the chain level performance by incorporating the pronoun information which the previous attempts did not utilized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper will be organized in the following way. The next section (section 2) will introduce related works. A review on coreference resolution framework and its weaknesses is presented in section 3. After that we will move on to our proposed model to overcome the weaknesses in section 4. Section 5 will present the experiment results with discussions. Last section will wrap up with a conclusion and future research directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although event coreference resolution is an important task, it has not attracted much attention. There is only a limited number of previous works related to this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "In (Asher, 1993) chapter 6, a method to resolve references to abstract entities using discourse representation theory is discussed. However, no computational system was proposed.",
"cite_spans": [
{
"start": 3,
"end": 16,
"text": "(Asher, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Besides linguistic studies, there are only a few previous works attempting to tackle subproblems of the event coreference resolution. (Byron, 2002; M\u00fcller, 2007; Chen et al, 2010a) attempted event pronoun resolution. (Chen et al, 2010b) attempted resolving noun phrases to verb mentions. All these works only focused on identifying pairs of coreferent event mentions in their targeted phenomena. The ultimate goal, which is extracting event chain, is lack of attention. (Pradhan, et al, 2007 ) applied a conventional co-reference resolution system to OntoNotes1.0 corpus using the same set of features for object coreference resolution. However, there is no specific performance reported on event coreference. As (Chen et al, 2010b) pointed out, the conventional features do not function properly on event coreference problem. Thus, a thorough investigation on event coreference phenomena is required for a better understanding of the problem.",
"cite_spans": [
{
"start": 134,
"end": 147,
"text": "(Byron, 2002;",
"ref_id": "BIBREF2"
},
{
"start": 148,
"end": 161,
"text": "M\u00fcller, 2007;",
"ref_id": null
},
{
"start": 162,
"end": 180,
"text": "Chen et al, 2010a)",
"ref_id": "BIBREF3"
},
{
"start": 470,
"end": 491,
"text": "(Pradhan, et al, 2007",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Before we introduce our proposed system to event coreference, we would like to revisit the two-step resolution framework to understand some of its weaknesses. Most of previous coreference resolution system employs a two-steps approach as in (Soon et al, 2001; Nicolae & Nicolae, 2006 ) and many others. The first step identifies all the pairs of coreferent mentions. The second step forms coreference chains using the coreferent pairs identified from the first step.",
"cite_spans": [
{
"start": 241,
"end": 259,
"text": "(Soon et al, 2001;",
"ref_id": "BIBREF13"
},
{
"start": 260,
"end": 283,
"text": "Nicolae & Nicolae, 2006",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Framework",
"sec_num": "3"
},
{
"text": "Although a handful of single-step frameworks were proposed recently such as (Cai & Strube, 2010) , two-step framework is still widely in use because it has been well-studied. Conceptually, the two-step framework adopts a divide-andconquer strategy which in turn, allows us to focus on different sub-problems at different stages. The mention-pair detection step allows us to employ many features associated with strong linguistic intuitions which have been proven useful in the previous linguistic study. The chain formation step allows us to leverage on efficient and robust graph partitioning algorithms such spectral partitioning used in this paper. Practically, the two-step framework is also more mature for practical uses and has been implemented as a number of standard coreference resolution toolkits widely available such as RECONCILE in (Stoyanov et al, 2010) and BART in (Versley et al, 2008) . Performance-wise, two-step approaches also show comparable performance to single step approaches on some benchmark datasets 2 .",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "(Cai & Strube, 2010)",
"ref_id": "BIBREF3"
},
{
"start": 846,
"end": 868,
"text": "(Stoyanov et al, 2010)",
"ref_id": "BIBREF14"
},
{
"start": 881,
"end": 902,
"text": "(Versley et al, 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Framework",
"sec_num": "3"
},
{
"text": "In this paper, we are exploiting a brand new type of coreference phenomenon with merely previous attempts. Therefore, we employed the much matured two-step framework with innovative extensions to accommodate complicated event coreference phenomena. Such a divideand-conquer strategy will provide us more insight for further advancements as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Framework",
"sec_num": "3"
},
{
"text": "Most of mention-pair models adopt the wellknown machine learning framework for object coreference as proposed in (Soon et al, 2001 ).",
"cite_spans": [
{
"start": 113,
"end": 130,
"text": "(Soon et al, 2001",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention-Pair Resolution Models",
"sec_num": "3.1"
},
{
"text": "In this learning framework, a training/testing instance has the form of fv(cand i , ana), where ana is the anaphor and cand i is the i th candidate of the given anaphor. During training, we employed the widely used instance selection strategy described in (Ng & Cardie, 2002) . In brief, only the closest antecedent of a given anaphor is used as positive instance while only candidates in between the anaphor and its closest antecedent are used as negative instances. During testing, an instance is generated in a similar manner with an additional constraint that the candidate must be within n sentences from the anaphor.",
"cite_spans": [
{
"start": 256,
"end": 275,
"text": "(Ng & Cardie, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Instances Generation",
"sec_num": null
},
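To make the selection scheme above concrete, here is a minimal illustrative sketch (not the authors' code) of the traditional strategy. It assumes each mention carries a linear position and a gold chain id; the Mention dataclass and its field names are hypothetical.

```python
# Sketch of the traditional (Ng & Cardie, 2002) style instance selection, under the
# assumption that mentions are listed in document order and idx equals the list index.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Mention:
    idx: int                  # linear position in the document
    chain_id: Optional[int]   # gold coreference chain id (None if singleton)

def closest_antecedent(anaphor: Mention, mentions: List[Mention]) -> Optional[Mention]:
    """Closest preceding mention in the same gold chain, if any."""
    for cand in reversed(mentions[:anaphor.idx]):
        if cand.chain_id is not None and cand.chain_id == anaphor.chain_id:
            return cand
    return None

def traditional_instances(mentions: List[Mention]) -> List[Tuple[Mention, Mention, int]]:
    """(candidate, anaphor, label): closest antecedent -> positive,
    candidates strictly between it and the anaphor -> negative."""
    instances = []
    for ana in mentions:
        ante = closest_antecedent(ana, mentions)
        if ante is None:
            continue
        instances.append((ante, ana, 1))
        for cand in mentions[ante.idx + 1:ana.idx]:
            instances.append((cand, ana, 0))
    return instances
```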
{
"text": "An obvious weakness of such an instance selection strategy is the representation power of the selected instances. Ideally, the selected instances should represent the coreferent status between any two mentions. However this strategy turns the selected set into a local preference representation. The positive instance is the closest preferred mention while the negatives are local nonpreferable ones. Such an instance set may help in locally choosing a preferable candidate. But it may be harmful if we want to use the classifier's results in a global approach such as graph partitioning. In the section 4, we will propose a revised instance selection strategy to overcome such a weakness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instances Generation",
"sec_num": null
},
{
"text": "In such a learning framework, many well-known learning models can be applied to the coreference resolution task. In this paper, support vector machine (SVM) is employed for its robust performance in high dimensional space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SVM with Tree-Kernel",
"sec_num": null
},
{
"text": "In addition to the traditional SVM, we incorporate the syntactic structures through a convolution tree kernel. Tree kernel is used to capture the implicitly structural knowledge embedded in the syntax tree. Effectiveness of various structures was investigated in (Yang et al, 2006; Chen et al, 2010a; . Based on their findings, we choose minimum-expansion for this paper. In brief, it contains only the path in the parse tree connecting an anaphor and its antecedent. The convolution tree kernel and traditional flat kernel are combined to form a composite kernel.",
"cite_spans": [
{
"start": 263,
"end": 281,
"text": "(Yang et al, 2006;",
"ref_id": "BIBREF16"
},
{
"start": 282,
"end": 300,
"text": "Chen et al, 2010a;",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SVM with Tree-Kernel",
"sec_num": null
},
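The composite-kernel idea can be sketched roughly as follows. This is only an illustration under simplifying assumptions: the convolution tree kernel over minimum-expansion trees is replaced by a much cruder production-rule-overlap similarity, flat features are plain numeric vectors, and the alpha weight and all function names are hypothetical, not the paper's implementation.

```python
# Sketch of a composite kernel K = alpha*K_flat + (1-alpha)*K_tree, used with a
# precomputed-kernel SVM. The "tree kernel" here is a toy production-overlap count.
import numpy as np
from sklearn.svm import SVC

def tree_sim(prods_a, prods_b):
    """Toy structural similarity: count shared production rules (sets of strings)."""
    return float(len(prods_a & prods_b))

def composite_gram(flat_X, trees, alpha=0.5):
    """Gram matrix combining a linear flat kernel with the toy tree kernel, both normalized."""
    K_flat = flat_X @ flat_X.T
    n = len(trees)
    K_tree = np.array([[tree_sim(trees[i], trees[j]) for j in range(n)] for i in range(n)])
    def normalize(K):
        d = np.sqrt(np.clip(np.diag(K), 1e-12, None))
        return K / np.outer(d, d)
    return alpha * normalize(K_flat) + (1 - alpha) * normalize(K_tree)

# toy usage with four candidate-anaphor pairs
flat_X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
trees = [{"S->NP VP", "VP->V NP"}, {"S->NP VP"}, {"NP->DT NN"}, {"NP->DT NN", "VP->V"}]
y = [1, 1, 0, 0]
K = composite_gram(flat_X, trees)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K))
```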
{
"text": "After the coreferent mention pairs are identified, coreference chains are formed based on those coreferent pairs. There are two major ways to form coreference chains in the literature, bestlink heuristic and graph partitioning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Chain Formation",
"sec_num": "3.2"
},
{
"text": "The best-link heuristic selects the candidate with highest confidence for each anaphor and forms a \"best-link\" between them. After that, it simply joins all the mentions connected by \"best-links\" into the same coreference chain. The best-link heuristic approach is widely used as in (Soon et al, 2001; Yang et al, 2006) because of its simplicity and reasonably good performance.",
"cite_spans": [
{
"start": 283,
"end": 301,
"text": "(Soon et al, 2001;",
"ref_id": "BIBREF13"
},
{
"start": 302,
"end": 319,
"text": "Yang et al, 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Best-Link Heuristics Approach",
"sec_num": null
},
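A minimal sketch of the best-link heuristic, assuming the mention-pair resolver outputs (candidate, anaphor, confidence) triples; the function names and the confidence threshold are illustrative choices, not the authors' implementation.

```python
# Sketch: pick the highest-confidence candidate per anaphor, then union-find the
# selected links into coreference chains.
from collections import defaultdict

def best_link_chains(n_mentions, scored_pairs, threshold=0.0):
    """scored_pairs: iterable of (candidate_idx, anaphor_idx, confidence)."""
    # 1. keep the best-scoring candidate for each anaphor (if above threshold)
    best = {}
    for cand, ana, score in scored_pairs:
        if score >= threshold and (ana not in best or score > best[ana][1]):
            best[ana] = (cand, score)

    # 2. union-find over the selected best links
    parent = list(range(n_mentions))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for ana, (cand, _) in best.items():
        parent[find(ana)] = find(cand)

    chains = defaultdict(list)
    for m in range(n_mentions):
        chains[find(m)].append(m)
    return [c for c in chains.values() if len(c) > 1]

# toy usage: mentions 0..4 with pairwise confidences from a mention-pair resolver
print(best_link_chains(5, [(0, 2, 0.9), (1, 2, 0.4), (2, 4, 0.7), (3, 4, 0.2)]))
```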
{
"text": "The major critics of best-link heuristic fall on its lack of global consideration when forming the coreference chains. The mentions are only joined through locally selected \"best-links\". Thus the chain consistency is not enforced. Remedies to such a critic are proposed such as best-cut in the next subsection and our proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best-Link Heuristics Approach",
"sec_num": null
},
{
"text": "Graph partitioning approaches are proposed by various researchers to form coreference chains with global consideration. Here we take Best-Cut proposed in (Nicolae & Nicolae, 2006) as a representative of graph partitioning approaches. Best-Cut is a variant from the well-known minimum-cut algorithm. A graph is formed using all the mentions as vertices. An edge is added between two mentions if a positive output from the mention-pair model. Then the set of edges are iteratively cut to form the coreference chains.",
"cite_spans": [
{
"start": 154,
"end": 179,
"text": "(Nicolae & Nicolae, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Partitioning Approach",
"sec_num": null
},
{
"text": "According to (Nicolae & Nicolae, 2006) , bestcut does not utilize coreferent pairs involving pronouns. However, event coreference chains contain a significant proportion of pronouns (18.8% of event coreference mentions in the On-toNotes2.0 corpus). Leaving them untouched is obviously not a preferable choice. In the next section, we will propose an alternative chain formation method to incorporate coreferent pronouns into the graph partitioning to accommodate its intensive occurrences in event chains.",
"cite_spans": [
{
"start": 13,
"end": 38,
"text": "(Nicolae & Nicolae, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Partitioning Approach",
"sec_num": null
},
{
"text": "Our proposed resolution framework follows a similar system flow as the two-step framework which is illustrated in figure 1 for an overview of our resolution system. A brief discussion on various types of event coreference is given in the first subsection 4.1. Each type corresponds to a distinct mention-pair resolver. New features are proposed to capture 3 newly encountered phenomena. After that, we proposed two techniques to improve the mention-pair performance, namely a revised instance selection strategy and utilizing competing classifiers' results. At chain formation step, we also proposed the alternative method, spectral graph partitioning to utilizing pronoun coreferent information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Proposed Model",
"sec_num": "4"
},
{
"text": "As we mentioned, one major difficulty of event coreference lies in the gap between different syntactic types of mentions (e.g. nouns, verbs and pronouns). As discussed in (Chen et al, 2010a;b), different syntactic types of coreferent mentions behave differently which requires different features to resolve them. Following this insight, we have built five distinct resolution models for event coreferences involving noun phrases (NP), pronouns and verbs. They are Verb-Pronoun, Verb-NP, Verb-Verb, NP-NP and NP-Pronoun resolver. Conventionally, pronouns can only appear as anaphor but not antecedent. Therefore we do not train Pronoun-Pronoun resolvers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seven Distinct Mention-Pair Models",
"sec_num": "4.1"
},
{
"text": "In addition to the syntactic difference, we find event NPs have different behaviors from the object NPs. Event NPs require the event roles to distinguish it from other events while the object NPs are quite self-explaining. The conventional features such as string-matching and headmatching will not work properly when handling cases like \"confliction in Mid-East\" vs. \"confliction in Afghanistan\". In our approach, a sophisticated argument matching feature is proposed to capture such information. The arguments information is extracted automatically from the premodifiers and propositional phrase attachments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seven Distinct Mention-Pair Models",
"sec_num": "4.1"
},
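The following sketch illustrates one plausible form such an argument-matching feature could take. The paper does not spell the feature out at this level of detail, so the mention representation (head, NP tokens, PP attachments) and the returned feature triple are assumptions for illustration only.

```python
# Sketch: approximate an event NP's arguments as its pre-modifier tokens plus the heads
# of attached prepositional phrases, then compare two mentions' arguments.
def argument_match_feature(mention_a, mention_b):
    """Each mention is (head, np_tokens, pp_attachments).
    Returns (same_head, n_shared_args, conflicting_pp)."""
    head_a, toks_a, pps_a = mention_a
    head_b, toks_b, pps_b = mention_b
    pre_a = {t.lower() for t in toks_a if t.lower() != head_a.lower()}
    pre_b = {t.lower() for t in toks_b if t.lower() != head_b.lower()}
    pp_a = {(p.lower(), o.lower()) for p, o in pps_a}
    pp_b = {(p.lower(), o.lower()) for p, o in pps_b}
    same_head = head_a.lower() == head_b.lower()
    shared = len(pre_a & pre_b) + len(pp_a & pp_b)
    # same preposition with a different object suggests two distinct events
    conflict = any(pa == pb and oa != ob for pa, oa in pp_a for pb, ob in pp_b)
    return same_head, shared, conflict

a = ("confliction", ["confliction"], [("in", "Mid-East")])
b = ("confliction", ["confliction"], [("in", "Afghanistan")])
print(argument_match_feature(a, b))  # (True, 0, True): same head, conflicting PP argument
```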
{
"text": "Similarly, conventional features try to match mentions into semantic categories like person, location and etc. Then it evaluates the semanticmatching features to pair-up mentions from the same semantic type. However, event NPs exhibit a very different hierarchy in WordNet from the object NPs. A dedicated event hierarchy matching feature is proposed to match event of the same type. With respect to the differences between object NPs and Event NPs, we train two distinct models to handle object NP-NP and event NP-NP resolution separately with distinct features. Similarly, we train separate resolvers with distinct features for event/object NP-Pronoun. In total we have seven distinct mention-pair resolvers for different syntactic and semantic types of mentions. Five of them focus on event coreference while the other two aim at object coreference. Object coreference results are used to enhance event coreference performance by rule out in appropriate anaphors. All the features we incorporated are tabulated below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seven Distinct Mention-Pair Models",
"sec_num": "4.1"
},
{
"text": "Besides the new features we proposed above (e.g. Event-Semantic and Argument-Matching), the other features we used in the seven mention pair resolvers are employed from a number of previous works such as (Soon et al, 2001; Yang et al, 2008) for object coreference feature, (Chen et al, 2010a;b) for features involving verbs.",
"cite_spans": [
{
"start": 204,
"end": 222,
"text": "(Soon et al, 2001;",
"ref_id": "BIBREF13"
},
{
"start": 223,
"end": 240,
"text": "Yang et al, 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Seven Distinct Mention-Pair Models",
"sec_num": "4.1"
},
{
"text": "For the same mention, different mention-pair resolvers will resolve it to different antecedents. Some of these resolution results contradict each other. In the following example: [it] , event NP-Pronoun resolver may pick [the attack] as antecedent while object NP-Pronoun resolver may pick {some evidence} as antecedent. Instead of choosing one as the final resolution result from these contradicting outputs, we feed the object resolver results into the event resolvers as a feature and re-train the event resolvers. The idea behind is to provide the learning models with a confidence on how likely the anaphor refers to an object.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 183,
"text": "[it]",
"ref_id": null
}
],
"eq_spans": [],
"section": "Utilizing Competing Classifiers' Results",
"sec_num": null
},
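A minimal sketch of this feature-stacking idea, assuming scikit-learn SVMs and toy feature matrices. The helper name augment_with_object_confidence and the use of the SVM decision value as the confidence feature are illustrative choices, not necessarily the authors' exact setup.

```python
# Sketch: append the competing object resolver's confidence to the event resolver's
# feature vectors, then re-train the event resolver on the augmented features.
import numpy as np
from sklearn.svm import SVC

def augment_with_object_confidence(event_X, object_clf, object_view_X):
    """event_X: event-resolver features for each pair; object_view_X: the same pairs
    represented with the object resolver's features. Appends the object resolver's
    decision value as one extra feature column."""
    obj_conf = object_clf.decision_function(object_view_X).reshape(-1, 1)
    return np.hstack([event_X, obj_conf])

# assumed toy data: 6 candidate-anaphor pairs, 3 event features / 2 object features each
rng = np.random.default_rng(0)
event_X, object_X = rng.normal(size=(6, 3)), rng.normal(size=(6, 2))
y_object, y_event = [1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]

object_clf = SVC().fit(object_X, y_object)                 # competing object resolver
event_X_aug = augment_with_object_confidence(event_X, object_clf, object_X)
event_clf = SVC().fit(event_X_aug, y_event)                # re-trained event resolver
```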
{
"text": "As we mentioned previously, the traditional training instance selection strategy as in (Ng & Cardie, 2002) has a significant weakness. The original purpose of mention pair resolvers is to identify any two coreferent mentions (not restricted to the closest one). By using the previous training instance selection strategy, the selected training instances actually represent a sample space of locally closest preferable mention vs. locally non-preferable mentions. In most of previous works, it shows a reasonably good performance when using with \"best-link\" chain formation technique. Our investigation shows it actually misguided the graph partitioning methods. Therefore, we propose a revised training instance selection strategy which reflects the true sample space of the original coreferent/non-coreferent status between mentions. In brief, our revised strategy exhaustively selects all the coreferent mention-pairs as positive instances and noncoreferent pairs as negative instances regardless of their closeness to the anaphor. space is represented using our training instances selection strategy.",
"cite_spans": [
{
"start": 87,
"end": 106,
"text": "(Ng & Cardie, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Revised Training Instances Selection Strategy",
"sec_num": null
},
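For contrast with the earlier sketch of the traditional strategy, here is a minimal standalone sketch of the revised, exhaustive selection; representing mentions as (index, gold chain id) pairs is an assumption for illustration.

```python
# Sketch of the revised strategy: every preceding mention of an anaphor yields an
# instance, labelled by the gold coreference relation, regardless of distance.
def revised_instances(mentions):
    """mentions: list of (index, gold_chain_id) in document order; chain_id None = singleton."""
    instances = []
    for j, (ana_idx, ana_chain) in enumerate(mentions):
        for cand_idx, cand_chain in mentions[:j]:
            label = 1 if (ana_chain is not None and cand_chain == ana_chain) else 0
            instances.append((cand_idx, ana_idx, label))
    return instances

# toy usage: mentions 0..3, where 0, 2 and 3 share a gold chain
print(revised_instances([(0, "e1"), (1, None), (2, "e1"), (3, "e1")]))
```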
{
"text": "After deriving the potential coreferent mention pairs, we further use spectral graph partitioning as described in (Ng et al, 2002) to form the globally optimized coreference chains. As we mentioned previously, traditional chain formation technique suffers from a local decision (as in best-link approaches) or failure to incorporate pronoun information (as in best-cut approaches). Spectral graph partitioning shows its advantages over previous approaches. Spectral graph partitioning (aka. Spectral clustering) has made its success in a number of fields such as image segmentation in (Shi & Malik, 2000) and gene expression clustering in (Shamir & Sharan, 2002) .",
"cite_spans": [
{
"start": 114,
"end": 130,
"text": "(Ng et al, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 585,
"end": 604,
"text": "(Shi & Malik, 2000)",
"ref_id": "BIBREF11"
},
{
"start": 639,
"end": 662,
"text": "(Shamir & Sharan, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Graph Partitioning",
"sec_num": "4.2"
},
{
"text": "Compared to the \"traditional algorithms\" such as k-means or minimum-cut, spectral clustering has many fundamental advantages. Results obtained by spectral clustering often outperform the traditional approaches, spectral clustering is very simple to implement and can be solved efficiently by standard linear algebra methods. More attractively, according to (Luxburg, 2006) , spectral clustering does not intrinsically suffer from local optima problem. In this paper, the similarity graph is formed in similar way as in (Nicolae & Nicolae, 2006) using SVM confidence 3 outputs.",
"cite_spans": [
{
"start": 357,
"end": 372,
"text": "(Luxburg, 2006)",
"ref_id": "BIBREF5"
},
{
"start": 519,
"end": 544,
"text": "(Nicolae & Nicolae, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Graph Partitioning",
"sec_num": "4.2"
},
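A compact sketch of spectral partitioning in the Ng-Jordan-Weiss style cited above, applied to a mention similarity matrix. The toy similarity values, the assumption that the number of chains k is known in advance, and the use of scikit-learn's KMeans for the final step are all illustrative simplifications (in practice k must itself be estimated).

```python
# Sketch: normalized-affinity eigen-decomposition followed by k-means on the embedded rows.
import numpy as np
from sklearn.cluster import KMeans

def spectral_partition(W, k):
    """W: symmetric (n x n) non-negative similarity matrix over mentions; k: number of chains."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]        # D^-1/2 W D^-1/2
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    U = eigvecs[:, -k:]                                          # top-k eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)  # row-normalize
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

# toy usage: 5 mentions whose pairwise confidences form two obvious groups
W = np.array([[0, .9, .8, .1, 0],
              [.9, 0, .7, 0, .1],
              [.8, .7, 0, .1, 0],
              [.1, 0, .1, 0, .9],
              [0, .1, 0, .9, 0]], dtype=float)
print(spectral_partition(W, k=2))
```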
{
"text": "Besides the simplicity and efficiency of spectral graph partitioning, one particular reason to employ spectral partitioning is that the previous best-cut approach failed to incorporate pronoun information in their similarity graph. It may not be an issue in object coreference scenario as pronouns are only a relatively small proportion (9.78% of object mentions in OntoNotes). However, in event cases, pronouns contribute 18.8% of the event mentions. As we further demonstrated in our corpus study, event chains are relatively more sparse and shorter than object chains. Removing pronouns from the similarity graph will break a significant proportion of the event chains. Thus we propose this spectral graph partitioning approach to overcome this weakness from the previous models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utilizing Pronoun Information",
"sec_num": null
},
{
"text": "Instead of re-implementing the minimum-cut algorithm, we apply the spectral partitioning to a similarity graph without pronoun information. This setting is based on two considerations. Firstly, spectral partitioning is theoretically equivalent to minimum-cut partitioning which means they can handle the same problem set. Secondly, by using the same model, we can eliminate any empirical difference in these two partitioning algorithms and show the true contribution from incorporating pronoun information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utilizing Pronoun Information",
"sec_num": null
},
{
"text": "In this section, we present various sets of experiment results to verify the effectiveness of our proposed methods individually and collectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings and Results",
"sec_num": "5"
},
{
"text": "The corpus we used is OntoNotes2.0 which contains 300K of English news wire data from Wall Street Journal and 200K of English broadcasting news from various sources including (ABC, CNN and etc.). OntoNotes2.0 provides gold annotation for parsing, named entity, and coreference. The distribution of event coreference is tabulated below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Study",
"sec_num": "5.1"
},
{
"text": "The distribution of event chains is quite sparse. In average, an article contains only 2.6 event chains comparing to 9.7 object chains. Furthermore, event chains are generally shorter than object chains. Each event chain contains 2.72 mentions comparing to 3.74 in the object chains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Study",
"sec_num": "5.1"
},
{
"text": "Settings In this work, we employ two performance metrics for evaluation purposes. At mention-pair level, we used the standard pair-wise precision/recall/f-score to evaluate the seven mentionpair resolvers. At coreference chain level, we use B-Cube (B 3 ) measure as proposed in (Bagga & Baldwin, 1998) . B 3 provides an overall evaluation of coreference chains instead of coreferent links. Thus it is widely used in previous works.",
"cite_spans": [
{
"start": 278,
"end": 301,
"text": "(Bagga & Baldwin, 1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Metrics & Experiment",
"sec_num": "5.2"
},
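A small sketch of one common formulation of the B^3 metric follows. The handling of mentions missing from either side (treated as singletons here) varies across implementations, so this is illustrative rather than the exact scorer used in the paper.

```python
# Sketch of B-Cubed (Bagga & Baldwin, 1998): per-mention precision and recall of the
# overlap between that mention's system chain and gold chain, averaged over mentions.
def b_cubed(gold_chains, system_chains):
    gold = {m: frozenset(c) for c in gold_chains for m in c}
    system = {m: frozenset(c) for c in system_chains for m in c}
    mentions = set(gold) | set(system)
    p_sum = r_sum = 0.0
    for m in mentions:
        g = gold.get(m, frozenset([m]))      # mention absent on one side counts as a singleton
        s = system.get(m, frozenset([m]))
        overlap = len(g & s)
        p_sum += overlap / len(s)
        r_sum += overlap / len(g)
    p, r = p_sum / len(mentions), r_sum / len(mentions)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# toy usage: one gold event chain, split in two by the system
gold = [{"fired1", "it", "fired2", "the attack"}]
system = [{"fired1", "it"}, {"fired2", "the attack"}]
print(b_cubed(gold, system))   # precision 1.0, recall 0.5
```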
{
"text": "For each experiment conducted, we use the following data splitting. 400 articles are reserved to train the object NP-Pronoun and NP-NP resolvers. (400 news articles are sufficient for object coreference training, comparing with other data sets used for both training and testing such as 519 articles in ACE-02, 60 articles in MUC-6 and 50 articles in MUC-7.) Among the remaining 1118 articles, we random selected 894 (80%) for training the 5 event resolvers while the other 224 articles are used for testing. In order to separate the propagated errors from preprocessing procedures such as parsing and NE tagging, we used OntoNotes 2.0 gold annotation for Parsing and Named Entities only. Coreferent mentions are generated by our system instead of using the gold annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Metrics & Experiment",
"sec_num": "5.2"
},
{
"text": "In order to test the significance in performance differences, we perform paired t-test at 5% level of significance. We conduct the experiments 20 times through a random sampling method to perform meaningful statistical significance test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# of Articles # of Chains # of Mentions",
"sec_num": null
},
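This significance test can be run with a few lines of SciPy; the per-run scores below are placeholder values for illustration only, not results from the paper.

```python
# Sketch: paired t-test over matched per-run B^3 F-scores of two systems.
from scipy import stats

baseline_f = [38.1, 38.5, 37.9, 38.6, 38.4]   # assumed per-run scores (placeholder values)
improved_f = [43.4, 43.9, 43.2, 43.8, 43.7]

t_stat, p_value = stats.ttest_rel(improved_f, baseline_f)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 5%: {p_value < 0.05}")
```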
{
"text": "In this section, we will present the experiment results to verify each of the improvements we proposed in previous sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.3"
},
{
"text": "The first set of experiment results presented here is the seven mention-pair resolvers using all conventional settings without any proposed methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention-Pair Models Performances",
"sec_num": null
},
{
"text": "The Verb-Verb resolver performance is particularly low due to lack of training instances where only 48 positive instances available from the corpus. Our Mention-pair models are not directly comparable with (Chen et al, 2010a;b) which used gold annotation for object coreference information while we resolve such coreferent pairs using our trained resolvers. There are also a number of differences in the preprocessing stage which makes the direct comparison impractical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention-Pair Models Performances",
"sec_num": null
},
{
"text": "The coreference chains formed using spectral partitioning without any proposed improvements yields a B 3 f-score of 38.33% which serves as our initial baseline (BL) for further comparisons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention-Pair Models Performances",
"sec_num": null
},
{
"text": "Since object resolver results are in general better than event resolver, we propose to utilize competing object classifiers' results to improve event resolvers' performance. The experiment results are tabulated below. The \"BL+CC\" row presents the performance when utilized competing classifiers' results into the baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utilizing Competing Classifiers' Results",
"sec_num": null
},
{
"text": "By incorporating the object coreference information, we manage to improve the event coreference resolution significantly, more than 9% F-score for Verb-Pronoun resolver and about 7% F-score for event NP-Pronoun resolver. Object coreference information improves pronoun resolution more than NP resolution. This is mainly because pronouns contain much less information than NP. Such additional information will helps greatly in preventing object pronouns to be resolved by event resolvers mistakenly. Although object coreference is incorporated at mention-pair level, we still measures its contribution to B 3 score at chain level. It improves the B 3 f-score from 38.33% to 43.61% which is a 5.28% improvement. This observation also shows the importance of collective decision of multiple classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utilizing Competing Classifiers' Results",
"sec_num": null
},
{
"text": "The second technique we proposed is a revised training instances selection strategy. Table 5 shows improvement using revised instance selection strategy. We refer the traditional instance selection strategy as \"BL+CC\" and our proposed instance selection strategy as \"BL+CC+RIS\" (Revised Instance Selection). At mention-pair level we take event NP-Pronoun resolver for demonstration. Similar behaviors are observed in all the mention-pair models. In order to demonstrate power of revised instance selection scheme, we evaluate the mention-pair results in two different ways. The best-candidate evaluation follows the traditional mention pair evaluation. It firstly groups mention-pair predictions by anaphor. Then an anaphor is correctly resolved as long as the candidate-anaphora pair with highest resolver's score is the true antecedentanaphor pair. The correct/wrong of other candidates' resolution outputs are not counted at all. The coreferent link evaluation counts each candidate-anaphor pair resolution separately. Intuitively, best-candidate evaluation measures how good a resolver can rank the candidates while the coreferent link evaluation measures the how good a resolver identifies coreferent pairs. An interesting phenomenon here is the performance evaluation using the best candidate actually drops 3.26% in f-measure when employing the revised instance selection scheme. But when we look at the coreferent link results, the revised instance selection scheme improves the performance by 2.84% f-measure. As a result, our revised instance selection scheme trains better classifier with higher coreferent link prediction results. Since this coreferent link information is further used in the final chain formation step. Our revised scheme contributes an improvement on the final event chain formation by 2.02% F-Score in B 3 measure. This observation shows that the traditional mention-pair model should be revised to maximize the coreferent link performance instead of the traditional best-candidate performance. Because the coreferent link performance is more influential to the final chain formation process using graph partitioning approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 92,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Revised Instance Selection",
"sec_num": null
},
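The two evaluation views above can be sketched as follows, assuming the resolver emits (candidate, anaphor, score, gold_label) tuples; the decision threshold and tie-breaking are illustrative simplifications.

```python
# Sketch: best-candidate accuracy vs. coreferent-link precision/recall/F for the same predictions.
from collections import defaultdict

def best_candidate_accuracy(preds):
    """Per anaphor, is the top-scoring candidate a true antecedent?"""
    by_ana = defaultdict(list)
    for cand, ana, score, gold in preds:
        by_ana[ana].append((score, gold))
    hits = sum(max(rows)[1] for rows in by_ana.values())   # take the gold label of the top-scoring pair
    return hits / len(by_ana)

def coreferent_link_prf(preds, threshold=0.0):
    """Each candidate-anaphor decision is scored independently."""
    tp = sum(1 for _, _, s, g in preds if s >= threshold and g)
    fp = sum(1 for _, _, s, g in preds if s >= threshold and not g)
    fn = sum(1 for _, _, s, g in preds if s < threshold and g)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```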
{
"text": "The third improvement we proposed is the spectral partitioning with pronoun information. The performance improvement is demonstrated in table 6. In order to separate the contribution from incorporating pronouns and revising instance 4 The B 3 -F-Score difference between RIS and Baseline is statistically significant using paired t-test at 5% level of significance selection, we conducted the experiment using traditional training instance selection. Table 6 : Performance in % using pronoun information By incorporating the coreferent pronoun information, the performance is significantly improved by 2.19% in f-measure. By further incorporating the revised instance selection scheme, we achieve B 3-Precision/Recall/F-Score as 35.27 / 70.02 / 46.91% respectively which is an 8.54% F-score improvement from the initial resolution system. 46.91% F-score is the highest performance we achieved in this event coreference resolution work.",
"cite_spans": [
{
"start": 233,
"end": 234,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Spectral Partitioning Utilizing Pronoun Information",
"sec_num": null
},
{
"text": "This paper presents a unified event coreference resolution system by integrating multiple mention-pair classifiers including 3 new mention-pair resolvers. Furthermore, we proposed three techniques to enhance the resolution performance. First, we utilize the competing classifiers' results to enhance mention-pair model. Then we propose the revised training instance selection scheme to provide better coreferent link information to graph partitioning model. Lastly, we employ spectral partitioning method with pronoun information to improve chain formation performance. All the three techniques contribute to a significant improvement of 8.54% over the initial 38.33% in B 3 F-score. In future, we plan to incorporate more semantic knowledge for mentionpair models such as semantic roles and word senses. For chain formation, we plan to incorporate domain knowledge to enforce chain consistency. 5 The B 3 -F-Score difference between Baseline and Base-line+Pronoun is statistically significant using paired t-test at 5% level of significance Table 5 : Performance in % using revised instance selection",
"cite_spans": [
{
"start": 896,
"end": 897,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1042,
"end": 1049,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion and Future Works",
"sec_num": "6"
},
{
"text": "Event roles refer to the arguments of the event such as actuator, patient, time, location and etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(Stoyanov et al, 2010) reportedRECONCILE(two-steps) achieving 74.25% B 3 f-score on ACE 2005. (Haghighi & Klein, 2010) using single-step approach reported 75.10% B 3 f-score on the same dataset with same train/test-splitting. According to our experiences, such a 0.95% difference is not statistically significant. Other single-step works as(Rahman & Ng, 2009) and(Poon & Domingo, 2008) reported clearly lower B 3 f-score than RECONCILE using same datasets but different train/test-splitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Confidence is computed from kernel outputs using sigmoid function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reference to Abstract Objects in Discourse",
"authors": [
{
"first": "N",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asher,N. 1993. Reference to Abstract Objects in Dis- course. Kluwer Academic Publisher.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Linguistic Coreference Workshop at Conference on Language Resources and Evaluation (LREC-1998)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bagga,A. & Baldwin,B. 1998. Algorithms for scoring coreference chains. In Proceedings of the Linguis- tic Coreference Workshop at Conference on Lan- guage Resources and Evaluation (LREC-1998).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Resolving Pronominal Reference to Abstract Entities",
"authors": [
{
"first": "D",
"middle": [],
"last": "Byron",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byron,D. 2002. Resolving Pronominal Reference to Abstract Entities, In Proceedings of the 40th An- nual Meeting of the Association for Computational Linguistics (ACL), USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Twin-Candidate Based Approach for Event Pronoun Resolution using Composite Kernel",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "M",
"middle": [
";"
],
"last": "Strube",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "China",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Tan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cai,J. & Strube,M. 2010. End-to-End Coreference Resolution via Hypergraph Partitioning. In Pro- ceedings of the International Conference on Com- putational Linguistics (COLING), China, Chen,B.; Su,J. & Tan,C.L. 2010a. A Twin-Candidate Based Approach for Event Pronoun Resolution us- ing Composite Kernel. In Proceedings of the In- ternational Conference on Computational Linguis- tics (COLING), China, Chen,B.; Su,J. & Tan,C.L. 2010b. Resolving Noun Phrases to their Verbal Mentions. In Proceeding of conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "OntoNotes: The 90% Solution",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hovy,E.; Marcus,M.; Palmer,M.; Ramshaw,L. & Weischedel,R. 2006. OntoNotes: The 90% Solu- tion. In Proceedings of the Human Language Technology Conference of the NAACL, USA",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Tubingen: Max Planck Institute for Biological Cybernetic M\u00fcller,C. 2007. Resolving it, this, and that in unrestricted multi-party dialog",
"authors": [
{
"first": "U",
"middle": [],
"last": "Luxburg",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL-2007",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luxburg,U. 2006. A tutorial on spectral clustering. In (MPITechnical Reports No. 149). Tubingen: Max Planck Institute for Biological Cybernetic M\u00fcller,C. 2007. Resolving it, this, and that in unre- stricted multi-party dialog. In Proceedings of ACL-2007, Czech Republic.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Combining sample selection and error-driven pruning for machine learning of coreference rules",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "Y",
"middle": [
"; V"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of conference on Empirical Methods in Natural Language Processing",
"volume": "14",
"issue": "",
"pages": "849--856",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ng,A.; Jordan,M. & Weiss,Y. 2002. On spectral clus- tering: analysis and an algorithm. In Advances in Neural Information Processing Systems 14 (pp. 849 -856). MIT Press Ng,V. & Cardie,C. 2002. Combining sample selection and error-driven pruning for machine learning of coreference rules. In Proceedings of conference on Empirical Methods in Natural Language Processing (EMNLP), USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BESTCUT: A Graph Algorithm for Coreference Resolution",
"authors": [
{
"first": "C",
"middle": [],
"last": "Nicolae",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "& Nicolae",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolae,C & Nicolae,G. 2006. BESTCUT: A Graph Algorithm for Coreference Resolution. In Pro- ceedings of conference on Empirical Methods in Natural Language Processing (EMNLP), Austrilia.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Joint Unsupervised Coreference Resolution with Markov Logic",
"authors": [
{
"first": "H",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "",
"middle": [
"P"
],
"last": "Domingos",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poon,H. & Domingos.P. 2008. Joint Unsupervised Coreference Resolution with Markov Logic. In Proceedings of the Conference on Empirical Me- thods in Natural Language Processing (EMNLP), 2008.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unrestricted Coreference: Identifying Entities and Events in OntoNotes",
"authors": [
{
"first": "S",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Macbride",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Micciulla",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the IEEE International Conference on Semantic Computing (ICSC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradhan,S.; Ramshaw,L.; Weischedel,R.; MacBride,J. & Micciulla,L. 2007. Unrestricted Coreference: Identifying Entities and Events in OntoNotes. In Proceedings of the IEEE International Conference on Semantic Computing (ICSC), USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Algorithmic approaches to clustering gene expression data. Current Topics in Computational Molecular Biology",
"authors": [
{
"first": "R",
"middle": [],
"last": "Shamir",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sharan",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "269--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shamir,R.; and Sharan,R. 2002. Algorithmic ap- proaches to clustering gene expression data. Cur- rent Topics in Computational Molecular Biology, 269-300.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Normalized cuts and image segmentation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence(PAMI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shi,J. & Malik,J. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence(PAMI).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Focusing in the comprehension of definite anaphora",
"authors": [
{
"first": "C",
"middle": [
"L"
],
"last": "Sidner",
"suffix": ""
}
],
"year": 1983,
"venue": "Computational Models of Discourse",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sidner,C.L. 1983. Focusing in the comprehension of definite anaphora. In Computational Models of Discourse. MIT Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "W",
"middle": [],
"last": "Soon",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soon,W.; Ng,H. & Lim, D. 2001. A machine learning approach to coreference resolution of noun phras- es. Computational Linguistics, 27(4):521-544.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Coreference Resolution with Reconcile",
"authors": [
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Buttler",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hysom",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stoyanov,V.; Cardie,C.; Gilbert,N.; Riloff,E.; Butt- ler,D. & Hysom,D. 2010 Coreference Resolution with Reconcile. In Proceedings of the Conference of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010)",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "BART: A Modular Toolkit for Coreference Resolution",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "S",
"middle": [
"P"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Jern",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Versley,Y.; Ponzetto,S.P.; Poesio,M.; Eidelman,V.; Jern,A.; Smith,J.; Yang,X. & Moschitti,A. 2008. BART: A Modular Toolkit for Coreference Reso- lution. In Proceedings of the 6th International Conference on Language Resources and Evalua- tion (LREC 2008).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Kernel-Based Pronoun Resolution with Structured Syntactic Knowledge",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Tan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang,X.; Su,J. & Tan,C.L. 2006. Kernel-Based Pro- noun Resolution with Structured Syntactic Know- ledge. In Proceedings of the Conference of the 46th Annual Meeting of the Association for Com- putational Linguistics. Australia.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Twin-Candidates Model for Learning-Based Coreference Resolution",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Tan",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "",
"pages": "327--356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang,X.; Su,J. & Tan,C.L. 2008. A Twin-Candidates Model for Learning-Based Coreference Resolution. In Computational Linguistics, 34(3):327-356.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 1: System Overview"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"text": "are referring to the same event (an Israel attack in Gaza Strip on Palestinian",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table><tr><td>Verb-Pronoun</td><td>32.34</td><td>68.32</td><td>43.90</td></tr><tr><td>Verb-NP</td><td>54.22</td><td>68.56</td><td>60.55</td></tr><tr><td>Verb-Verb</td><td>22.47</td><td>83.33</td><td>35.40</td></tr><tr><td>NP-Pronoun</td><td>46.62</td><td>70.47</td><td>56.12</td></tr><tr><td>NP-NP</td><td>48.83</td><td>60.08</td><td>53.88</td></tr><tr><td>Object Resolvers</td><td/><td/><td/></tr><tr><td>NP-NP</td><td>58.89</td><td>66.04</td><td>62.26</td></tr><tr><td>NP-Pronoun</td><td>61.37</td><td>84.33</td><td>71.04</td></tr><tr><td>Event Chain B 3</td><td colspan=\"3\">Precision Recall F-Score</td></tr><tr><td>BL</td><td>26.67</td><td>68.09</td><td>38.33</td></tr><tr><td colspan=\"4\">Table 3: Mention-Pair Performance in %</td></tr></table>",
"type_str": "table",
"text": "",
"num": null,
"html": null
},
"TABREF5": {
"content": "<table><tr><td>Mention-Pair</td><td colspan=\"3\">Precision Recall F-Score</td></tr><tr><td colspan=\"2\">Event Verb-Pronoun Resolver</td><td/><td/></tr><tr><td>w/o object info</td><td>32.34</td><td>68.32</td><td>43.90</td></tr><tr><td>with object info</td><td>45.09</td><td>64.73</td><td>53.00</td></tr><tr><td colspan=\"2\">Event Verb-NP Resolver</td><td/><td/></tr><tr><td>w/o object info</td><td>54.22</td><td>68.56</td><td>60.55</td></tr><tr><td>with object info</td><td>56.67</td><td>67.61</td><td>61.66</td></tr><tr><td colspan=\"2\">Event NP-Pronoun Resolver</td><td/><td/></tr><tr><td>w/o object info</td><td>46.62</td><td>70.47</td><td>56.12</td></tr><tr><td>with object info</td><td>57.83</td><td>69.15</td><td>62.99</td></tr><tr><td colspan=\"2\">Event NP-NP Resolver</td><td/><td/></tr><tr><td>w/o object info</td><td>48.83</td><td>60.08</td><td>53.88</td></tr><tr><td>with object info</td><td>51.35</td><td>59.20</td><td>55.00</td></tr><tr><td>Event Chain B 3</td><td colspan=\"3\">Precision Recall F-Score</td></tr><tr><td>BL</td><td>26.67</td><td>68.09</td><td>38.33</td></tr><tr><td>BL + CC</td><td>32.33</td><td>67.08</td><td>43.61</td></tr><tr><td colspan=\"4\">: Performance in % using competing classifi-</td></tr><tr><td/><td>ers' results</td><td/><td/></tr></table>",
"type_str": "table",
"text": "",
"num": null,
"html": null
},
"TABREF7": {
"content": "<table><tr><td colspan=\"4\">Event NP-Pronoun using Best Candidate Evaluation</td></tr><tr><td>BL+CC</td><td>57.83</td><td>69.15</td><td>62.99</td></tr><tr><td>BL+CC+RIS</td><td>52.05</td><td>67.11</td><td>58.63</td></tr><tr><td colspan=\"4\">Event NP-Pronoun using Coreferent Link Evaluation</td></tr><tr><td>BL+CC</td><td>39.96</td><td>64.03</td><td>49.21</td></tr><tr><td>BL+CC+RIS</td><td>43.33</td><td>65.47</td><td>52.15</td></tr><tr><td>Event Chain B 3</td><td colspan=\"3\">Precision Recall F-Score</td></tr><tr><td>BL+CC</td><td>32.33</td><td>67.08</td><td>43.61 1</td></tr><tr><td>BL+CC+RIS</td><td>35.21</td><td>64.74</td><td>45.63 4</td></tr></table>",
"type_str": "table",
"text": "",
"num": null,
"html": null
}
}
}
}