{ "paper_id": "I11-1044", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:30:42.212479Z" }, "title": "Extracting Relation Descriptors with Conditional Random Fields", "authors": [ { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore Management University", "location": { "country": "Singapore" } }, "email": "ylli@smu.edu.sg" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore Management University", "location": { "country": "Singapore" } }, "email": "jingjiang@smu.edu.sg" }, { "first": "Hai", "middle": [], "last": "Leong Chieu", "suffix": "", "affiliation": { "laboratory": "", "institution": "DSO National Laboratories", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Ming", "middle": [ "A" ], "last": "Kian", "suffix": "", "affiliation": {}, "email": "ckianmin@dso.org.sg" }, { "first": "", "middle": [], "last": "Chai", "suffix": "", "affiliation": { "laboratory": "", "institution": "DSO National Laboratories", "location": { "country": "Singapore" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we study a novel relation extraction problem where a general relation type is defined but relation extraction involves extracting specific relation descriptors from text. This new task can be treated as a sequence labeling problem. Although linear-chain conditional random fields (CRFs) can be used to solve this problem, we modify this baseline solution in order to better fit our task. We propose two modifications to linear-chain CRFs, namely, reducing the space of possible label sequences and introducing long-range features. Both modifications are based on some special properties of our task. 
Using two data sets we have annotated, we evaluate our methods and find that both modifications to linear-chain CRFs can significantly improve the performance for our task.", "pdf_parse": { "paper_id": "I11-1044", "_pdf_hash": "", "abstract": [ { "text": "In this paper we study a novel relation extraction problem where a general relation type is defined but relation extraction involves extracting specific relation descriptors from text. This new task can be treated as a sequence labeling problem. Although linear-chain conditional random fields (CRFs) can be used to solve this problem, we modify this baseline solution in order to better fit our task. We propose two modifications to linear-chain CRFs, namely, reducing the space of possible label sequences and introducing long-range features. Both modifications are based on some special properties of our task. Using two data sets we have annotated, we evaluate our methods and find that both modifications to linear-chain CRFs can significantly improve the performance for our task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Relation extraction is the task of identifying and characterizing the semantic relations between entities in text. Depending on the application and the resources available, relation extraction has been studied in a number of different settings. When relation types are well defined and labeled relation mention instances are available, supervised learning is usually applied (Zelenko et al., 2003; Zhou et al., 2005; Bunescu and Mooney, 2005; Zhang et al., 2006) . When relation types are known but little training data is available, bootstrapping has been used to iteratively expand the set of seed examples and relation patterns (Agichtein and Gravano, 2000) . 
When no relation type is pre-defined but there is a focused corpus of interest, unsupervised relation discovery tries to cluster entity pairs in order to identify interesting relation types (Hasegawa et al., 2004; Rosenfeld and Feldman, 2006; Shinyama and Sekine, 2006) . More recently, open relation extraction has also been proposed where there is no fixed domain or predefined relation type, and the goal is to identify all possible relations from an open-domain corpus (Banko and Etzioni, 2008; Wu and Weld, 2010; Hoffmann et al., 2010) .", "cite_spans": [ { "start": 375, "end": 397, "text": "(Zelenko et al., 2003;", "ref_id": "BIBREF13" }, { "start": 398, "end": 416, "text": "Zhou et al., 2005;", "ref_id": "BIBREF15" }, { "start": 417, "end": 442, "text": "Bunescu and Mooney, 2005;", "ref_id": "BIBREF2" }, { "start": 443, "end": 462, "text": "Zhang et al., 2006)", "ref_id": "BIBREF14" }, { "start": 631, "end": 660, "text": "(Agichtein and Gravano, 2000)", "ref_id": "BIBREF0" }, { "start": 853, "end": 876, "text": "(Hasegawa et al., 2004;", "ref_id": "BIBREF5" }, { "start": 877, "end": 905, "text": "Rosenfeld and Feldman, 2006;", "ref_id": "BIBREF9" }, { "start": 906, "end": 932, "text": "Shinyama and Sekine, 2006)", "ref_id": "BIBREF11" }, { "start": 1136, "end": 1161, "text": "(Banko and Etzioni, 2008;", "ref_id": "BIBREF1" }, { "start": 1162, "end": 1180, "text": "Wu and Weld, 2010;", "ref_id": "BIBREF12" }, { "start": 1181, "end": 1203, "text": "Hoffmann et al., 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These different relation extraction settings suit different applications. In this paper, we focus on another setting where the relation types are defined at a general level but a more specific relation description is desired. For example, in the widely used ACE 1 data sets, relation types are defined at a fairly coarse granularity. 
Take for instance the \"employment\" relation, which is a major relation type defined in ACE. In ACE evaluation, extraction of this relation only involves deciding whether a person entity is employed by an organization entity. In practice, however, we often also want to find the exact job title or position this person holds at the organization if this information is mentioned in the text. Table 1 gives some examples. We refer to the segment of text that describes the specific relation between the two related entities (i.e., the two arguments) as the relation descriptor. This paper studies how to extract such relation descriptors given two arguments. One may approach this task as a sequence labeling problem and apply methods such as the linear-chain conditional random fields (CRFs) (Lafferty et al., 2001 ). However, this solution ignores a useful property of the task: the space of possible label sequences is much smaller than that enumerated by a linear-chain CRF. There are two implications. First, the normalization constant in the linear-chain CRF is too large because it also enumerates the impossible sequences. Second, the restriction to the correct space of label sequence per- ", "cite_spans": [ { "start": 1124, "end": 1146, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 724, "end": 731, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A ARG-2 spokesman , ARG-1 , said the company now ... spokesman At ARG-2 , by contrast , ARG-1 said customers spend on ... Nil", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "vice president (PER, ORG)", "sec_num": null }, { "text": "Personal/Social ARG-1 had an elder brother named ARG-2 . an elder brother (PER, PER)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "vice president (PER, ORG)", "sec_num": null }, { "text": "ARG-1 was born at ... , as the son of ARG-2 of Sweden ... 
the son ARG-1 later married ARG-2 in 1973 , ... married Through his contact with ARG-1 , ARG-2 joined the Greek Orthodox Church . Nil Table 1 : Some examples of candidate relation instances and their relation descriptors.", "cite_spans": [], "ref_spans": [ { "start": 192, "end": 199, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "vice president (PER, ORG)", "sec_num": null }, { "text": "mits the use of long-range features without an exponential increase in computational cost. We compare the performance of the baseline linear-chain CRF model and our special CRF model on two data sets that we have manually annotated. Our experimental results show that both reducing the label sequence space and introducing long-range features can significantly improve the baseline performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "vice president (PER, ORG)", "sec_num": null }, { "text": "The rest of the paper is organized as follows. In Section 2 we review related work. We then formally define our task in Section 3. In Section 4 we present a baseline linear-chain CRF-based solution and our modifications to the baseline method. We discuss the annotation of our data sets and show our experimental results in Section 5. We conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "vice president (PER, ORG)", "sec_num": null }, { "text": "Most existing work on relation extraction studies binary relations between two entities. For supervised relation extraction, existing work often uses the ACE benchmark data sets for evaluation (Bunescu and Mooney, 2005; Zhou et al., 2005; Zhang et al., 2006) . In this setting, a set of relation types are defined and the task is to identify pairs of entities that are related and to classify their relations into one of the pre-defined relation types. It is assumed that the relation type itself is sufficient to characterize the relation between the two related entities. 
However, based on our observation, some of the relation types defined in ACE such as the \"employment\" relation and the \"personal/social\" relation are very general and can be further characterized by more specific descriptions.", "cite_spans": [ { "start": 193, "end": 219, "text": "(Bunescu and Mooney, 2005;", "ref_id": "BIBREF2" }, { "start": 220, "end": 238, "text": "Zhou et al., 2005;", "ref_id": "BIBREF15" }, { "start": 239, "end": 258, "text": "Zhang et al., 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently open relation extraction has been proposed for open-domain information extraction (Banko and Etzioni, 2008) . Since there are no fixed relation types, open relation extraction aims at extracting all possible relations between pairs of entities. The extracted results are (ARG-1, REL, ARG-2) tuples. The TextRunner system based on (Banko and Etzioni, 2008) extracts a diverse set of relations from a huge Web corpus. These extracted predicate-argument tuples are presumably the most useful to support Web search scenarios where the user is looking for specific relations. However, because of the diversity of the extracted relations and the domain independence, open relation extraction is probably not suitable for populating relational databases or knowledge bases. 
In contrast, the task of extracting relation descriptors as we have proposed still assumes a pre-defined general relation type, which ensures that the extracted tuples follow the same relation definition and thus can be used in applications such as populating relational databases.", "cite_spans": [ { "start": 91, "end": 116, "text": "(Banko and Etzioni, 2008)", "ref_id": "BIBREF1" }, { "start": 339, "end": 364, "text": "(Banko and Etzioni, 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In terms of models and techniques, we use standard linear-chain CRF as our baseline, which is the main method used in (Banko and Etzioni, 2008) as well as for many other information extraction problems. The major modifications we propose for our task are the reduction of the label sequence space and the incorporation of long-range features. We note that these modifications are closely related to the semi-Markov CRF models proposed by Sarawagi and Cohen (2005) . In fact, the modified CRF model for our task can be considered as a special case of semi-Markov CRF where we only consider label sequences that contain at most one relation descriptor sequence.", "cite_spans": [ { "start": 118, "end": 143, "text": "(Banko and Etzioni, 2008)", "ref_id": "BIBREF1" }, { "start": 438, "end": 463, "text": "Sarawagi and Cohen (2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section we define the task of extracting relation descriptors for a given pre-defined class of relations such as \"employment.\" Given two named entities occurring in the same sentence, one acting as ARG-1 and the other as ARG-2, we aim to extract a segment of text from the sentence that best describes a pre-defined general relation between the two entities. Formally, let (w 1 , w 2 , . . . 
, w n ) denote the sequence of tokens in a sentence, where w p is ARG-1 and w q is ARG-2 (1 \u2264 p, q \u2264 n, p \u2260 q). Our goal is to locate a subsequence (w r , . . . , w s ) (1 \u2264 r \u2264 s \u2264 n) that best describes the relation between ARG-1 and ARG-2. If ARG-1 and ARG-2 are not related through the pre-defined general relation, Nil should be returned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "3" }, { "text": "The above definition constrains ARG-1 and ARG-2 to single tokens. In our experiments, we will replace the original lexical strings of ARG-1 and ARG-2 with the generic tokens ARG1 and ARG2. Examples of sentences with the named entities replaced with argument tokens are shown in the second column of Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 306, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Task Definition", "sec_num": "3" }, { "text": "The relation descriptor extraction task can be treated as a sequence labeling problem. Let x = (x 1 , x 2 , . . . , x n ) denote the sequence of observations in a relation instance, where x i is w i augmented with additional information such as the POS tag of w i , and the phrase boundary information. Each observation x i is associated with a label y i \u2208 Y which indicates whether w i is part of the relation descriptor. Following the commonly used BIO notation (Ramshaw and Marcus, 1995) in sequence labeling, we define Y = {B-REL, I-REL, O}. Let y = (y 1 , y 2 , . . . , y n ) denote the sequence of labels for x. Our task can be reduced to finding the best label sequence \u0177 among all the possible label sequences for x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 4.1 Representation", "sec_num": "4" }, { "text": "For sequence labeling tasks in NLP, linear-chain CRFs have been rather successful. 
A linear-chain CRF is an undirected graphical model in which the conditional probability of a label sequence y given the observation sequence x is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Linear-Chain CRF Solution", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y|x, \\Lambda) = \\frac{\\exp\\left(\\sum_i \\sum_k \\lambda_k f_k(y_{i-1}, y_i, x)\\right)}{Z(x, \\Lambda)},", "eq_num": "(1)" } ], "section": "A Linear-Chain CRF Solution", "sec_num": "4.2" }, { "text": "where \u039b = {\u03bb k } is the set of model parameters, f k is an arbitrary feature function defined over two consecutive labels and the whole observation sequence, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Linear-Chain CRF Solution", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Z(x, \\Lambda) = \\sum_y \\exp\\left(\\sum_i \\sum_k \\lambda_k f_k(y_{i-1}, y_i, x)\\right)", "eq_num": "(2)" } ], "section": "A Linear-Chain CRF Solution", "sec_num": "4.2" }, { "text": "is the normalization constant. Given a set of training instances {x j , y * j } where y * j is the correct label sequence for x j , we can learn the best model parameters \u039b as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Linear-Chain CRF Solution", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{\\Lambda} = \\arg\\min_{\\Lambda} \\left( -\\sum_j \\log p(y^*_j | x_j, \\Lambda) + \\beta \\sum_k \\lambda_k^2 \\right)
.", "eq_num": "(3)" } ], "section": "A Linear-Chain CRF Solution", "sec_num": "4.2" }, { "text": "Here \u03b2 \u2211 k \u03bb 2 k is a regularization term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Linear-Chain CRF Solution", "sec_num": "4.2" }, { "text": "We note that while we can directly apply linear-chain CRFs to extract relation descriptors, there are some special properties of our task that allow us to modify standard linear-chain CRFs to better suit our needs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Improvement over Linear-Chain CRFs", "sec_num": "4.3" }, { "text": "In linear-chain CRFs, the normalization constant Z considers all possible label sequences y. For the relation descriptor extraction problem, however, we expect that there is either a single relation descriptor sequence or no such sequence. In other words, for a given relation instance, we only expect two kinds of label sequences: (1) All y i are O, and (2) exactly one y i is B-REL followed by zero or more consecutive I-REL while all other y i are O. Therefore the space of label sequences should be reduced to only those that satisfy the above constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label sequence constraint", "sec_num": null }, { "text": "One way to exploit this constraint within linear-chain CRFs is to enforce it only during testing. We can pick the label sequence that has the highest probability in the valid label sequence space instead of the entire label sequence space. For a candidate relation instance x, let \u1ef8 denote the set of valid label sequences, i.e., those that have either one or no relation descriptor sequence. 
We then choose the best sequence \u0177 as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label sequence constraint", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{y} = \\arg\\max_{y \\in \\tilde{Y}} p(y|x, \\hat{\\Lambda}).", "eq_num": "(4)" } ], "section": "Label sequence constraint", "sec_num": null }, { "text": "Arguably, the more principled way to exploit the constraint is to modify the probabilistic model itself. So at the training stage, we should also consider only \u1ef8 by defining the normalization term Z\u0303 as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label sequence constraint", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tilde{Z}(x, \\Lambda) = \\sum_{y \\in \\tilde{Y}} \\exp\\left(\\sum_i \\sum_k \\lambda_k f_k(y_{i-1}, y_i, x)\\right).", "eq_num": "(5)" } ], "section": "Label sequence constraint", "sec_num": null }, { "text": "The difference between Equation (5) and Equation (2) is the set of label sequences considered. In other words, while in linear-chain CRFs the correct label sequence competes with all possible label sequences for probability mass, for our task the correct label sequence should compete with only other valid label sequences. In Section 5 we will compare these two different normalization terms and show the advantage of using Equation (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label sequence constraint", "sec_num": null }, { "text": "In linear-chain CRF models, only first-order label dependencies are considered because features are defined over two consecutive labels. Inference in linear-chain CRFs can be done efficiently using dynamic programming. More general higher-order CRF models also exist, allowing long-range features defined over more than two consecutive labels. 
But the computational cost of higher-order CRFs also increases exponentially with the order of dependency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding long-range features", "sec_num": null }, { "text": "For our task, because of the constraint on the space of label sequences, we can afford to use long-range features. In our case, inference is still efficient because the number of sequences to be enumerated has been drastically reduced due to the constraint. Let g(y, x) denote a feature function defined over the entire label sequence y and the observation sequence x. We can include such feature functions in our model as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding long-range features", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y|x, \\Theta) = \\frac{1}{Z(x, \\Theta)} \\exp\\left(\\sum_i \\sum_k \\lambda_k f_k(y_{i-1}, y_i, x) + \\sum_l \\mu_l g_l(y, x)\\right),", "eq_num": "(6)" } ], "section": "Adding long-range features", "sec_num": null }, { "text": "where \u0398 = {{\u03bb k }, {\u00b5 l }} is the set of all model parameters. Both {\u03bb k } and {\u00b5 l } are regularized as in Equation (3). Note that although each f (y i\u22121 , y i , x) may be subsumed under a g(y, x), here we group all features that can be captured by linear-chain CRFs under f and other real long-range features under g. 
In Section 5 we will see that with the additional feature functions g, relation extraction performance can also be further improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding long-range features", "sec_num": null }, { "text": "We now describe the features we use in the baseline linear-chain CRF model and our modified model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.4" }, { "text": "The linear-chain features are those that can be formulated as f (y i\u22121 , y i , x), i.e., those that depend on x and two consecutive labels only. We use typical features that include tokens, POS tags and phrase boundary information coupled with label values. Let t i denote the POS tag of w i and p i denote the phrase boundary tag of w i . The phrase boundary tags also follow the BIO notation. Examples include B-NP, I-VP, etc. Table 2 shows the feature templates covering only the observations. Each feature shown in Table 2 is further combined with either the value of the current label y i or the values of the previous and the current labels y i\u22121 and y i to form zeroth order and first order features. For example, a zeroth order feature is \"y i is B-REL and w i is the and w i+1 is president\", and a first order feature is \"y i\u22121 is O and y i is B-REL and t i is N\".", "cite_spans": [], "ref_spans": [ { "start": 429, "end": 436, "text": "Table 2", "ref_id": null }, { "start": 519, "end": 526, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Linear-chain features", "sec_num": null }, { "text": "Long-range features are those that cannot be defined based on only two consecutive labels. When defining long-range features, we treat the whole relation descriptor sequence as a single unit, denoted as REL. Given a label sequence y that contains a relation descriptor sequence, let (w r , w r+1 , . . . 
, w s ) denote the relation descriptor, that is, y r = B-REL and y t = I-REL where r + 1 \u2264 t \u2264 s. The long-range features we use are categorized and summarized in Table 3 . These features capture the context of the entire relation descriptor, its relation to the two arguments, and whether the boundary of the relation descriptor conforms to the phrase boundaries (since we expect that most relation descriptors consist of a single or a sequence of phrases).", "cite_spans": [], "ref_spans": [ { "start": 467, "end": 474, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Long-range features", "sec_num": null }, { "text": "Since the task of extracting relation descriptors is new, we are not aware of any data set that can be directly used to evaluate our methods. We therefore annotated two data sets for evaluation, one for the general \"employment\" relation and the other for the general \"personal/social\" relation. 2 The first data set contains 150 business articles from New York Times. The articles were crawled from the NYT website between November 2009 Description Feature Template Example single token wi+j (\u22122 \u2264 j \u2264 2) wi+1 (next token) is president single POS tag ti+j (\u22122 \u2264 j \u2264 2) ti (current POS tag) is DET single phrase tag pi+j (\u22122 \u2264 j \u2264 2) pi\u22121 (previous phrase boundary tag) is I-NP two consecutive tokens wi+j\u22121&wi+j (\u22121 \u2264 j \u2264 2) wi is the and wi+1 is president two consecutive POS tags ti+j\u22121&ti+j (\u22121 \u2264 j \u2264 2) ti is DET and ti+1 is N two consecutive phrase tags pi+j\u22121&pi+j (\u22121 \u2264 j \u2264 2) pi is B-NP and pi+1 is I-NP Table 2 : Linear-chain feature templates. Each feature is defined with respect to a particular (current) position in the sequence. i indicates the current position and j indicates the position relative to the current position. 
All features are defined using observations within a window size of 5 of the current position.", "cite_spans": [ { "start": 295, "end": 296, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 912, "end": 919, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data Preparation", "sec_num": "5.1" }, { "text": "Contextual Features word wr\u22121 or POS tag tr\u22121 preceding relation descriptor , REL word ws+1 or POS tag ts+1 following relation descriptor REL PREP", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category Feature Template Description Example", "sec_num": null }, { "text": "Path-based Features word or POS tag sequence between ARG1 and relation descriptor ARG1 is REL word or POS tag sequence between ARG2 and relation descriptor REL PREP ARG2 word or POS tag sequence containing ARG1, ARG2 and relation descriptor ARG2 's REL , ARG1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category Feature Template Description Example", "sec_num": null }, { "text": "Phrase Boundary whether relation descriptor violates phrase boundaries 1 or 0 Feature Table 3 : Long-range feature templates. r and s are the indices of the first word and the last word of the relation descriptor, respectively. and January 2010. After sentence segmentation and tokenization, we used the Stanford NER tagger (Finkel et al., 2005) to identify PER and ORG named entities from each sentence. For named entities that contain multiple tokens we concatenated them into a single token. 
We then took each pair of (PER, ORG) entities that occur in the same sentence as a single candidate relation instance, where the PER entity is treated as ARG-1 and the ORG entity is treated as ARG-2.", "cite_spans": [ { "start": 324, "end": 345, "text": "(Finkel et al., 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Category Feature Template Description Example", "sec_num": null }, { "text": "The second data set comes from a Wikipedia personal/social relation data set previously used in (Culotta et al., 2006) . The original data set does not contain annotations of relation descriptors such as \"sister\" or \"friend\" between the two PER arguments. We therefore also manually annotated this data set. Similarly, we performed sentence segmentation, tokenization and NER tagging, and took each pair of (PER, PER) entities occurring in the same sentence as a candidate relation instance. Because both arguments involved in the \"personal/social\" relation are PER entities, we always treat the first PER entity as ARG-1 and the second PER entity as ARG-2. 3 We go through each candidate relation instance to find whether there is an explicit sequence of words describing the relation between ARG-1 and ARG-2, and label the sequence of words, if any. Note that we only consider explicitly stated relation descriptors. If we cannot find such a relation descriptor, even if ARG-1 and ARG-2 actually have some kind of relation, we still label the instance as Nil. 
For example, in the instance \"he is the son of ARG1 and ARG2\", although we can infer that ARG-1 and ARG-2 have some family relation, we regard this as a negative instance.", "cite_spans": [ { "start": 96, "end": 118, "text": "(Culotta et al., 2006)", "ref_id": "BIBREF3" }, { "start": 658, "end": 659, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Category Feature Template Description Example", "sec_num": null }, { "text": "A relation descriptor may also contain multiple relations. For example, in the instance \"ARG1 is the CEO and president of ARG2\", we label \"the CEO and president\" as the relation descriptor, which actually contains two job titles, namely, CEO and president.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category Feature Template Description Example", "sec_num": null }, { "text": "Note that our annotated relation descriptors are not always nouns or noun phrases. An example is the third instance for personal/social relation in Table 1 , where the relation descriptor \"married\" is a verb and indicates a spouse relation.", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 155, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Category Feature Template Description Example", "sec_num": null }, { "text": "The total number of relation instances, the number of positive and negative instances as well as the number of distinct relation descriptors in each data set are summarized in Table 4 . 
", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Category Feature Template Description Example", "sec_num": null }, { "text": "We compare the following methods in our experiments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "\u2022 LC-CRF: This is the standard linear-chain CRF model with features described in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "\u2022 M-CRF-1: This is our modified linear-chain CRF model with the space of label sequences reduced but with features fixed to the same as those used in LC-CRF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "\u2022 M-CRF-2: This is M-CRF-1 with the addition of the contextual long-range features described in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "\u2022 M-CRF-3: This is M-CRF-2 with the addition of the path-based long-range features described in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "\u2022 M-CRF-4: This is M-CRF-3 with the addition of the phrase boundary long-range feature described in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "For the standard linear-chain CRF model, we use the package CRF++ 4 . 
We implement our own version of the modified linear-chain CRF models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "We perform 10-fold cross validation for all our experiments. For each data set we first randomly divide it into 10 subsets. Each time we take 9 subsets for training and the remaining subset for testing. We report the average performance across the 10 runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "Based on our preliminary experiments, we have found that using a smaller set of general POS tags instead of the Penn Treebank POS tag set could slightly improve the overall performance. We therefore only report the performance obtained using our POS tags. For example, we group NN, NNP, NNS and NNPS of the Penn Treebank set under a general tag N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "We evaluate the performance using two different criteria: overlap match and exact match. Overlap match is a more relaxed criterion: if the extracted relation descriptor overlaps with the true relation descriptor (i.e., having at least one token in common), it is considered correct. Exact match is a much stricter criterion: it requires that the extracted relation descriptor be exactly the same as the true relation descriptor in order to be considered correct. Given these two criteria, we can define accuracy, precision, recall and F1 measures. Accuracy is the percentage of candidate relation instances whose label sequence is considered correct. Both positive and negative instances are counted when computing accuracy. Because our data sets are quite balanced, it is reasonable to use accuracy. 
Precision, recall and F1 are defined in the usual way at the relation instance level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5.2" }, { "text": "In Table 5, we summarize the performance in terms of the various measures on the two data sets. For both the baseline linear-chain CRF model and our modified linear-chain CRF models, we have tuned the regularization parameters and show only the results using the optimal parameter values for each data set, chosen from \u03b2 = 10^\u03b3 for \u03b3 \u2208 [\u22123, \u22122, . . . , 2, 3].", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Method Comparison", "sec_num": "5.3" }, { "text": "First, we can see from the table that by reducing the label sequence space, M-CRF-1 can significantly outperform the baseline LC-CRF in terms of F1 in all cases. In terms of accuracy, there is significant improvement for the NYT data set but not for the Wikipedia data set. We also notice that for both data sets the advantage of M-CRF-1 is mostly evident in the improvement of recall. This shows that a larger number of true relation descriptors are extracted when the label sequence space is reduced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method Comparison", "sec_num": "5.3" }, { "text": "Next we see from the table that long-range features are also useful, and the improvement comes mostly from the path-based long-range features. In terms of both accuracy and F1, M-CRF-3 can significantly outperform M-CRF-1 in all settings. In this case, the improvement is a mixture of both precision and recall. This shows that by explicitly capturing the patterns between the two arguments and the relation descriptor, we can substantially improve the extraction performance.
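The two matching criteria and the instance-level measures can be sketched as follows (our own illustration; representing each instance's descriptor as a single (start, end) token span, with None for negative instances, is an assumption, since the paper's internal representation is not shown here).

```python
def overlap_match(pred, gold):
    # Relaxed criterion: spans share at least one token.
    # Spans are (start, end) with end exclusive.
    return pred[0] < gold[1] and gold[0] < pred[1]

def exact_match(pred, gold):
    # Strict criterion: identical span boundaries.
    return pred == gold

def prf1(preds, golds, match):
    # Instance-level precision/recall/F1: an extracted span counts as
    # correct if it matches the gold span under the given criterion.
    tp = sum(1 for p, g in zip(preds, golds)
             if p is not None and g is not None and match(p, g))
    n_pred = sum(1 for p in preds if p is not None)
    n_gold = sum(1 for g in golds if g is not None)
    prec = tp / n_pred if n_pred else 0.0
    rec = tp / n_gold if n_gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

On a toy example (invented), an extraction overlapping but not matching the gold span counts as correct only under the relaxed criterion, so overlap F1 upper-bounds exact F1.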
On the other hand, neither the contextual long-range features nor the phrase boundary long-range features exhibit any significant impact. (Table 5: Comparison of different methods on the New York Times data set and the Wikipedia data set. Accu., Prec., Rec. and F1 stand for accuracy, precision, recall and F1 measures, respectively. \u2020 indicates that the current value is statistically significantly better than the value in the previous row at a 0.95 level of confidence by a one-tailed paired T-test.) We hypothesize the following. The contextual long-range features have likely already been captured by the linear-chain features: for example, the long-range feature \"is REL\" is similar to the linear-chain feature \"w_{i\u22121} = is & y_i = B-R\". As for the phrase boundary long-range feature, since phrase boundary tags have also been used in the linear-chain features, this feature does not provide additional information. In addition, we have found that a large percentage of relation descriptors violate phrase boundaries: 22% in the NYT data set, and 29% in the Wikipedia data set. Therefore, it seems that phrase boundary information is not important for relation descriptor extraction. Overall, performance is much higher on the NYT data set than on the Wikipedia data set. Based on our observations during annotation, this is because the \"employment\" relations expressed in the NYT data set often follow some standard patterns, whereas in Wikipedia the \"personal/social\" relations can be expressed in more varied ways. The lower performance achieved on the Wikipedia data set suggests that extracting relation descriptors is not an easy task even under a supervised learning setting.", "cite_spans": [], "ref_spans": [ { "start": 589, "end": 596, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Method Comparison", "sec_num": "5.3" }, { "text": "Presumably relation descriptors that are not seen in the training data are harder to extract.
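The hypothesized redundancy between the contextual long-range feature and the linear-chain features can be illustrated with a toy sketch (our own; the example sentence and feature-string formats are invented, and the real templates are those of Tables 2 and 3, not reproduced here).

```python
# Whenever the contextual long-range feature fires ("is" immediately
# precedes the descriptor), the linear-chain feature w[i-1]=is & y[i]=B-R
# fires at the same position, so the long-range feature adds little signal.
def linear_chain_feature(words, labels, i):
    if i == 0:
        return None
    return "w[i-1]=%s&y[i]=%s" % (words[i - 1], labels[i])

def contextual_long_range_fires(words, labels):
    # Fires once per sequence if "is" directly precedes the B-R token.
    return any(lab == "B-R" and i > 0 and words[i - 1] == "is"
               for i, lab in enumerate(labels))

words = ["he", "is", "a", "board", "member"]   # invented example
labels = ["O", "O", "B-R", "I-R", "I-R"]
```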
We would therefore also like to see how well our model works on such unseen relation descriptors. We find that with 10-fold cross validation, for the NYT data set, on average our model is able to extract approximately 67% of the unseen relation descriptors in the test data under the exact match criterion. For the Wikipedia data set this percentage is approximately 27%. Both numbers are lower than the overall recall values the model can achieve on the entire test data, showing that unseen relation descriptors are indeed harder to extract. However, our model is still able to pick up new relation descriptors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method Comparison", "sec_num": "5.3" }, { "text": "In the previous experiments, we have used 90% of the data for training and the remaining 10% for testing. We now take a look at how the performance changes with different numbers of training instances. We vary the training data size from only a few instances (2, 5, and 10) to 20%, 40%, 60% and 80% of the entire data set. The results are shown in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 348, "end": 356, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Effect of Training Data Size", "sec_num": "5.4" }, { "text": "As expected, when the number of training instances is small, the performance on both data sets is low. The figure also shows that the Wikipedia data set is more difficult than the NYT data set. This is consistent with our observation in the previous section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effect of Training Data Size", "sec_num": "5.4" }, { "text": "The modified linear-chain CRF model consistently outperforms the baseline linear-chain CRF model. For a similar level of performance, the modified linear-chain CRF model requires less training data than the baseline linear-chain CRF model.
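The unseen-descriptor analysis described above can be sketched as follows (our own illustration, not the authors' evaluation script): recall is computed only over gold descriptors whose surface string never occurs as a descriptor in the training data.

```python
def unseen_recall(train_descriptors, test_gold, test_pred):
    # test_gold/test_pred are per-instance descriptor strings,
    # None for negative instances or empty extractions.
    seen = set(train_descriptors)
    unseen = [(g, p) for g, p in zip(test_gold, test_pred)
              if g is not None and g not in seen]
    if not unseen:
        return 0.0
    hits = sum(1 for g, p in unseen if p == g)  # exact match criterion
    return hits / len(unseen)
```

The descriptor strings below are invented examples to show the bookkeeping, not data from the annotated sets.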
For example, Figure 1(b) shows that the modified linear-chain CRF model achieves 0.72 F1 with about 215 training instances, while the baseline linear-chain CRF model requires about 480 training instances for a similar F1.", "cite_spans": [], "ref_spans": [ { "start": 251, "end": 262, "text": "Figure 1(b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Effect of Training Data Size", "sec_num": "5.4" }, { "text": "In this paper, we studied relation extraction under a new setting: the relation types are defined at a general level but more specific relation descriptors are desired. Based on the special properties of this new task, we found that standard linear-chain CRF models have some potential limitations for this task. We subsequently proposed some modifications to linear-chain CRFs in order to suit our task better. We annotated two data sets to evaluate our methods. The experiments showed that by restricting the space of possible label sequences and introducing certain long-range features, the modified linear-chain CRF model can perform significantly better than standard linear-chain CRFs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Currently our work is only based on evaluation on two data sets and on two general relations. In the future we plan to evaluate the methods on other general relations to test their robustness. We also plan to explore how this new relation extraction task can be used within other NLP or text mining applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Automatic Content Extraction http://www.itl.
nist.gov/iad/mig/tests/ace/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.mysmu.edu/faculty/jingjiang/data/IJCNLP2011.zip", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since many personal/social relations are asymmetric, ideally we should assign ARG-1 and ARG-2 based on their semantic meanings rather than their positions. Here we take a simple approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://crfpp.sourceforge.net/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This material is based on research sponsored by the Air Force Research Laboratory, under agreement number FA2386-09-1-4123. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Snowball: Extracting relations from large plain-text collections", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Fifth ACM Conference on Digital Libraries", "volume": "", "issue": "", "pages": "85--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections.
In Proceedings of the Fifth ACM Conference on Digital Libraries, pages 85-94, June.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The tradeoffs between open and traditional relation extraction", "authors": [ { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "28--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Banko and Oren Etzioni. 2008. The tradeoffs between open and traditional relation extraction. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 28-36.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A shortest path dependency kernel for relation extraction", "authors": [ { "first": "Razvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Human Language Technology Conference and the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "724--731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extraction.
In Proceedings of the Human Language Technology Conference and the Conference on Empirical Methods in Natural Language Processing, pages 724-731, October.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Integrating probabilistic extraction models and data mining to discover relations and patterns in text", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Betz", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "296--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta, Andrew McCallum, and Jonathan Betz. 2006. Integrating probabilistic extraction models and data mining to discover relations and patterns in text. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 296-303, June.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Incorporating non-local information into information extraction systems by Gibbs sampling", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "363--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling.
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 363-370, June.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Discovering relations among named entities from large corpora", "authors": [ { "first": "Takaaki", "middle": [], "last": "Hasegawa", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "415--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, pages 415-422, July.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning 5000 relational extractors", "authors": [ { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Congle", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "286--295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. Learning 5000 relational extractors.
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 286-295, July.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 18th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pages 282-289, June.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "Lance", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" }, { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Third ACL Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "82--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text chunking using transformation-based learning.
In Proceedings of the Third ACL Workshop on Very Large Corpora, pages 82-94.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "URES: An unsupervised Web relation extraction system", "authors": [ { "first": "Benjamin", "middle": [], "last": "Rosenfeld", "suffix": "" }, { "first": "Ronen", "middle": [], "last": "Feldman", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "667--674", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Rosenfeld and Ronen Feldman. 2006. URES: An unsupervised Web relation extraction system. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 667-674, July.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semi-Markov conditional random fields for information extraction", "authors": [ { "first": "Sunita", "middle": [], "last": "Sarawagi", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2005, "venue": "Advances in Neural Information Processing Systems", "volume": "17", "issue": "", "pages": "1185--1192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunita Sarawagi and William W. Cohen. 2005. Semi-Markov conditional random fields for information extraction.
In Advances in Neural Information Processing Systems 17, pages 1185-1192.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Preemptive information extraction using unrestricted relation discovery", "authors": [ { "first": "Yusuke", "middle": [], "last": "Shinyama", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "304--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 304-311, June.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Open information extraction using Wikipedia", "authors": [ { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "118--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Wu and Daniel S. Weld. 2010. Open information extraction using Wikipedia.
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 118-127, July.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Kernel methods for relation extraction", "authors": [ { "first": "Dmitry", "middle": [], "last": "Zelenko", "suffix": "" }, { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Richardella", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1083--1106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3:1083-1106.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Exploring syntactic features for relation extraction using a convolution tree kernel", "authors": [ { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "288--295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Zhang, Jie Zhang, and Jian Su. 2006. Exploring syntactic features for relation extraction using a convolution tree kernel.
In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 288-295, June.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Exploring various knowledge in relation extraction", "authors": [ { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "427--434", "other_ids": {}, "num": null, "urls": [], "raw_text": "GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 427-434, June.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Performance of LC-CRF and M-CRF-3 as the training data size increases.", "num": null, "type_str": "figure" }, "TABREF0": { "text": "ARG-1, a vice president at ARG-2, which ... a", "html": null, "type_str": "table", "content": "
Relation | Candidate Relation Instance | Relation Descriptor
Employment | ... said
", "num": null }, "TABREF2": { "text": "Number of instances in each data set. Positive instances are those that have an explicit relation descriptor. The last column shows the number of distinct relation descriptors.", "html": null, "type_str": "table", "content": "", "num": null } } } }