{ "paper_id": "D09-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:38:14.392951Z" }, "title": "Generalized Expectation Criteria for Bootstrapping Extractors using Record-Text Alignment", "authors": [ { "first": "Kedar", "middle": [], "last": "Bellare", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Amherst", "location": { "postCode": "01003", "region": "MA" } }, "email": "kedarb@cs.umass.edu" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Amherst", "location": { "postCode": "01003", "region": "MA" } }, "email": "mccallum@cs.umass.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Traditionally, machine learning approaches for information extraction require human annotated data that can be costly and time-consuming to produce. However, in many cases, there already exists a database (DB) with schema related to the desired output, and records related to the expected input text. We present a conditional random field (CRF) that aligns tokens of a given DB record and its realization in text. The CRF model is trained using only the available DB and unlabeled text with generalized expectation criteria. An annotation of the text induced from inferred alignments is used to train an information extractor. We evaluate our method on a citation extraction task in which alignments between DBLP database records and citation texts are used to train an extractor. Experimental results demonstrate an error reduction of 35% over a previous state-of-the-art method that uses heuristic alignments.", "pdf_parse": { "paper_id": "D09-1014", "_pdf_hash": "", "abstract": [ { "text": "Traditionally, machine learning approaches for information extraction require human annotated data that can be costly and time-consuming to produce. However, in many cases, there already exists a database (DB) with schema related to the desired output, and records related to the expected input text. We present a conditional random field (CRF) that aligns tokens of a given DB record and its realization in text. The CRF model is trained using only the available DB and unlabeled text with generalized expectation criteria. An annotation of the text induced from inferred alignments is used to train an information extractor. We evaluate our method on a citation extraction task in which alignments between DBLP database records and citation texts are used to train an extractor. Experimental results demonstrate an error reduction of 35% over a previous state-of-the-art method that uses heuristic alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A substantial portion of information on the Web consists of unstructured and semi-structured text. Information extraction (IE) systems segment and label such text to populate a structured database that can then be queried and mined efficiently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we mainly deal with information extraction from text fragments that closely resemble structured records. Examples of such texts include citation strings in research papers, contact addresses on person homepages and apartment listings in classified ads. 
Pattern matching and rule-based approaches for IE (Brin, 1998; Agichtein and Gravano, 2000; Etzioni et al., 2005) that only use specific patterns, and delimiter and font-based cues for segmentation are prone to failure on such data because these cues are generally not broadly reliable. Statistical machine learning methods such as hidden Markov models (HMMs) (Rabiner, 1989; Seymore et al., 1999; Freitag and McCallum, 1999) and conditional random fields (CRFs) (Lafferty et al., 2001; Peng and McCallum, 2004; Sarawagi and Cohen, 2005) have become popular approaches to address the text extraction problem. However, these methods require labeled training data, such as annotated text, which is often scarce and expensive to produce.", "cite_spans": [ { "start": 318, "end": 330, "text": "(Brin, 1998;", "ref_id": "BIBREF4" }, { "start": 331, "end": 359, "text": "Agichtein and Gravano, 2000;", "ref_id": "BIBREF1" }, { "start": 360, "end": 381, "text": "Etzioni et al., 2005)", "ref_id": "BIBREF9" }, { "start": 628, "end": 643, "text": "(Rabiner, 1989;", "ref_id": "BIBREF22" }, { "start": 644, "end": 665, "text": "Seymore et al., 1999;", "ref_id": "BIBREF27" }, { "start": 666, "end": 693, "text": "Freitag and McCallum, 1999)", "ref_id": "BIBREF10" }, { "start": 731, "end": 754, "text": "(Lafferty et al., 2001;", "ref_id": "BIBREF13" }, { "start": 755, "end": 779, "text": "Peng and McCallum, 2004;", "ref_id": "BIBREF21" }, { "start": 780, "end": 805, "text": "Sarawagi and Cohen, 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In many cases, however, there already exists a database with schema related to the desired output, and records that are imperfectly rendered in the available unlabeled text. This database can serve as a source of significant supervised guidance to machine learning methods. Previous work on using databases to train information extractors has taken one of three simpler approaches. In the first, a separate language model is trained on each column of the database and these models are then used to segment and label a given text sequence (Agichtein and Ganti, 2004; Canisius and Sporleder, 2007) . However, this approach does not model context, errors or different formats of fields in text, and requires large number of database entries to learn an accurate language model. The second approach (Sarawagi and Cohen, 2004; Michelson and Knoblock, 2005; Mansuri and Sarawagi, 2006) uses database or dictionary lookups in combination with similarity measures to add features to the text sequence. Although these features are very informative, learning algorithms still require annotated data to make use of them. The final approach heuristically labels texts using matching records and learns extractors from these annotations (Ramakrishnan and Mukherjee, 2004; Bellare and McCallum, 2007; Michelson and Knoblock, 2008) . 
Heuris-tic labeling decisions, however, are made independently without regard for the Markov dependencies among labels in text and are sensitive to subtle changes in text.", "cite_spans": [ { "start": 538, "end": 565, "text": "(Agichtein and Ganti, 2004;", "ref_id": "BIBREF0" }, { "start": 566, "end": 595, "text": "Canisius and Sporleder, 2007)", "ref_id": "BIBREF6" }, { "start": 795, "end": 821, "text": "(Sarawagi and Cohen, 2004;", "ref_id": "BIBREF24" }, { "start": 822, "end": 851, "text": "Michelson and Knoblock, 2005;", "ref_id": "BIBREF18" }, { "start": 852, "end": 879, "text": "Mansuri and Sarawagi, 2006)", "ref_id": "BIBREF16" }, { "start": 1224, "end": 1258, "text": "(Ramakrishnan and Mukherjee, 2004;", "ref_id": "BIBREF23" }, { "start": 1259, "end": 1286, "text": "Bellare and McCallum, 2007;", "ref_id": "BIBREF2" }, { "start": 1287, "end": 1316, "text": "Michelson and Knoblock, 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here we propose a method that automatically induces a labeling of an input text sequence using a word alignment with a matching database record. This induced labeling is then used to train a text extractor. Our approach has several advantages over previous methods. First, we are able to model field ordering and context around fields by learning an extractor from annotations of the text itself. Second, a probabilistic model for word alignment can exploit dependencies among alignments, and is also robust to errors, formatting differences, and missing fields in text and the record.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our word alignment model is a conditional random field (CRF) (Lafferty et al., 2001 ) that generates alignments between tokens of a text sequence and a matching database record. The structure of the graphical model resembles IBM Model 1 (Brown et al., 1993) in which each target (record) word is assigned one or more source (text) words. The alignment is generated conditioned on both the record and text sequence, and therefore supports large sets of rich and nonindependent features of the sequence pairs. Our model is trained without the need for labeled word alignments by using generalized expectation (GE) criteria (Mann and McCallum, 2008) that penalize the divergence of specific model expectations from target expectations. Model parameters are estimated by minimizing this divergence. To limit over-fitting we include a L 2 -regularization term in the objective. The model expectations in GE criteria are taken with respect to a set of alignment latent variables that are either specific to each sequence pair (local) or summarizing the entire data set (global). This set is constructed by including all alignment variables a that satisfy a certain binary feature (e.g., f (a, x 1 , y 1 , x 2 ) = 1, for labeled record (x 1 , y 1 ), and text sequence x 2 ). 
One example global criterion is that \"an alignment exists between two orthographically similar 1 words 95% of the time.\" Here the criterion has a target expectation of 95% and is defined over alignments", "cite_spans": [ { "start": 61, "end": 83, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF13" }, { "start": 237, "end": 257, "text": "(Brown et al., 1993)", "ref_id": "BIBREF5" }, { "start": 621, "end": 646, "text": "(Mann and McCallum, 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "{a = i, j | x 1 [i] \u223c x 2 [j], \u2200x 1 , x 2 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another criterion for extraction can be \"the word 'EMNLP' is always aligned with the record label booktitle\". This criterion has a target of 100% and defined for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "{a = i, j | y 1 [i] = booktitle \u2227 x 2 [j] = 'EMNLP', \u2200y 1 , x 2 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One-to-one correspondence between words in the sequence pair can be specified as collection of local expectation constraints. Since we directly encode prior knowledge of how alignments behave in our criteria, we obtain sufficiently accurate alignments with little supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We apply our method to the task of citation extraction. The input to our training algorithm is a set of matching DBLP 2 -record/citation-text pairs and global GE criteria 3 of the following two types: (1) alignment criteria that consider features of mapping between record and text words, and, (2) extraction criteria that consider features of the schema label assigned to a text word. In our experiments, the parallel record-text pairs are collected manually but this process can be automated using systems that match text sequences to records in the DB (Michelson and Knoblock, 2005; Michelson and Knoblock, 2008) . Such systems achieve very high accuracy close to 90% F1 on semi-structured domains similar to ours. 4 Our trained alignment model can be used to directly align new record-text pairs to create a labeling of the texts. Empirical results demonstrate a 20.6% error reduction in token labeling accuracy compared to a strong baseline method that employs a set of high-precision alignments. Furthermore, we provide a 63.8% error reduction compared to IBM Model 4 (Brown et al., 1993) . Alignments learned by our model are used to train a linear-chain CRF extractor. We obtain an error reduction of 35.1% over a previous state-of-the-art extraction method that uses heuristically generated alignments.", "cite_spans": [ { "start": 555, "end": 585, "text": "(Michelson and Knoblock, 2005;", "ref_id": "BIBREF18" }, { "start": 586, "end": 615, "text": "Michelson and Knoblock, 2008)", "ref_id": "BIBREF19" }, { "start": 718, "end": 719, "text": "4", "ref_id": null }, { "start": 1074, "end": 1094, "text": "(Brown et al., 1993)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here we provide a brief description of the recordtext alignment task. For the sake of clarity and space, we describe our approach on a fictional restaurant address data set. 
The input to our system is a database (DB) consisting of records (possibly containing errors) and corresponding texts that are realizations of these DB records. An example of a matching record-text pair is shown in Table 1 . This example displays the differences between the record and text: (1) spelling errors: katsu \u2192 katzu, (2) word insertions (restaurant), deletions (1972), substitutions (angeles \u2192 feliz), (3) abbreviations (avenue \u2192 ave.), (4) missing fields in text (phone=665-1891), and (5) extra fields in text (state=california). These discrepancies plus the unknown ordering of fields within text can be addressed through word alignment. An example word alignment between the record and text is shown in Table 2 . Tokenization of record/text string is based on whitespace characters. We add a special *null* token at the field boundaries for each label in the schema to model word insertions. The record sequence is obtained by concatenating individual fields according to the DB schema order. As in statistical word alignment, we assume the DB record to be our source and the text to be our target. The induced labeling of the text is given by (name, address, address, address, city, city, state) which can be used to train an information extractor. In the next section, we present our approach to address this task.", "cite_spans": [], "ref_spans": [ { "start": 389, "end": 396, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 891, "end": 898, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Record-Text Alignment", "sec_num": "2" }, { "text": "We first define notation that will be used throughout this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Let (x 1 , y 1 ) be a database record with token sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "x 1 = x 1 [1], x 1 [2], . . . , x 1 [m] and label sequence y 1 = y 1 [1], y 1 [2], . . . , y 1 [m] with y 1 [ * ] \u2208 Y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "where Y is the database schema.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "x 2 = x 2 [1], x 2 [2], . . . , x 2 [n]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "be the text sequence. Let a = a 1 , a 2 , . . . , a n be an alignment sequence of same length as the target text sequence. The alignment a i = j assigns the DB token-label pair", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "(x 1 [j], y 1 [j]) to the text token x 2 [i].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Our conditional random field (CRF) for alignment has a graphical model structure that resembles that of IBM Model 1 (Brown et al., 1993) . 
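Before giving the model, here is a minimal sketch of how a fixed word alignment induces a labeling of the text in the restaurant example above; the *null*-token layout and the particular alignment indices below are illustrative assumptions, not output of the actual system:

    # Record tokens with their schema labels; *null* tokens model insertions
    # and fields missing on one side (a sketch of the example above).
    record = [('katsu', 'name'), ('*null*', 'name'),
              ('1972', 'address'), ('hillhurst', 'address'),
              ('avenue', 'address'), ('*null*', 'address'),
              ('los', 'city'), ('angeles', 'city'), ('*null*', 'city'),
              ('*null*', 'state'),
              ('665-1891', 'phone'), ('*null*', 'phone')]
    text = ['katzu', 'restaurant', 'hillhurst', 'ave.', 'los', 'feliz', 'california']
    alignment = [0, 5, 3, 4, 6, 7, 9]          # one record index per text token
    induced = [record[j][1] for j in alignment]
    # induced == ['name', 'address', 'address', 'address', 'city', 'city', 'state']

The induced label sequence is the kind of annotation used to train the extractor; the rest of this section describes the model that predicts such alignments.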
The CRF is an undirected graphical model that defines a probability distribution over alignment sequences a conditioned on the inputs (x 1 , y 1 , x 2 ) as:", "cite_spans": [ { "start": 116, "end": 136, "text": "(Brown et al., 1993)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p \u0398 (a|x 1 , y 1 , x 2 ) = exp( P n t=1 \u0398 f (at,x 1 ,y 1 ,x 2 ,t)) Z \u0398 (x 1 ,y 1 ,x 2 ) ,", "eq_num": "(1)" } ], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "where f (a t , x 1 , y 1 , x 2 , t) are feature functions defined over the alignments and inputs, \u0398 are the model parameters and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "Z \u0398 (x 1 , y 1 , x 2 ) = a exp( n t=1 \u0398 f (a t , x 1 , y 1 , x 2 , t))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "is the partition function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "The feature vector f (a t , x 1 , y 1 , x 2 , t) is the concatenation of two types of feature functions: (1) alignment features f align (a t , x 1 , x 2 , t) defined on source-target tokens, and, (2) extraction features f extr (a t , y 1 , x 2 , t) defined on source labels and target text. To obtain the probability of an alignment in a particular position t we marginalize out the alignments over the rest of the positions {1, . . . , n}\\{t},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "p \u0398 (a t |x 1 , y 1 , x 2 ) = {a[1...n]}\\{at} p \u0398 (a|x 1 , y 1 , x 2 ) = exp(\u0398 f (a t , x 1 , y 1 , x 2 , t)) exp( a \u0398 f (a , x 1 , y 1 , x 2 , t)) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "Furthermore, the marginal over label y t assigned to the text token x 2 [t] at time step t during alignment is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p \u0398 (y t |x 2 ) = {at|y 1 [at]=yt} p \u0398 (a t |x 1 , y 1 , x 2 ),", "eq_num": "(3)" } ], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "where {a t | y 1 [a t ] = y t } is the set of alignments that result in a labeling y t for token", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "x 2 [t]. 
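Because the model is zero-order (each factor touches a single a_t), the partition function factorizes over positions, the marginal of Equation (2) reduces to an independent softmax at each text position, and Equation (3) is then a simple sum over record positions. A minimal numpy sketch under that factorization; feats(j, t) is a stand-in for the feature vector f(a_t = j, x_1, y_1, x_2, t) and is assumed to be supplied by the caller:

    import numpy as np

    def alignment_marginals(theta, feats, m, n):
        # p(a_t = j): a softmax over the m record positions, computed
        # independently at each of the n text positions (Equation 2).
        P = np.zeros((n, m))
        for t in range(n):
            scores = np.array([theta @ feats(j, t) for j in range(m)])
            scores -= scores.max()               # stabilize the exponentials
            P[t] = np.exp(scores) / np.exp(scores).sum()
        return P

    def label_marginals(P, record_labels, schema):
        # p(y_t = y): sum alignment marginals over record positions whose
        # label is y (Equation 3).
        n, m = P.shape
        Q = np.zeros((n, len(schema)))
        for t in range(n):
            for j in range(m):
                Q[t, schema.index(record_labels[j])] += P[t, j]
        return Q

With m record tokens and n text tokens, exact marginals therefore cost O(nm) feature-score evaluations per record-text pair.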
Hence- forth, we abbreviate p \u0398 (a t |x 1 , y 1 , x 2 ) to p \u0398 (a t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "The gradient of p \u0398 (a t ) with respect to parameters \u0398 is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "\u2202p \u0398 (a t ) \u2202\u0398 = p \u0398 (a t ) f (a t , x 1 , y 1 , x 2 , t) \u2212E p \u0398 (a) f (a, x 1 , y 1 , x 2 , t) ,(4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "where the expectation term in the above equation sums over all alignments a at position t. We use the Baum-Welch and Viterbi algorithms to compute marginal probabilities and best alignment sequences respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Field for Alignment", "sec_num": "3.1" }, { "text": "Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D = (x (1) 1 , y (1) 1 , x", "eq_num": "(1)" } ], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "2 ), . . . , (x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(K) 1 , y (K) 1 , x", "eq_num": "(K)" } ], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "2 ) be a data set of K record-text pairs gathered manually or automatically through matching (Michelson and Knoblock, 2005; Michelson and Knoblock, 2008) . A global expectation criterion is defined on the set of alignment latent variables", "cite_spans": [ { "start": 93, "end": 123, "text": "(Michelson and Knoblock, 2005;", "ref_id": "BIBREF18" }, { "start": 124, "end": 153, "text": "Michelson and Knoblock, 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "A f = {a|f (a, x (i) 1 , y (i) 1 , x (i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "2 ) = 1, \u2200i = 1 . . . K} on the entire data set that satisfy a given binary feature f (a, x 1 , y 1 , x 2 ). Similarly a local expectation criterion is defined only for a specific instance (x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "(i) 1 , y (i) 1 , x (i) 2 ) with the set A f = {a|f (a, x (i) 1 , y (i) 1 , x (i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "2 ) = 1}. 
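A minimal sketch of how such a constraint set can be collected; the (i, t, j) triples and the binary_feature signature are illustrative choices, not the actual implementation:

    def constraint_set(data, binary_feature):
        # Gather every alignment variable a_t = j whose binary feature fires,
        # across all record-text pairs; restricting the outer loop to a single
        # pair gives a local rather than a global criterion.
        A_f = []
        for i, (x1, y1, x2) in enumerate(data):
            for t in range(len(x2)):
                for j in range(len(x1)):
                    if binary_feature(x1, y1, x2, j, t):
                        A_f.append((i, t, j))
        return A_f

For the orthographic-similarity criterion from the introduction, binary_feature would test whether x1[j] and x2[t] are within a small edit distance, and the associated target expectation would be 0.95.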
For a feature function f , a target expectation p, and, a weight w, our criterion minimizes the squared divergence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2206(f, p, w, \u0398) = w E p \u0398 (A f ) |A f | \u2212 p 2 ,", "eq_num": "(5)" } ], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "E p \u0398 (A f ) = a\u2208A f p \u0398 (a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "is the sum of marginal probabilities given by Equation (2) and |A f | is the size of the variable set. The weight w influences the importance of satisfying a given expectation criterion. Equation 5is an instance of generalized expectation criteria (Mann and Mc-Callum, 2008 ) that penalizes the divergence of a specific model expectation from a given target value. The gradient of the divergence with respect to \u0398 is given by,", "cite_spans": [ { "start": 248, "end": 273, "text": "(Mann and Mc-Callum, 2008", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2202\u2206(f, p, w, \u0398) \u2202\u0398 = 2w E p \u0398 (A f ) |A f | \u2212 p \u00d7 \uf8ee \uf8f0 1 |A f | a\u2208A f \u2202p \u0398 (a) \u2202\u0398 \u2212 p \uf8f9 \uf8fb ,", "eq_num": "(6)" } ], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "where the gradient \u2202p \u0398 (a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Criteria and Parameter Estimation", "sec_num": "3.2" }, { "text": "is given by Eq. (4). Given expectation criteria C = F, P, W with a set of binary feature functions F = f 1 , . . . , f l , target expectations P = p 1 , . . . , p l and weights W = w 1 , . . . , w l , we maximize the objective", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2202\u0398", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O(\u03b8; D, C) = max \u0398 \u2212 l i=1 \u2206(f i , p i , w i , \u0398)\u2212 ||\u0398|| 2 2 ,", "eq_num": "(7)" } ], "section": "\u2202\u0398", "sec_num": null }, { "text": "where ||\u0398|| 2 /2 is the regularization term added to limit over-fitting. Hence the gradient of the objective is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2202\u0398", "sec_num": null }, { "text": "\u2202O(\u03b8; D, C) \u2202\u0398 = \u2212 l i=1 \u2202\u2206(f i , p i , w i , \u0398) \u2202\u0398 \u2212 \u0398.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2202\u0398", "sec_num": null }, { "text": "We maximize our objective (Equation 7) using the L-BFGS algorithm. It is sometimes necessary to restart maximization after resetting the Hessian calculation in L-BFGS due to non-convexity of our objective. 
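Putting the pieces together, a schematic sketch of the training objective; marginals_fn, the criteria triples, and the l2 constant are illustrative, and a real implementation would also pass the analytic gradient of Equation (6) to L-BFGS rather than rely on numerical differences:

    import numpy as np

    def ge_penalty(marginals, target, weight):
        # Squared divergence of Equation (5): marginals holds p(a) for every
        # alignment variable in the constraint set A_f, so their mean is the
        # model expectation divided by |A_f|.
        return weight * (np.mean(marginals) - target) ** 2

    def neg_objective(theta, criteria, l2=1.0):
        # Negation of Equation (7): sum of GE penalties plus L2 regularization,
        # written as a minimization problem. Each marginals_fn(theta) recomputes
        # p(a) for its constraint set under the current parameters, e.g. with a
        # routine like the marginal sketch earlier.
        total = 0.5 * l2 * float(theta @ theta)
        for marginals_fn, target, weight in criteria:
            total += ge_penalty(marginals_fn(theta), target, weight)
        return total
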
5 Also, non-convexity may lead to a local instead of a global maximum. Our experiments show that local maxima do not adversely affect performance since our accuracy is within 4% of a model trained with gold-standard labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2202\u0398", "sec_num": null }, { "text": "The alignment CRF (AlignCRF) model described in Section 3.1 is able to predict labels for a text sequence given a matching DB record. However, without corresponding records for texts the model does not perform well as an extractor because it has learned to rely on the DB record and alignment features (Sutton et al., 2006) . Hence, we train a separate linear-chain CRF on the alignmentinduced labels for evaluation as an extractor. The extraction CRF (ExtrCRF) employs a fully-connected state machine with a unique state per label y \u2208 Y in the database schema. The CRF induces a conditional probability distribution over label sequences y = y 1 , . . . , y n and input text sequences", "cite_spans": [ { "start": 302, "end": 323, "text": "(Sutton et al., 2006)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "x = x 1 , . . . , x n as p \u039b (y|x) = exp n t=1 \u039b g(y t\u22121 , y t , x, t) Z \u039b (x) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "(8) In comparison to our earlier zero-order AlignCRF model, our ExtrCRF is a first-order model. All the feature functions in this model g(y t\u22121 , y t , x, t) are a conjunction of the label pair (y t\u22121 , y t ) and input observational features. Z \u039b (x) in the equation above is the partition function. Inference in the model is performed using the Viterbi algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "Given expectation criteria C and data set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "D = (x (1) 1 , y (1) 1 , x (1) 2 ), . . . , (x (K) 1 , y (K) 1 , x (K)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "2 ) , we first estimate the parameters \u0398 of AlignCRF model as described in Section 3.2. Next, for all text sequences x 2 ), \u2200t using Equation 3. To estimate parameters \u039b we minimize the KL-divergence between", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "p \u0398 (y|x) = n t=1 p \u0398 (y t |x) and p \u039b (y|x) for all sequences x, KL(p \u0398 p \u039b ) = y p \u0398 (y|x) log( p \u0398 (y|x) p \u039b (y|x) ) = H(p \u0398 (y|x)) \u2212 t,y t\u22121 ,yt E p \u0398 (y t\u22121 ,yt) [\u039b g(y t\u22121 , y t , x, t)] + log(Z \u039b (x)). 
(9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "The gradient of the above equation is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2202KL \u2202\u039b = t,y t\u22121 ,yt E p \u039b (y t\u22121 ,yt|x) [ g(y t\u22121 , y t , x, t)] \u2212E p \u0398 (y t\u22121 ,yt|x) [ g(y t\u22121 , y t , x, t)].", "eq_num": "(10)" } ], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "Both the expectations can be computed using the Baum-Welch algorithm. The parameters \u039b are estimated for a given data set D and learned parameters \u0398 by optimizing the objective", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "O(\u039b; D, \u0398) = min \u039b K i=1 KL(p \u0398 (y|x (i) 2 ) p \u039b (y|x (i) 2 ) + \u039b 2 /2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "The objective is minimized using L-BFGS. Since the objective is convex we are guaranteed to obtain a global minima.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRF for Extraction", "sec_num": "3.3" }, { "text": "In this section, we present details about the application of our method to citation extraction task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Data set. We collected a set of 260 random records from the DBLP bibliographic database. The schema of DBLP has the following labels {author, editor, address, title, booktitle, pages, year, journal, volume, number, month, url, ee, cdrom, school, publisher, note, isbn, chapter, se-ries}. The complexity of our alignment model depends on the number of schema labels and number of tokens in the DB record. We reduced the number of schema labels by: (1) mapping the labels address, booktitle, journal and school to venue, (2) mapping month and year to date, and (3) dropping the fields url, ee, cdrom, note, isbn and chapter, since they never appeared in citation texts. We also added the other label O for fields in text that are not represented in the database. Therefore, our final DB schema is {author, title, date, venue, volume, number, pages, editor, publisher, series, O}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For each DBLP record we searched on the web for matching citation texts using the first author's last name and words in the title. Each citation text found is manually labeled for evaluation purposes. An example of a matching DBLP record-citation text pair is shown in Table 3 . Our data set 6 contains 522 record-text pairs for 260 DBLP entries.", "cite_spans": [], "ref_spans": [ { "start": 269, "end": 276, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Features and Constraints. We use a variety of rich, non-independent features in our models to optimize system performance. 
The input features in our models are of the following two types: (a) Extraction features in the AlignCRF model (f (a t , y 1 , x 2 , t)) and ExtrCRF model (g(y t\u22121 , y t , x, t)) are conjunctions of assigned labels and observational tests on text sequence at time step t. The following observational tests are used: (1) regular expressions to detect tokens containing all characters (ALLCHAR), all digits (ALLDIGITS) or both digits and characters (AL-PHADIGITS), (2) number of characters or digits in the token (NUMCHAR=3, NUMDIGITS=1), (3) domain-specific patterns for date and pages, (4) token identity, suffixes, prefixes and character ngrams, (5) presence of a token in lexicons such as \"last names,\" \"publisher names,\" \"cities,\" (6) lexicon features within a window of 10, (7) regular 6 The data set can be found at http://www.cs.umass.edu/\u223ckedarb/dbie cite data.sgml. (f (a t , x 1 , x 2 , t) ) that operate on the aligned source token x 1 [a t ] and target token x 2 [t]. Again the observational tests used for alignment are: (1) exact token match tests whether the source-target tokens are string identical, (2) approximate token match produces a binary feature after binning the Jaro-Winkler edit distance (Cohen et al., 2003) between the tokens, (3) substring token match tests whether one token is a substring of the other, (4) prefix token match returns true if the prefixes match for lengths {1, 2, 3, 4}, (5) suffix token match returns true if the prefixes match for lengths {1, 2, 3, 4}, and (6) exact and approximate token matches at offsets {\u22121, \u22121} and {+1, +1} around the alignment.", "cite_spans": [ { "start": 913, "end": 914, "text": "6", "ref_id": null }, { "start": 1338, "end": 1358, "text": "(Cohen et al., 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 997, "end": 1021, "text": "(f (a t , x 1 , x 2 , t)", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Thus, a conditional model lets us use these arbitrary helpful features that cannot be exploited tractably in a generative model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DBLP record", "sec_num": null }, { "text": "As is common practice (Haghighi and Klein, 2006; Mann and McCallum, 2008) , we simulate user-specified expectation criteria through statistics on manually labeled citation texts. For extraction criteria, we select for each label, the top N extraction features ordered by mutual information (MI) with that label. Also, we aggregate the alignment features of record tokens whose alignment with a target text token results in a correct label assignment. The top N alignment features that have maximum MI with this correct labeling are selected as alignment criteria. We bin target expectations of these criteria into 11 bins as [0.05, 0.1, 0.2, 0.3, . . . , 0.9, 0.95]. 7 In our experiments, we set N = 10 and use a fixed weight w = 10.0 for all expectation criteria (no tuning of parameters was performed). Table 4 shows a sample of GE criteria used in our experiments. 
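A minimal sketch of a handful of the observational tests described above; the exact feature templates and bins of the actual system are not reproduced, and difflib is used only as a rough stand-in for the Jaro-Winkler similarity:

    import re
    from difflib import SequenceMatcher

    def extraction_tests(token):
        # Token-level tests used in extraction features (illustrative subset).
        return {
            'ALLDIGITS': bool(re.fullmatch(r'[0-9]+', token)),
            'ALLCHAR': bool(re.fullmatch(r'[A-Za-z]+', token)),
            'ALPHADIGITS': bool(re.search(r'[0-9]', token) and re.search(r'[A-Za-z]', token)),
            'NUMCHAR=' + str(len(token)): True,
        }

    def alignment_tests(src, tgt):
        # Pairwise tests on a candidate record/text token alignment.
        sim = SequenceMatcher(None, src.lower(), tgt.lower()).ratio()
        return {
            'EXACT_MATCH': src == tgt,
            'SUBSTRING_MATCH': src in tgt or tgt in src,
            'PREFIX4_MATCH': src[:4] == tgt[:4],
            'SIM_BIN=' + str(round(sim, 1)): True,
        }

Each test that fires is conjoined with the candidate label (for extraction features) or applied to the aligned record/text token pair (for alignment features).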
8 7 Mann and McCallum (2008) note that GE criteria are robust to deviation of specified targets from actual expectations.", "cite_spans": [ { "start": 22, "end": 48, "text": "(Haghighi and Klein, 2006;", "ref_id": "BIBREF12" }, { "start": 49, "end": 73, "text": "Mann and McCallum, 2008)", "ref_id": "BIBREF15" }, { "start": 872, "end": 896, "text": "Mann and McCallum (2008)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 805, "end": 812, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "DBLP record", "sec_num": null }, { "text": "8 A complete list of expectation criteria is available at http://www.cs.umass.edu/\u223ckedarb/dbie expts.txt. Experimental Setup. Our experiments use a 3:1 split of the data for training and testing. We repeat the experiment 20 times with different random splits of the data. We train the AlignCRF model using the training data and the automatically created expectation criteria (Section 3.2). We evaluate our alignment model indirectly in terms of token labeling accuracy (i.e., percentage of correctly labeled tokens in test citation data) since we do not have annotated alignments. The alignment model is then used to train a ExtrCRF model as described in Section 3.3. Again, we use token labeling accuracy for evaluation. We also measure F1 performance as the harmonic mean of precision and recall for each label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DBLP record", "sec_num": null }, { "text": "We compare our method against alternate approaches that either learn alignment or extraction models from training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternate approaches", "sec_num": "4.1" }, { "text": "Alignment approaches. We use GIZA++ (Och and Ney, 2003) to train generative directed alignment models: HMM and IBM Model4 (Brown et al., 1993) from training record-text pairs. These models are currently being used in state-of-the-art machine translation systems. Alignments between matching DB records and text sequences are then used for labeling at test time.", "cite_spans": [ { "start": 36, "end": 55, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF20" }, { "start": 122, "end": 142, "text": "(Brown et al., 1993)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Alternate approaches", "sec_num": "4.1" }, { "text": "Extraction approaches. The first alternative (DB-CRF) trains a linear-chain CRF for extraction on fields of the database entries only. Each field of the record is treated as a separate labeled text sequence. Given an unlabeled text sequence, it is segmented and labeled using the Viterbi algorithm. This method is an enhanced representative for (Agichtein and Ganti, 2004) in which a language model is trained for each column of the DB. Another alternative technique constructs partially annotated text data using the matching records and a labeling function. The labeling function employs high-precision alignment rules to assign labels to text tokens using labeled record tokens. We use exact and approximate token matching rules to create a partially labeled sequence, skipping tokens that cannot be unambiguously labeled. In our experiments, we achieve a precision of 97% and a recall of 70% using these rules. Given a partially annotated citation text, we train a linear-chain CRF by maximizing the marginal likelihood of the observed labels. This marginal CRF training method (Bellare and Mc-Callum, 2007 ) (M-CRF) was the previous stateof-the-art on this data set. 
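A minimal sketch of such a high-precision labeling function, using exact matches only; the actual rule set also includes approximate matches and is what yields the 97% precision and 70% recall reported above:

    def partial_labels(record_tokens, record_labels, text_tokens):
        # Label a text token only when all of its exact matches in the record
        # agree on a single label; everything else stays unlabeled (None) and
        # is marginalized out during M-CRF training.
        labels = []
        for tok in text_tokens:
            hits = {record_labels[j] for j, r in enumerate(record_tokens)
                    if r.lower() == tok.lower()}
            labels.append(hits.pop() if len(hits) == 1 else None)
        return labels
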
Additionally, if a matching record is available for a test citation text, we can partially label tokens and use constrained Viterbi decoding with labeled positions fixed at their observed values (M+R-CRF approach).", "cite_spans": [ { "start": 345, "end": 372, "text": "(Agichtein and Ganti, 2004)", "ref_id": "BIBREF0" }, { "start": 1082, "end": 1110, "text": "(Bellare and Mc-Callum, 2007", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Alternate approaches", "sec_num": "4.1" }, { "text": "Our third approach is similar to (Mann and Mc-Callum, 2008) . We create extraction expectation criteria from labeled text sequences in the training data and uses these criteria to learn a linear-chain CRF for extraction (MM08). The performance achieved by this approach is an upper bound on methods that: (1) use labeled training records to create extraction criteria, and, (2) only use extraction criteria without any alignment criteria.", "cite_spans": [ { "start": 33, "end": 59, "text": "(Mann and Mc-Callum, 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Alternate approaches", "sec_num": "4.1" }, { "text": "Finally, we train a supervised linear-chain CRF (GS-CRF) using the labeled text sequences from the training set. This represents an upper bound on the performance that can be achieved on our task. All the extraction methods have access to the same features as the ExtrCRF model. 4) with an error reduction of 63.8%. Our conjecture is that Model4 is getting stuck in sub-optimal local maxima during EM training since our training set only contains hundreds of parallel record-text pairs. This problem may be alleviated by training on a large parallel corpus. Additionally, our alignment model is superior to Model4 since it leverages rich non-independent features of input sequence pairs. Table 6 shows the performance of various extraction methods. Except M+R-CRF, all extraction approaches, do not use any record information at test time. In comparison to the previous stateof-the-art M-CRF, the ExtrCRF method provides an error reduction of 35.1%. ExtrCRF also produces an error reduction of 21.7% compared to M+R-CRF without the use of matching records. These reductions are significant at level p = 0.005 using the two-tailed t-test. Training only on DB records is not helpful for extraction as we do not learn the transition structure 9 and additional context information 10 in text. This explains the low accuracy of the DB-CRF method. Furthermore, the MM08 approach (Mann and McCallum, 2008) Note that we do not observe a decrease in performance of ExtrCRF over AlignCRF although we are not using the test records during decoding. This is because: (1) a first-order model in Extr-CRF improves performance compared to a zeroorder model in AlignCRF and (2) the use of noisy DB records in the test set for alignment often increases extraction error.", "cite_spans": [ { "start": 1373, "end": 1398, "text": "(Mann and McCallum, 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 279, "end": 281, "text": "4)", "ref_id": null }, { "start": 688, "end": 695, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Alternate approaches", "sec_num": "4.1" }, { "text": "Both our models have a high F1 value for the other label O because we provide our algorithm with constraints for the label O. In contrast, since there is no realization of the O field in the DB records, both M-CRF and M+R-CRF methods fail to label such tokens correctly. 
Our alignment model trained using expectation criteria achieves an accuracy of 92.7%, close to the gold-standard-trained GS-CRF (96.5%). Furthermore, ExtrCRF obtains an accuracy of 92.8%, similar to AlignCRF but without access to DB records, due to better modeling of transition structure and context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Recent research in information extraction (IE) has focused on reducing the labeling effort needed to train supervised IE systems. For instance, Grenager et al. (2005) perform unsupervised HMM learning for field segmentation, and bias the model to prefer self-transitions and transitions on boundary tokens. Unfortunately, such unsupervised IE approaches do not attain performance close to state-of-the-art supervised methods. Semi-supervised approaches that learn a model with only a few constraints specifying prior knowledge have generated much interest. Haghighi and Klein (2006) assign each label in the model certain prototypical features and train a Markov random field for sequence tagging from these labeled features. In contrast, our method uses GE criteria (Mann and McCallum, 2008), which allow soft labeling of features with target expectation values, to train conditional models with complex and non-independent input features. Additionally, in comparison to previous methods, an information extractor trained from our record-text alignments achieves an accuracy of 93%, making it useful for real-world applications. Chang et al. (2007) use beam search for decoding unlabeled text with soft and hard constraints, and train a model with top-K decoded label sequences. However, this model requires a large number of labeled examples (e.g., 300 annotated citations) to bootstrap itself. Active learning is another popular approach for reducing annotation effort. Settles and Craven (2008) provide a comparison of various active learning strategies for sequence labeling tasks. We have shown, however, that in domains where a database can provide significant supervision, one can bootstrap accurate extractors with very little human effort.", "cite_spans": [ { "start": 144, "end": 166, "text": "Grenager et al. (2005)", "ref_id": "BIBREF11" }, { "start": 558, "end": 583, "text": "Haghighi and Klein (2006)", "ref_id": "BIBREF12" }, { "start": 768, "end": 793, "text": "(Mann and McCallum, 2008)", "ref_id": "BIBREF15" }, { "start": 1123, "end": 1142, "text": "Chang et al. (2007)", "ref_id": "BIBREF7" }, { "start": 1464, "end": 1489, "text": "Settles and Craven (2008)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Another area of research, related to the task described in our paper, is learning extractors from database records. These records are also known as field books and reference sets in the literature (Canisius and Sporleder, 2007; Michelson and Knoblock, 2008) . Both Agichtein and Ganti (2004) and Canisius and Sporleder (2007) train a language model for each database column. The language modeling approach is sensitive to word re-orderings in text and other variability present in real-world text (e.g., abbreviation). We allow for word and field re-orderings through alignments and model complex transformations through feature functions. Michelson and Knoblock (2008) extract information from unstructured texts using a rule-based approach to align segments of text with fields in a DB record. 
Our probabilistic alignment approach is more robust and uses rich features of the alignment to obtain high performance.", "cite_spans": [ { "start": 193, "end": 223, "text": "(Canisius and Sporleder, 2007;", "ref_id": "BIBREF6" }, { "start": 224, "end": 253, "text": "Michelson and Knoblock, 2008)", "ref_id": "BIBREF19" }, { "start": 261, "end": 287, "text": "Agichtein and Ganti (2004)", "ref_id": "BIBREF0" }, { "start": 292, "end": 321, "text": "Canisius and Sporleder (2007)", "ref_id": "BIBREF6" }, { "start": 636, "end": 665, "text": "Michelson and Knoblock (2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Recently, Snyder and Barzilay (2007) and Liang et al. (2009) have explored record-text matching in domains with unstructured texts. Unlike a semistructured text sequence obtained by noisily concatenating fields from a single record, an unstructured sequence may contain fields from multiple records embedded in large amounts of extraneous text. Hence, the problems of record-text matching and word alignment are significantly harder in unstructured domains. Snyder and Barzilay (2007) achieve a state-of-the-art performance of 80% F1 on matching multiple NFL database records to sentences in the news summary of a football game. Their algorithm is trained using supervised machine learning and learns alignments at the level of sentences and DB records. In contrast, this paper presents a semi-supervised learning algorithm for learning token-level alignments between records and texts. Liang et al. (2009) describe a model that simultaneously performs record-text matching and word alignment in unstructured domains. Their model is trained in an unsupervised fashion using EM. It may be possible to further improve their model performance by incorporating prior knowledge in the form of expectation criteria.", "cite_spans": [ { "start": 10, "end": 36, "text": "Snyder and Barzilay (2007)", "ref_id": null }, { "start": 41, "end": 60, "text": "Liang et al. (2009)", "ref_id": "BIBREF14" }, { "start": 458, "end": 484, "text": "Snyder and Barzilay (2007)", "ref_id": null }, { "start": 887, "end": 906, "text": "Liang et al. (2009)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Traditionally, generative word alignment models have been trained on massive parallel corpora (Brown et al., 1993) . Recently, discriminative alignment methods trained using annotated alignments on small parallel corpora have achieved superior performance. Taskar et al. (2005) train a discriminative alignment model from annotated alignments using a large-margin method. Labeled alignments are also used by Blunsom and Cohn (2006) to train a CRF word alignment model. Our method is trained using a small number of easily specified expectation criteria thus avoiding tedious and expensive human labeling of alignments. An alternate method of learning alignment models is proposed by McCallum et al. (2005) in which the training set consists of sequence pairs classified as match or mismatch. Alignments are learned to identify the class of a given sequence pair. However, this method relies on carefully selected negative examples to produce high-accuracy alignments. Our method produces good alignments as we directly encode prior knowledge about alignments.", "cite_spans": [ { "start": 94, "end": 114, "text": "(Brown et al., 1993)", "ref_id": "BIBREF5" }, { "start": 257, "end": 277, "text": "Taskar et al. 
(2005)", "ref_id": "BIBREF31" }, { "start": 408, "end": 431, "text": "Blunsom and Cohn (2006)", "ref_id": "BIBREF3" }, { "start": 683, "end": 705, "text": "McCallum et al. (2005)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Information extraction is an important first step in data mining applications. Earlier approaches for learning reliable extractors have relied on manually annotated text corpora. This paper presents a novel approach for training extractors using alignments between texts and existing database records. Our approach achieves performance close to supervised training with very little supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "In the future, we plan to surpass supervised accuracy by applying our method to millions of parallel record-text pairs collected automatically using matching. We also want to explore the addition of Markov dependencies into our alignment model and other constraints such as monotonicity and one-to-one correspondence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Two words are orthographically similar if they have low edit distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.informatik.uni-trier.de/\u223cley/db/ 3 Expectation criteria used in our experiments are listed at http://www.cs.umass.edu/\u223ckedarb/dbie expts.txt.4 To obtain more accurate record-text pairs, matching methods can be tuned for high precision at the expense of recall.Alternatively, humans can cheaply provide match/mismatch labels on automatically matched pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our L-BFGS optimization procedure checks whether the approximate Hessian computed from cached gradient vectors is positive semi-definite. The optimization is restarted if this check fails.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In general, the editor field follows the title field while the author field precedes it.10 The token \"Vol.\" generally precedes the volume field in text. Similarly, tokens \"pp\" and \"pages\" occur before the pages field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the Center for Intelligent Information Retrieval and in part by The Central Intelligence Agency, the National Security Agency and National Science Foundation under NSF grant #IIS-0326249. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mining reference tables for automatic text segmentation", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Venkatesh", "middle": [], "last": "Ganti", "suffix": "" } ], "year": 2004, "venue": "KDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Agichtein and Venkatesh Ganti. 2004. Mining reference tables for automatic text segmentation. 
In KDD.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Snowball: Extracting relations from large plain-text collections", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2000, "venue": "ICDL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snow- ball: Extracting relations from large plain-text col- lections. In ICDL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning extractors from unlabeled text using relevant databases", "authors": [ { "first": "Kedar", "middle": [], "last": "Bellare", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2007, "venue": "IIWeb workshop at AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kedar Bellare and Andrew McCallum. 2007. Learn- ing extractors from unlabeled text using relevant databases. In IIWeb workshop at AAAI 2007.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Discriminative word alignment with conditional random fields", "authors": [ { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2006, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phil Blunsom and Trevor Cohn. 2006. Discriminative word alignment with conditional random fields. In ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Extracting patterns and relations from the world wide web", "authors": [ { "first": "", "middle": [], "last": "Sergey Brin", "suffix": "" } ], "year": 1998, "venue": "EDBT Workshop", "volume": "", "issue": "", "pages": "172--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In EDBT Workshop, pages 172-183.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "Peter", "middle": [], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "Della" ], "last": "Vincent", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert Mercer. 1993. The mathematics of statistical machine translation: parameter estima- tion. Computational Linguistics, 19:263-311.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bootstrapping information extraction from field books", "authors": [ { "first": "Sander", "middle": [], "last": "Canisius", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2007, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sander Canisius and Caroline Sporleder. 2007. Boot- strapping information extraction from field books. 
In EMNLP-CoNLL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Guiding semi-supervision with constraint-driven learning", "authors": [ { "first": "M", "middle": [], "last": "Chang", "suffix": "" }, { "first": "L", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2007, "venue": "ACL", "volume": "", "issue": "", "pages": "280--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Chang, L. Ratinov, and D. Roth. 2007. Guiding semi-supervision with constraint-driven learning. In ACL, pages 280-287.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A comparison of string distance metrics for name-matching tasks", "authors": [ { "first": "William", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Ravikumar", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Fienberg", "suffix": "" } ], "year": 2003, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Cohen, Pradeep Ravikumar, and Stephen Fien- berg. 2003. A comparison of string distance metrics for name-matching tasks. In IJCAI.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised named-entity extraction from the Web: An experimental study. Artificial Intelligence", "authors": [ { "first": "O", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "M", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "D", "middle": [], "last": "Downey", "suffix": "" }, { "first": "A.-M", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "T", "middle": [], "last": "Shaked", "suffix": "" }, { "first": "S", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "A", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2005, "venue": "", "volume": "165", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Etzioni, M. Cafarella, D. Downey, A.-M. Popescu, T. Shaked, S. Soderland, D. S. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the Web: An experimental study. Artificial Intelli- gence, 165.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Information extraction with HMM and shrinkage", "authors": [ { "first": "D", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 1999, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Freitag and A. McCallum. 1999. Information ex- traction with HMM and shrinkage. In AAAI.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised learning of field segmentation models for information extraction", "authors": [ { "first": "T", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Grenager, D. Klein, and C. D. Manning. 2005. Un- supervised learning of field segmentation models for information extraction. 
In ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Prototype-driven learning for sequence models", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In HLT-NAACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando C N", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando C N Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In ICML, page 282.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning semantic correspondences with less supervision", "authors": [ { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generalized expectation criteria for semi-supervised learning of conditional random fields", "authors": [ { "first": "Gideon", "middle": [ "S" ], "last": "Mann", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL'08", "volume": "", "issue": "", "pages": "870--878", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gideon S. Mann and Andrew McCallum. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proceed- ings of ACL'08, pages 870-878.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Integrating unstructured data into relational databases", "authors": [ { "first": "I", "middle": [ "R" ], "last": "Mansuri", "suffix": "" }, { "first": "S", "middle": [], "last": "Sarawagi", "suffix": "" } ], "year": 2006, "venue": "ICDE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. R. Mansuri and S. Sarawagi. 2006. Integrating un- structured data into relational databases. In ICDE.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A conditional random field for discriminatively-trained finite-state string edit distance", "authors": [ { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Kedar", "middle": [], "last": "Bellare", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "UAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew McCallum, Kedar Bellare, and Fernando Pereira. 2005. 
A conditional random field for discriminatively-trained finite-state string edit dis- tance. In UAI.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semantic annotation of unstructured and ungrammatical text", "authors": [ { "first": "Matthew", "middle": [], "last": "Michelson", "suffix": "" }, { "first": "Craig", "middle": [ "A" ], "last": "Knoblock", "suffix": "" } ], "year": 2005, "venue": "IJCAI", "volume": "", "issue": "", "pages": "1091--1098", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Michelson and Craig A. Knoblock. 2005. Se- mantic annotation of unstructured and ungrammati- cal text. In IJCAI, pages 1091-1098.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Creating relational data from unstructured and ungrammatical data sources", "authors": [ { "first": "Matthew", "middle": [], "last": "Michelson", "suffix": "" }, { "first": "Craig", "middle": [ "A" ], "last": "Knoblock", "suffix": "" } ], "year": 2008, "venue": "JAIR", "volume": "31", "issue": "", "pages": "543--590", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Michelson and Craig A. Knoblock. 2008. Creating relational data from unstructured and un- grammatical data sources. JAIR, 31:543-590.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignment models. Computational Linguistics, 29.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Accurate information extraction from research papers using conditional random fields", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuchun Peng and A. McCallum. 2004. Accurate infor- mation extraction from research papers using condi- tional random fields. In HLT-NAACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A tutorial on hidden markov models and selected applications in speech processing", "authors": [ { "first": "Lawrence", "middle": [ "R" ], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "IEEE", "volume": "17", "issue": "", "pages": "257--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence R. Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech processing. IEEE, 17:257-286.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Taming the unstructured: Creating structured content from partially labeled schematic text sequences", "authors": [ { "first": "Sarit", "middle": [], "last": "Sridhar Ramakrishnan", "suffix": "" }, { "first": "", "middle": [], "last": "Mukherjee", "suffix": "" } ], "year": 2004, "venue": "CoopIS/DOA/ODBASE", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sridhar Ramakrishnan and Sarit Mukherjee. 2004. Taming the unstructured: Creating structured con- tent from partially labeled schematic text sequences. 
In CoopIS/DOA/ODBASE, volume 2, page 909.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Exploiting dictionaries in named entity extraction: combining semi-markov extraction processes and data integration methods", "authors": [ { "first": "Sunita", "middle": [], "last": "Sarawagi", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunita Sarawagi and William W. Cohen. 2004. Ex- ploiting dictionaries in named entity extraction: combining semi-markov extraction processes and data integration methods. In KDD, page 89.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Semi-Markov conditional random fields for information extraction", "authors": [ { "first": "Sunita", "middle": [], "last": "Sarawagi", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2005, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunita Sarawagi and William W. Cohen. 2005. Semi- Markov conditional random fields for information extraction. In NIPS.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "An analysis of active learning strategies for sequence labeling tasks", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1070--1079", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In EMNLP, pages 1070-1079.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning hidden markov model structure for information extraction", "authors": [ { "first": "K", "middle": [], "last": "Seymore", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "R", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the AAAI Workshop on ML for IE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Seymore, A. McCallum, and R. Rosenfeld. 1999. Learning hidden markov model structure for infor- mation extraction. In Proceedings of the AAAI Workshop on ML for IE.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Database-text alignment via structured multi-label classification", "authors": [], "year": null, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Database-text alignment via structured multi-label classification. In IJCAI.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Reducing weight undertraining in structured discriminative learning", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Sindelar", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton, Michael Sindelar, and Andrew McCal- lum. 2006. Reducing weight undertraining in struc- tured discriminative learning. 
In HLT-NAACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A discriminative matching approach to word alignment", "authors": [ { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Lacoste-Julien", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2005, "venue": "HLT-EMNLP", "volume": "", "issue": "", "pages": "73--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word alignment. In HLT-EMNLP, pages 73-80.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "= 1 . . . K we compute the marginal probabilities of the labels p \u0398 (y t |x (i)", "num": null }, "TABREF1": { "type_str": "table", "html": null, "text": "An example of a matching record-text pair for restaurant addresses.", "num": null, "content": "" }, "TABREF3": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Example of a word alignment. Marked cells represent aligned tokens. Vertical text at the bottom gives the text tokens; horizontal text on the left gives the tokens from the DB record, with labels shown in braces.
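To make the grid layout described in this caption concrete, here is a small hypothetical sketch (not code or data from the paper): it stores an alignment as a set of (record position, text position) pairs and prints it with DB record tokens and their labels on the left and text token positions along the bottom. The record, text, and alignment values are invented for illustration.

```python
# Hypothetical illustration of a record-text word alignment as a grid.
# The record/text tokens and the alignment pairs below are invented examples,
# not data from the paper.
record = [("Knightly", "author"), ("Coordinated", "title"), ("2000", "date")]
text = ["E.", "Knightly.", "Coordinated", "network", "scheduling", "2000."]
alignment = {(0, 1), (1, 2), (2, 5)}  # (record index, text index) pairs

# One row per record token; '#' marks an aligned (record, text) pair.
for r, (tok, label) in enumerate(record):
    row = " ".join("#" if (r, t) in alignment else "." for t in range(len(text)))
    print(f"{tok} {{{label}}}".ljust(22) + row)

# Column key: positions of the text tokens.
print("".ljust(22) + " ".join(str(t) for t in range(len(text))))
print("text tokens: " + ", ".join(f"{t}={w}" for t, w in enumerate(text)))
```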
" }, "TABREF4": { "type_str": "table", "html": null, "text": "Chengzhi Li] author [Edward W. Knightly] author [Coordinated Network Scheduling: A Framework for End-to-End Services.] title [69-]pages [2000] date [ICNP]venue [C. Li] author [and] O [E. Knightly.] author [Coordinated network scheduling: A framework for end-to-end services.] title [In Proceedings of IEEE ICNP]venue ['00,] date [Osaka, Japan,]venue [November 2000.] date", "num": null, "content": "" }, "TABREF5": { "type_str": "table", "html": null, "text": "Example of matching record-text pair found on the web. Alignment features in the AlignCRF model", "num": null, "content": "
…expression features within a window of 10, and (8) token identity features within a window of 3.
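As a rough sketch of how window-based features like these could be computed, the snippet below fires regular-expression features within a +/-10 window and token-identity features within a +/-3 window around a token position. The regex patterns and the helper name window_features are assumptions for illustration, not the authors' feature set.

```python
# Hypothetical sketch of windowed token features; not the authors' feature code.
import re

# Assumed regex patterns; the actual pattern set used by the AlignCRF model is
# not given here.
REGEX_PATTERNS = {
    "IS_YEAR": re.compile(r"^(19|20)\d\d$"),
    "IS_PAGE_RANGE": re.compile(r"^\d+-\d*$"),
    "IS_INITIAL": re.compile(r"^[A-Z]\.$"),
}

def window_features(tokens, t, regex_window=10, identity_window=3):
    """Return sparse binary features for the token at index t."""
    feats = set()
    # Regular-expression features fired for tokens within +/- regex_window of t.
    for offset in range(-regex_window, regex_window + 1):
        i = t + offset
        if 0 <= i < len(tokens):
            for name, pat in REGEX_PATTERNS.items():
                if pat.match(tokens[i]):
                    feats.add(f"{name}@{offset}")
    # Token-identity features fired within the smaller +/- identity_window.
    for offset in range(-identity_window, identity_window + 1):
        i = t + offset
        if 0 <= i < len(tokens):
            feats.add(f"WORD={tokens[i].lower()}@{offset}")
    return feats

# Example: features for the token "2000" in a toy citation fragment.
print(sorted(window_features("C . Li and E . Knightly . 2000".split(), 8)))
```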
" }, "TABREF7": { "type_str": "table", "html": null, "text": "Sample of expectation criteria used by our model.", "num": null, "content": "" }, "TABREF8": { "type_str": "table", "html": null, "text": "shows the results of various alignment algorithms applied to the record-text data set. Alignment methods use the matching record to perform labeling of a test citation text. The Align-CRF model outperforms the best generative align-", "num": null, "content": "
              HMM     Model4   AlignCRF
accuracy      78.5%   79.8%    92.7%
author        92.7    94.9     99.0
title         93.3    95.1     97.3
date          69.5    66.3     81.9
venue         73.3    73.1     91.2
volume        50.0    49.2     78.5
number        53.5    66.3     68.0
pages         38.2    44.1     88.2
editor        22.8    21.5     78.1
publisher     29.7    31.0     72.6
series        77.4    77.3     74.6
O             49.6    58.8     85.7
" }, "TABREF9": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Token-labeling accuracy and per-label F1 for different alignment methods. These methods all use matching DB records at test time. Boldfaced numbers indicate the best-performing model. HMM, Model4: generative alignment models from GIZA++; AlignCRF: alignment model from this paper.
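For reference, here is a minimal sketch of how the token-labeling accuracy and per-label F1 reported in this table could be computed from gold and predicted label sequences. This is a generic implementation of the metrics with invented toy data, not the authors' evaluation code.

```python
# Hypothetical sketch of token accuracy and per-label F1; toy data is invented.
from collections import Counter

def token_metrics(gold_seqs, pred_seqs):
    tp, fp, fn = Counter(), Counter(), Counter()
    correct = total = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        for g, p in zip(gold, pred):
            total += 1
            if g == p:
                correct += 1
                tp[g] += 1
            else:
                fp[p] += 1
                fn[g] += 1
    accuracy = correct / total if total else 0.0
    f1 = {}
    for label in set(tp) | set(fp) | set(fn):
        prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1[label] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return accuracy, f1

# Toy example with two short "citations".
gold = [["author", "author", "title", "date"], ["title", "venue", "O"]]
pred = [["author", "O", "title", "date"], ["title", "venue", "venue"]]
acc, f1 = token_metrics(gold, pred)
print(f"accuracy={acc:.3f}", {k: round(v, 2) for k, v in f1.items()})
```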
" }, "TABREF11": { "type_str": "table", "html": null, "text": "Token-labeling accuracy and per-label F1 for different extraction methods. Except M+R-CRF \u2020 , all other approaches do not use any records at test time. Bold-faced numbers indicate the best performing model. DB-CRF: CRF trained on DB fields. M+R-CRF, M-CRF: CRFs trained from heuristic alignments. ExtrCRF: Extraction model presented in this paper. GS-CRF: CRF trained on human annotated citation texts.", "num": null, "content": "
…alignment criteria during training. Hence, alignment information is crucial for obtaining high accuracy.
" } } } }