{ "paper_id": "D09-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:39:27.936725Z" }, "title": "Graphical Models over Multiple Strings *", "authors": [ { "first": "Markus", "middle": [], "last": "Dreyer", "suffix": "", "affiliation": {}, "email": "markus@cs.jhu.edu" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We study graphical modeling in the case of stringvalued random variables. Whereas a weighted finite-state transducer can model the probabilistic relationship between two strings, we are interested in building up joint models of three or more strings. This is needed for inflectional paradigms in morphology, cognate modeling or language reconstruction, and multiple-string alignment. We propose a Markov Random Field in which each factor (potential function) is a weighted finite-state machine, typically a transducer that evaluates the relationship between just two of the strings. The full joint distribution is then a product of these factors. Though decoding is actually undecidable in general, we can still do efficient joint inference using approximate belief propagation; the necessary computations and messages are all finitestate. We demonstrate the methods by jointly predicting morphological forms.", "pdf_parse": { "paper_id": "D09-1011", "_pdf_hash": "", "abstract": [ { "text": "We study graphical modeling in the case of stringvalued random variables. Whereas a weighted finite-state transducer can model the probabilistic relationship between two strings, we are interested in building up joint models of three or more strings. This is needed for inflectional paradigms in morphology, cognate modeling or language reconstruction, and multiple-string alignment. We propose a Markov Random Field in which each factor (potential function) is a weighted finite-state machine, typically a transducer that evaluates the relationship between just two of the strings. The full joint distribution is then a product of these factors. Though decoding is actually undecidable in general, we can still do efficient joint inference using approximate belief propagation; the necessary computations and messages are all finitestate. We demonstrate the methods by jointly predicting morphological forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper considers what happens if a graphical model's variables can range over strings of unbounded length, rather than over the typical finite domains such as booleans, words, or tags. Variables that are connected in the graphical model are related by some weighted finite-state transduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "1" }, { "text": "Graphical models have become popular in machine learning as a principled way to work with collections of interrelated random variables. Most often they are used as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "1" }, { "text": "1. Build: Manually specify the n variables of interest; their domains; and the possible direct interactions among them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "1" }, { "text": "Train: Train this model's parameters \u03b8 to obtain a specific joint probability distribution p(V 1 , . . . 
, V n ) over the n variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "3. Infer: Use this joint distribution to predict the values of various unobserved variables from observed ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Note that 1. requires intuitions about the domain; 2. requires some choice of training procedure; and 3. requires a choice of exact or approximate inference algorithm. Our graphical models over strings are natural objects to investigate. We motivate them with some natural applications in computational linguistics (section 2). We then give our formalism: a Markov Random Field whose potential functions are rational weighted languages and relations (section 3). Next, we point out that inference is in general undecidable, and explain how to do approximate inference using message-passing algorithms such as belief propagation (section 4). The messages are represented as weighted finite-state machines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Finally, we report on some initial experiments using these methods (section 7). We use incomplete data to train a joint model of morphological paradigms, then use the trained model to complete the data by predicting unseen forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The problem of mapping between different forms and representations of strings is ubiquitous in natural language processing and computational linguistics. This is typically done between string pairs, where a pronunciation is mapped to its spelling, an inflected form to its lemma, a spelling variant to its canonical spelling, or a name is transliterated from one alphabet into another. However, many problems involve more than just two strings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "\u2022 in morphology, the inflected forms of a (possibly irregular) verb are naturally considered together as a whole morphological paradigm in which different forms reinforce one another; \u2022 mapping an English word to its foreign transliteration may be easier when one considers the orthographic and phonological forms of both words; \u2022 similar cognates in multiple languages are naturally described together, in orthographic or phonological representations, or both;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "\u2022 modern and ancestral word forms form a phylogenetic tree in historical linguistics;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "\u2022 in bioinformatics and in system combination, multiple sequences need to be aligned in order to identify regions of similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "We propose a unified model for multiple strings that is suitable for all the problems mentioned above. It is robust and configurable and can make use of task-specific overlapping features. 
It learns from observed and unobserved, or latent, information, making it useful in supervised, semisupervised, and unsupervised settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "A Markov Random Field (MRF) is a joint model of a set of random variables,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Modeling Approach 3.1 Variables", "sec_num": "3" }, { "text": "V = {V 1 , . . . , V n }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Modeling Approach 3.1 Variables", "sec_num": "3" }, { "text": "We assume that all variables are string-valued, i.e. the value of V i may be any string \u2208 \u03a3 * i , where \u03a3 i is some finite alphabet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Modeling Approach 3.1 Variables", "sec_num": "3" }, { "text": "We may use meaningful names for the integers i, such as V 2SA for the 2nd singular past form of a verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Modeling Approach 3.1 Variables", "sec_num": "3" }, { "text": "The assumption that all variables are stringvalued is not crucial; it merely simplifies our presentation. It is, however, sufficient for many practical purposes, since most other discrete objects can be easily encoded as strings. For example, if V 1 is a part of speech tag, it may be encoded as a length-1 string over the finite alphabet", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Modeling Approach 3.1 Variables", "sec_num": "3" }, { "text": "\u03a3 1 def = {Noun, Verb, . . .}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Modeling Approach 3.1 Variables", "sec_num": "3" }, { "text": "A Markov Random Field defines a probability for each assignment A of values to the variables in V:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(A) def = 1 Z m j=1 F j (A)", "eq_num": "(1)" } ], "section": "Factors", "sec_num": "3.2" }, { "text": "This distribution over assignments is specified by the collection of factors F j : A \u2192 R \u22650 . Each factor (or potential function) is a function that depends on only a subset of A. Fig. 1 displays an undirected factor graph, in which each factor is connected to the variables that it depends on. F 1 , F 3 , F 5 in this example are unary factors because each one scores the value of a single variable, while F 2 , F 4 , F 6 are binary factors.", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 186, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Factors", "sec_num": "3.2" }, { "text": "F2 F6 F5 F3 F1 F4 Vinf V2SA V3SE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors", "sec_num": "3.2" }, { "text": "Figure 1: Example of a factor graph. Black boxes represent factors, circles represent variables (infinitive, 2nd past, and 3rd present-tense forms of the same verb; different samples from the MRF correspond to different verbs). 
Binary factors evaluate how well one string can be transduced into another, summing over all transducer paths (i.e., alignments, which are not observed in training).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors", "sec_num": "3.2" }, { "text": "In our setting, we will assume that each unary factor is specified by a weighted finite-state automaton (WFSA) whose weights fall in the semiring (R \u22650 , +, \u00d7). Thus the score F 3 (. . . , V 2SA = x, . . .) is the total weight of all paths in the F 3 's WFSA that accept the string x \u2208 \u03a3 *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors", "sec_num": "3.2" }, { "text": "2SA . Each path's weight is the product of its component arcs' weights, which are non-negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors", "sec_num": "3.2" }, { "text": "Similarly, we assume that each binary factor is specified by a weighted finite-state transducer (WFST). Such a model is essentially a generalization of stochastic edit distance (Ristad and Yianilos, 1996) in which the edit probabilities can be made sensitive to a finite summary of context.", "cite_spans": [ { "start": 177, "end": 204, "text": "(Ristad and Yianilos, 1996)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Factors", "sec_num": "3.2" }, { "text": "Formally, a WFST is an automaton that resembles a weighted FSA, but it nondeterministically reads two strings x, y in parallel from left to right. The score of (x, y) is given by the total weight of all accepting paths in the WFST that map x to y. For example, different paths may consider various monotonic alignments of x with y, and we sum over these mutually exclusive possibilities. 1 A factor might depend on k > 2 variables. This requires a k-tape weighted finite-state machine (WFSM), an obvious generalization where each path reads k strings in some alignment. 2 To ensure that Z is finite in equation 1, we can require each factor to be a \"proper\" WFSM, i.e., its accepting paths have finite total weight (even if the WFSM is cyclic, with infinitely many paths).", "cite_spans": [ { "start": 388, "end": 389, "text": "1", "ref_id": null }, { "start": 570, "end": 571, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Factors", "sec_num": "3.2" }, { "text": "Our probability model has trainable parameters: a vector of feature weights \u03b8 \u2208 R. Each arc in each WFSM has a real-valued weight that depends on \u03b8. Thus, tuning \u03b8 during training will change the arc weights, hence the path weights, the factor functions, and the whole probability distribution p(A).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameters", "sec_num": "3.3" }, { "text": "Designing the probability model includes specifying the topology and weights of each WFSM. Eisner (2002) explains how to specify and train such parameterized WFSMs. Typically, the weight of an arc is a simple sum like \u03b8 12 + \u03b8 55 + \u03b8 72 , where \u03b8 12 is included on all arcs that share feature 12. 
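As a toy illustration of this parameterization (a sketch, not the authors' implementation): the snippet below computes a single arc weight from a feature-weight vector θ. The feature indices and values are invented, and exponentiating the feature sum is only one common way, assumed here, to keep arc weights non-negative as the (R≥0, +, ×) semiring requires.

```python
import math

def arc_weight(theta, active_features):
    # Sum the weights of the features that fire on this arc, then exponentiate
    # so the resulting arc weight is non-negative (illustrative choice).
    return math.exp(sum(theta[f] for f in active_features))

theta = {12: 0.3, 55: -1.2, 72: 0.8}      # hypothetical feature weights
print(arc_weight(theta, [12, 55, 72]))    # exp(0.3 - 1.2 + 0.8) ~= 0.905
```

Tuning any entry of θ changes every arc that carries that feature, and hence every path and factor built from those arcs.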
However, more interesting parameterizations arise if the WFSM is constructed by operations such as transducer composition, or from a weighted regular expression.", "cite_spans": [ { "start": 91, "end": 104, "text": "Eisner (2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Parameters", "sec_num": "3.3" }, { "text": "Factored finite-state string models (1) were originally suggested by the second author, in Kempe et al. (2004) . That paper showed that even in the unweighted case, such models could be used to encode relations that could not be recognized by any k-tape FSM. We offer a more linguistic example as a small puzzle. We invite the reader to specify a factored model (consisting of three FSTs as in Fig. 1 ) that assigns positive probability to just those triples of character strings (x, y, z) that have the form (red ball, ball red, red), (white house, house white, white), etc. This uses the auxiliary variable Z to help encode a relation between X and Y that swaps words of unbounded length. By contrast, no FSM can accomplish such unbounded swapping, even with 3 or more tapes. Such extra power might be linguistically useful. Troublingly, however, Kempe et al. (2004) also observed that the framework is powerful enough to express computationally undecidable problems. 3 This implies that to work with arbitrary models, we will need approximate methods. 4 Fortunately, the graphical models community has already de-3 Consider a simple model with two variables and two bi-", "cite_spans": [ { "start": 91, "end": 110, "text": "Kempe et al. (2004)", "ref_id": "BIBREF7" }, { "start": 849, "end": 868, "text": "Kempe et al. (2004)", "ref_id": "BIBREF7" }, { "start": 970, "end": 971, "text": "3", "ref_id": null }, { "start": 1055, "end": 1056, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 394, "end": 400, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Power of the formalism", "sec_num": "3.4" }, { "text": "nary factors: p(V1, V2) def = 1 Z \u2022 F1(V1, V2) \u2022 F2(V1, V2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Power of the formalism", "sec_num": "3.4" }, { "text": "). Suppose F1 is 1 or 0 according to whether its arguments are equal. Under this model, p( ) < 1 iff there exists a string x = that can be transduced to itself by the unweighted transducer F2. This question can be used to encode any instance of Post's Correspondence Problem, so is undecidable. 4 Notice that the simplest approximation to cure undecidability would be to impose an arbitrary maximum on string length, so that the random variables have a finite domain, just as in most discrete graphical models. veloped many such methods, to deal with the computational intractability (if not undecidability) of exact inference.", "cite_spans": [ { "start": 295, "end": 296, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Power of the formalism", "sec_num": "3.4" }, { "text": "V F U \u00b5 V \u2192F \u00b5 F \u2192U", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Power of the formalism", "sec_num": "3.4" }, { "text": "In this paper, we focus on how belief propagation (BP)-a simple well-known method for approximate inference in MRFs (Bishop, 2006) -can be used in our setting. BP in its general form has not yet been widely used in the NLP community. 5 However, it is just a generalization to arbitrary factor graphs of the familiar forward-backward algorithm (which operates only on chain-structured factor graphs). 
The algorithm becomes approximate (and may not even converge) when the factor graphs have cycles. (In that case it is more properly called \"loopy belief propagation.\")", "cite_spans": [ { "start": 116, "end": 130, "text": "(Bishop, 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Approximate Inference", "sec_num": "4" }, { "text": "We first sketch how BP works in general. Each variable V in the graphical model maintains a belief about its value, in the form of a marginal distributionp V over the possible values of V . The final beliefs are the output of the algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "Beliefs arise from messages that are sent between the variables and factors along the edges of the factor graph. Variable V sends factor F a message", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "\u00b5 V \u2192F , which is an (unnormalized) probabil- ity distribution over V 's values v, computed by \u00b5 V \u2192F (v) := F \u2208N (V ),F =F \u00b5 F \u2192V (v) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "where N is the set of neighbors of V in the graphical model. This message represents a consensus of V 's other neighboring factors concerning V 's value. It is how V tells F what its beliefp V would be if F were absent. Informally, it communicates to F : Here is what my value would be if it were up to my other neighboring factors F to determine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "The factor F can then collect such incoming messages from neighboring variables and send its own message on to another neighbor U . Such a message \u00b5 F \u2192U suggests good values for U , in the form of an (unnormalized) distribution over U 's values u, computed by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "\u00b5 F \u2192U (u) := A s.t.A[U ]=u F (A) U \u2208N (F ),U =U \u00b5 U \u2192F (A[U ]) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "where A is an assignment to all variables, and A[U ] is the value of variable U in that assignment. This message represents F 's prediction of U 's value based on its other neighboring variables U . Informally, via this message, F tells U : Here is what I would like your value to be, based on the messages that my other neighboring variables have sent me about their values, and how I would prefer you to relate to them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "Thus, each edge of the factor graph maintains two messages \u00b5 V \u2192F , \u00b5 F \u2192V . All messages are updated repeatedly, in some order, using the two equations above, until some stopping criterion is reached. 
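To make the two update equations concrete, here is a toy sketch over small finite domains (for instance, a pruned list of candidate strings standing in for Σ*, in the spirit of footnote 4); in the paper's actual setting the same sums and products are carried out with weighted finite-state operations. The variable names, candidate strings, and factor scores below are all hypothetical.

```python
from itertools import product
from math import prod

domains = {"V_inf": ["brechen", "brichen"], "V_2SA": ["brachst", "brachtest"]}

def F1(v_inf):                       # unary factor: scores one string
    return 2.0 if v_inf == "brechen" else 0.5

def F2(v_inf, v_2sa):                # binary factor: scores a pair of strings
    return 3.0 if (v_inf, v_2sa) == ("brechen", "brachst") else 1.0

factors = {"F1": (("V_inf",), F1), "F2": (("V_inf", "V_2SA"), F2)}

# All messages start out as the uniform (all-ones) message.
msg_v2f = {(V, F): {x: 1.0 for x in domains[V]}
           for F, (vs, _) in factors.items() for V in vs}
msg_f2v = {(F, V): {x: 1.0 for x in domains[V]}
           for F, (vs, _) in factors.items() for V in vs}

def update_var_to_factor(V, F):
    """Equation (2): multiply the messages from V's other neighboring factors."""
    others = [G for G, (vs, _) in factors.items() if V in vs and G != F]
    msg_v2f[V, F] = {x: prod(msg_f2v[G, V][x] for G in others)
                     for x in domains[V]}

def update_factor_to_var(F, U):
    """Equation (3): sum over assignments to F's variables, grouped by U's value."""
    vs, score = factors[F]
    out = {u: 0.0 for u in domains[U]}
    for values in product(*(domains[V] for V in vs)):
        a = dict(zip(vs, values))
        w = score(*values)
        for V in vs:
            if V != U:
                w *= msg_v2f[V, F][a[V]]
        out[a[U]] += w
    msg_f2v[F, U] = out

update_factor_to_var("F1", "V_inf")   # unary evidence about the infinitive
update_var_to_factor("V_inf", "F2")   # relay it toward the binary factor
update_factor_to_var("F2", "V_2SA")   # predict the 2nd-singular-past form
print(msg_f2v["F2", "V_2SA"])         # {'brachst': 6.5, 'brachtest': 2.5}
```

With string-valued variables the domains are infinite, so the messages themselves become WFSAs and these pointwise products and sums become the finite-state operations described in section 4.2.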
6 The beliefs are then computed:", "cite_spans": [ { "start": 202, "end": 203, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p V (v) def = F \u2208N (V ) \u00b5 F \u2192V (v)", "eq_num": "(4)" } ], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "If variable V is observed, then the right-hand sides of equations (2) and (4) are modified to tell V that it must have the observed value v. This is done by multiplying in an extra message \u00b5 obs\u2192V that puts probability 1 on v 7 and 0 on other values. That affects other messages and beliefs. The final belief at each variable estimates its posterior marginal under the MRF (1), given all observations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Belief propagation", "sec_num": "4.1" }, { "text": "Both \u00b5 V \u2192F and \u00b5 F \u2192V are unnormalized distributions over the possible values of V -in our case, strings. A distribution over strings is naturally represented by a WFSA. Thus, belief propagation translates to our setting as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finite-state messages in BP", "sec_num": "4.2" }, { "text": "\u2022 Each message is a WFSA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finite-state messages in BP", "sec_num": "4.2" }, { "text": "\u2022 Messages are typically initialized to a one-state WFSA that accepts all strings in \u03a3 * , each with weight 1. 8 \u2022 Taking a pointwise product of messages to V in equation 2corresponds to WFSA intersection. \u2022 If F in equation 3is binary, 9 then there is only one U . Then the outgoing message \u00b5 F \u2192U , a WFSA, is computed as domain(F \u2022 \u00b5 U \u2192F ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finite-state messages in BP", "sec_num": "4.2" }, { "text": "Here \u2022 composes the factor WFST with the incoming message WFSA, yielding a WFST that gives a joint distribution over (U, U ). The domain operator projects this WFST onto the U side to obtain a WFSA, which corresponds to marginalizing to obtain a distribution over U . \u2022 In general, F is a k-tape WFSM. Equation 3\"composes\" k \u2212 1 of its tapes with k \u2212 1 incoming messages \u00b5 U \u2192F , to construct a joint distribution over the k variables in N (F ), then projects onto the k th tape to marginalize over the k \u2212 1 U variables and get a distribution over U . All this can be accomplished by the WFSM generalized composition operator (Kempe et al., 2004) .", "cite_spans": [ { "start": 627, "end": 647, "text": "(Kempe et al., 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Finite-state messages in BP", "sec_num": "4.2" }, { "text": "After projecting, it is desirable to determinize the WFSA. Otherwise, the summation in (3) is only implicit-the summands remain as distinct paths in the WFSA 10 -and thus the WFSAs would get larger and larger as BP proceeds. Unfortunately, determinizing a WFSA still does not guarantee a small result. In fact it can lead to exponential blowup, or even infinite blowup. 11 Thus, in practice we recommend against determinizing the messages, which may be inherently complex. 
To shrink a message, it is safer to approximate it with a small deterministic WFSA, as discussed in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finite-state messages in BP", "sec_num": "4.2" }, { "text": "In our domain, it is possible for the finite-state messages to grow unboundedly in size as they flow around a cycle. After all, our messages are not just multinomial distributions over a fixed finite set. They are distributions over the infinite set \u03a3 * . A WFSA represents this in finite space, but more complex distributions require bigger WFSAs, with more distinct states and arc weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approximation of messages", "sec_num": "4.3" }, { "text": "Facing the same problem for distributions over the infinite set R, Sudderth et al. (2002) simplified each message \u00b5 V \u2192F , approximating a complex Gaussian mixture by using fewer components.", "cite_spans": [ { "start": 67, "end": 89, "text": "Sudderth et al. (2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Approximation of messages", "sec_num": "4.3" }, { "text": "We could act similarly, variationally approximating a large WFSA P with a smaller one Q. Choose a family of message approximations (such as bigram models) by specifying the topology for a (small) deterministic WFSA Q. Then choose Q's edge weights to minimize the KL divergence KL(P Q). This can be done in closed form. 12 Another possible procedure-used in the experiments of this paper-approximates \u00b5 V \u2192F by pruning it back to a finite set of most plausible strings. 13 Equation (2) requests an intersection of several WFSAs, e.g.,", "cite_spans": [ { "start": 319, "end": 321, "text": "12", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Approximation of messages", "sec_num": "4.3" }, { "text": "\u00b5 F 1 \u2192V \u2229 \u00b5 F 2 \u2192V \u2229 \u2022 \u2022 \u2022 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approximation of messages", "sec_num": "4.3" }, { "text": "List all strings that appear on any of the 1000best paths in any of these WFSAs, removing duplicates. LetQ be a uniform distribution over this combined list of plausible strings, represented as a determinized, minimized, acyclic WFSA. Now approximate the intersection of equation 2as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approximation of messages", "sec_num": "4.3" }, { "text": "((Q \u2229 \u00b5 F 1 \u2192V ) \u2229 \u00b5 F 2 \u2192V ) \u2229 \u2022 \u2022 \u2022 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approximation of messages", "sec_num": "4.3" }, { "text": "This is efficient to compute and has the same topology asQ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approximation of messages", "sec_num": "4.3" }, { "text": "Any standard training method for MRFs will transfer naturally to our setting. In all cases we draw on Eisner (2002) , who showed how to train the parameters \u03b8 of a single WFST, F , to (locally) maximize the joint or conditional probability of fully or partially observed training data. This involves computing the gradient of that likelihood function with respect to \u03b8. 14 12 See Li et al. (2009, footnote 9) for a sketch of the construction, which finds locally normalized edge weights. 
Or if Q is large but parameterized by some compact parameter vector \u03c6, so we are only allowed to control its edge weights via \u03c6, then Li and Eisner (2009, section 6 ) explain how to minimize KL(P Q) by gradient descent. In both cases Q must be deterministic.", "cite_spans": [ { "start": 102, "end": 115, "text": "Eisner (2002)", "ref_id": "BIBREF5" }, { "start": 622, "end": 652, "text": "Li and Eisner (2009, section 6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training the Model Parameters", "sec_num": "5" }, { "text": "We remark that if a factor F were specified by a synchronous grammar rather than a WFSM, then its outgoing messages would be weighted context-free languages. Exact intersection of these is undecidable, but they too can be approximated variationally by WFSAs, with the same methods. 13 We are also considering other ways of adaptively choosing the topology of WFSA approximations at runtime, particularly in conjunction with expectation propagation.", "cite_spans": [ { "start": 282, "end": 284, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training the Model Parameters", "sec_num": "5" }, { "text": "14 The likelihood is usually non-convex; even when the two strings are observed (supervised training), their accepting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Model Parameters", "sec_num": "5" }, { "text": "We must generalize this to train a product of WFSMs. Typically, training data for an MRF (1) consists of some fully or partially observed IID samples of the joint distribution p(V 1 , . . . V n ). It is well-known how to tune an MRF's parameters \u03b8 by stochastic gradient descent to locally maximize the probability of this training set, even though both the probability and its gradient are in general intractable to compute in an MRF. The gradient is a sum of quantities, one for each factor F j . While the summand for F j cannot be computed exactly, it can be estimated using the BP messages to F j . Roughly speaking, the gradient for F j is computed much as in supervised training (see above), but treating any message \u00b5 V i \u2192F j as an uncertain observation of V i -a form of noisy supervision. 15 Our concerns about training are the same as for any MRF. First of all, BP is approximate. Kulesza and Pereira (2008) warn that its estimates of the gradient can be misleading. Second, semisupervised training (which we will attempt below) is always difficult and prone to local optima. As in EM, a small number of supervised examples for some variable may be drowned out by many noisily reconstructed examples.", "cite_spans": [ { "start": 800, "end": 802, "text": "15", "ref_id": null }, { "start": 893, "end": 919, "text": "Kulesza and Pereira (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Training the Model Parameters", "sec_num": "5" }, { "text": "Faster and potentially more stable approaches include the piecewise training methods of Sutton and McCallum (2008) , which train the factors independently or in small groups. In the semisupervised case, each factor can be trained on only the supervised forms available for it. It might be useful to reweight the trained factors (cf. Smith et al. (2005) ), or train the factors consecutively (cf. 
Fahlman and Lebiere (1990)), in a way that minimizes the loss of BP decoding on held-out data.", "cite_spans": [ { "start": 88, "end": 114, "text": "Sutton and McCallum (2008)", "ref_id": "BIBREF20" }, { "start": 333, "end": 352, "text": "Smith et al. (2005)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Training the Model Parameters", "sec_num": "5" }, { "text": "In principle, one could use a 100-tape WFSM to jointly model the 100 distinct forms of a typical Polish verb. In other words, the WFSM would describe the distribution of a random variable V = V 1 , . . . , V 100 , where each V i is a string. One would train the parameters of the WFSM on a sample of V , each sample being a fully or partially observed paradigm for some Polish verb. The resulting distribution could be used to infer missing forms for these or other verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-tape WFSMs", "sec_num": "6.1" }, { "text": "path through the WFST may be ambiguous and unobserved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-tape WFSMs", "sec_num": "6.1" }, { "text": "As a simple example, either a morphological generator or a morphological analyzer might need the probability that krzycza\u0142oby is the neuter thirdperson singular conditional imperfective of krzycze\u0107, despite never having observed it in training. The model determines this probability based on other observed and hypothesized forms of krzycze\u0107, using its knowledge of how neuter thirdperson singular conditional imperfectives are related to these other forms in other verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-tape WFSMs", "sec_num": "6.1" }, { "text": "Unfortunately, such a 100-tape WFSM would be huge, with an astronomical number of arcs (each representing a possible 100-way edit operation). Our approach is to factor the problem into a number of (e.g.) pairwise relationships among the verb forms. Using a factored distribution has several benefits over the k-tape WFSM: (1) a smaller representation in memory, (2) a small number of parameters to learn, (3) efficient approximate computation that takes advantage of the factored structure, (4) the ability to reuse WFSAs and WF-STs previously developed for smaller problems, (5) additional modeling power.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-tape WFSMs", "sec_num": "6.1" }, { "text": "Some previous researchers have used factored joint models of several strings. To our knowledge, they have all chosen acyclic, directed graphical models. The acyclicity meant that exact inference was at least possible for them, if not necessarily efficient. The factors in these past models have been WFSTs (though typically simpler than the ones we will use).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simpler graphical models on strings", "sec_num": "6.2" }, { "text": "Many papers have used cascades of probabilistic finite-state transducers. Such a cascade may be regarded as a directed graphical model with a linear-chain structure. Pereira and Riley (1997) built a speech recognizer in this way, relating acoustic to phonetic to lexical strings. 
Similarly, Knight and Graehl (1997) presented a generative cascade using 4 variables and 5 factors:", "cite_spans": [ { "start": 291, "end": 315, "text": "Knight and Graehl (1997)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Simpler graphical models on strings", "sec_num": "6.2" }, { "text": "p(w, e, j, k, o) def = p(w)\u2022p(e | w)\u2022p(j | e)\u2022p(k | j) \u2022p(o | k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simpler graphical models on strings", "sec_num": "6.2" }, { "text": "where e is an English word sequence, w its pronunciation, j a Japanese version of the pronunciation, k a katakana rendering of the Japanese pronunciation, and o an OCR-corrupted version of the katakana. Knight and Graehl used finite-state operations to perform inference at test time, observing o and recovering the most likely w, while marginalizing out e, j, and k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simpler graphical models on strings", "sec_num": "6.2" }, { "text": "Bouchard-C\u00f4t\u00e9 et al. 2009reconstructed an-cient word forms given modern equivalents. They used a directed graphical model, whose tree structure reflected the evolutionary development of the modern languages, and which included latent variables for historical intermediate forms that were never observed in training data. They used Gibbs sampling rather than an exact solution (possible on trees) or a variational approximation (like our BP). Our work seeks to be general in terms of the graphical model structures used, as well as efficient through the use of BP with approximate messages. We also seek to avoid local normalization, using a globally normalized model. 16", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simpler graphical models on strings", "sec_num": "6.2" }, { "text": "We distinguish our work from \"dynamic\" graphical models such as Dynamic Bayesian Networks and Conditional Random Fields, where the string brechen would be represented by creating 7 lettervalued variables. Those methods can represent strings (or paths) of any length-but the length for each training or test string must be specified in advance, not inferred. Furthermore, it is awkward and costly to model unknown alignments, since the variables are position-specific, and any position in brechen could in principle align with any position in brichst. WFSTs are a much more natural and flexible model of string pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unbounded objects in graphical models", "sec_num": "6.3" }, { "text": "We also distinguish our work from current nonparametric Bayesian models, which sometimes generate unbounded strings, trees, or grammars. If they generate two unbounded objects, they model their relationship by a single synchronous generation process (akin to Section 6.1), rather than by a globally normalized product of overlapping factors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unbounded objects in graphical models", "sec_num": "6.3" }, { "text": "To study our approach, we conducted initial experiments that reconstruct missing word forms in morphological paradigms. In inflectional morphology, each uninflected verb form (lemma) is associated with a vector of forms that are inflected for tense, person, number, etc. Some inflected forms may be observed frequently in natural text, others rarely. 
Two variables that are usually predictable from each other may or may not keep this relationship in the case of an irregular verb. Our task is to reconstruct (generate) specific unobserved morphological forms in a paradigm by learning from observed ones. This is a particularly interesting semisupervised scenario, because different subsets of the variables are observed on different examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "7" }, { "text": "We used orthographic rather than phonological forms. We extracted morphological paradigms for all 9393 German verbs in the CELEX morphological database. Each paradigm lists 5 present-tense and 4 past-tense indicative forms, as well as the verb's lemma, for a total of 10 string-valued variables. 17 In each paradigm, we removed, or hid, verb forms that occur only rarely in natural text, i.e, verb forms with a small frequency figure provided by CELEX. 18 All paradigms other than sein ('to be') were now incompletely observed. Table 1 gives some statistics.", "cite_spans": [ { "start": 296, "end": 298, "text": "17", "ref_id": null } ], "ref_spans": [ { "start": 528, "end": 535, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental data", "sec_num": "7.1" }, { "text": "Our current MRF uses only binary factors. Each factor is a WFST that is trained to relate 2 of the 10 variables (morphological forms). Each WFST can score an aligned pair using a log-linear model that counts features in a sliding 3-character window. To score an unaligned pair, it sums over all possible alignments. Specifically, our WFST topology and parameterization follow the state-of-theart approach to supervised morphology in Dreyer et al. (2008) , although we dropped some of their features to speed up these early experiments. 19 We 17 Some pairs of forms are always identical in German, hence are treated as a single form by CELEX. We likewise use a single variable-these are the \"1,3\" variables in Fig. 3 .", "cite_spans": [ { "start": 433, "end": 453, "text": "Dreyer et al. (2008)", "ref_id": "BIBREF4" }, { "start": 536, "end": 538, "text": "19", "ref_id": null } ], "ref_spans": [ { "start": 709, "end": 715, "text": "Fig. 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Model factors and parameters", "sec_num": "7.2" }, { "text": "Occasionally a form is listed as UNKNOWN. We neither train nor evaluate on such forms, although the model will still predict them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model factors and parameters", "sec_num": "7.2" }, { "text": "18 The frequency figure for each word form is based on counts in the Mannheim News corpus. We hide forms with frequency < 10. 19 We dropped their latent classes and regions as well as features that detected which characters were orthographic vowels. Also, we retained their \"target language model features\" only in the baseline \"U\" model, since elsewhere they implemented and manipulated all WFSMs using the OpenFST library (Allauzen et al., 2007) .", "cite_spans": [ { "start": 126, "end": 128, "text": "19", "ref_id": null }, { "start": 424, "end": 447, "text": "(Allauzen et al., 2007)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Model factors and parameters", "sec_num": "7.2" }, { "text": "We trained \u03b8 on the incompletely observed paradigms. As suggested in section 5, we used a variant of piecewise pseudolikelihood training (Sutton and McCallum, 2008) . 
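The objective, which the next paragraph spells out in detail, has roughly the following shape: a per-factor conditional log-likelihood, computed in both directions over the observed pairs, minus an L2 penalty on θ. The sketch below is only illustrative; `cond_logprob` is a hypothetical placeholder for the conditional probability computed from the tiny two-variable MRF, not a real function of the authors' system.

```python
def piecewise_objective(theta, observed_pairs, cond_logprob, reg=1.0):
    """Sketch of a piecewise objective: for each binary factor F linking forms
    U and V, add log p_UV(u_i | v_i) + log p_VU(v_i | u_i) over the observed
    (u_i, v_i) pairs for that factor, then subtract an L2 penalty on theta.
    (The paper drops the term that would predict the always-observed lemma;
    that detail is omitted here for simplicity.)"""
    total = 0.0
    for F, pairs in observed_pairs.items():            # pairs observed for factor F
        for u_i, v_i in pairs:
            total += cond_logprob(theta, F, u_i, v_i)  # log p_UV(u_i | v_i)
            total += cond_logprob(theta, F, v_i, u_i)  # log p_VU(v_i | u_i)
    return total - reg * sum(w * w for w in theta.values())
```

θ is then tuned by gradient-based optimization of this quantity, in line with the stochastic gradient training discussed in section 5.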
Suppose there is a binary factor F attached to forms U and V . For any value of \u03b8, we can define p U V (U | V ) from the tiny MRF consisting only of U , V , and F . We can therefore compute the goodness 20 summed over all observed (U, V ) pairs in training data. We attempted to tune \u03b8 to maximize the total L U V over all U, V pairs, 21 regularized by subtracting ||\u03b8|| 2 . Note that different factors thus enjoyed different amounts of observed training data, but training was fully supervised (except for the unobserved alignments between u i and v i ).", "cite_spans": [ { "start": 137, "end": 164, "text": "(Sutton and McCallum, 2008)", "ref_id": "BIBREF20" }, { "start": 370, "end": 372, "text": "20", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training in the experiments", "sec_num": "7.3" }, { "text": "L U V def = log p U V (u i | v i ) + log V U (v i | u i ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training in the experiments", "sec_num": "7.3" }, { "text": "At test time, we are given each lemma (e.g. brechen) and all its observed (frequent) inflected forms (e.g., brachen, bricht,. . . ), and are asked to predict the remaining (rarer) forms (e.g., breche, brichst, . . . ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference in the experiments", "sec_num": "7.4" }, { "text": "We run approximate joint inference using belief propagation. 22 We extract our output from the final beliefs: for each unseen variable V , we preseemed to hurt in our current training setup.", "cite_spans": [ { "start": 61, "end": 63, "text": "22", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference in the experiments", "sec_num": "7.4" }, { "text": "We followed Dreyer et al. (2008) in slightly pruning the space of possible alignments. We compensated by replacing their WFST, F , with the union F \u222a 10 \u221212 (0.999\u03a3 \u00d7 \u03a3) * . This ensured that the factor could still map any string to any other string (though perhaps with very low weight), guaranteeing that the intersection at the end of section 4.3 would be non-empty. 20 The second term is omitted if V is the lemma. We do not train the model to predict the lemma since it is always observed in test data.", "cite_spans": [ { "start": 12, "end": 32, "text": "Dreyer et al. (2008)", "ref_id": "BIBREF4" }, { "start": 370, "end": 372, "text": "20", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference in the experiments", "sec_num": "7.4" }, { "text": "21 Unfortunately, just before press time we discovered that this was not quite what we had done. A shortcut in our implementation trained pUV (U | V ) and pV U (V | U ) separately. This let them make different use of the (unobserved) alignments-so that even if each individually liked the pair (u, v), they might not have been able to agree on the same accepting path for it at test time. This could have slightly harmed our joint inference results, though not our baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference in the experiments", "sec_num": "7.4" }, { "text": "22 To derive the update order for message passing, we take an arbitrary spanning tree over the factor graph, and let O be a list of all factors and variables that is topologically sorted according to the spanning tree, with the leaves of the tree coming first. We then discard the spanning tree. 
A single iteration visits all factors and variables in order of O, updating each one's messages to later variables and factors, and then visits all factors and variables in reverse order, updating each one's messages to earlier variables and factors. dict its value to be argmax vpV (v). This prediction considers the values of all other unseen variables but sums over their possibilities. This is the Bayes-optimal decoder for our scoring function, since that function reports the fraction of individual forms that were predicted perfectly. 23", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference in the experiments", "sec_num": "7.4" }, { "text": "It is hard to know a priori what the causal relationships might be in a morphological paradigm. In principle, one would like to automatically choose which factors to have in the MRF. Or one could start with many factors, but use methods such as those suggested in section 5 to learn that certain less useful factors should be left weak to avoid confusing loopy BP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model selection of MRF topology", "sec_num": "7.5" }, { "text": "For our present experiments, we simply compared several fixed model topologies (Fig. 3) . These were variously unconnected (U), chain graphs (C1,. . . , C4), trees (T1, T2), or loopy graphs (L1,. . . , L4). We used several factor graphs that differ only by one or two added factors and compared the results. The graphs were designed by hand; they connect some forms with similar morphological properties more or less densely.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 87, "text": "(Fig. 3)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Model selection of MRF topology", "sec_num": "7.5" }, { "text": "We trained different models using the observed forms in the 9393 paradigms as training data. The first 100 paradigms were then used as development data for model selection: 24 we were given the answers to their hidden forms, enabling us to compare the models. The best model was then evaluated on the 9293 remaining paradigms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model selection of MRF topology", "sec_num": "7.5" }, { "text": "The models are compared on development data in Table 2 . Among the factor graphs we evaluated, we find that L4 (see Fig. 3 ) performs best overall (whole-word accuracy 82.1). Note that the unconnected graph U does not perform very well (69.0), but using factor graphs with more connecting factors generally helps overall accuracy (see C1-C3). Note, however, that in some cases the additional structure hurts: The chain model C4 and the loopy model L1 perform relatively badly. The 23 If we instead wished to maximize the fraction of entire paradigms that were predicted perfectly, then we would have approximated full MAP decoding over the paradigm (Viterbi decoding) by using max-product BP. Other loss functions (e.g., edit distance) would motivate other decoding methods. 24 Using these paradigms was simply a quick way to avoid model selection by cross-validation. If data were really as sparse as our training setup pretends (see Table 2 ), then 100 complete paradigms would be too valuable to squander as mere development data. Plural 2 3 1,3 2 1,3 2 2 1,3 1 2 3 1,3 2 1,3 2 2 1,3 1 2 3 1,3 2 1,3 2 2 1,3 (C1) (C2) (C3) 1 2 3 1,3 2 1,3 2 2 1,3 1 2 3 1,3 2 1,3 2 2 1,3 (C4) 1 2 3 1,3 2 1,3 2 2 1,3 Table 2. 
reason for such a performance degradation is that undertrained factors were used: The factors relating second-person to second-person forms, for example, are trained from only 8 available examples. Non-loopy models always converge (exactly) in one iteration (see footnote 22). But even our loopy models appeared to converge in accuracy within two iterations. Only L3 and L4 required the second iteration, which made tiny improvements.", "cite_spans": [ { "start": 775, "end": 777, "text": "24", "ref_id": null } ], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 2", "ref_id": null }, { "start": 116, "end": 122, "text": "Fig. 3", "ref_id": "FIGREF3" }, { "start": 935, "end": 942, "text": "Table 2", "ref_id": null }, { "start": 1034, "end": 1308, "text": "Plural 2 3 1,3 2 1,3 2 2 1,3 1 2 3 1,3 2 1,3 2 2 1,3 1 2 3 1,3 2 1,3 2 2 1,3 (C1) (C2) (C3) 1 2 3 1,3 2 1,3 2 2 1,3 1 2 3 1,3 2 1,3 2 2 1,3 (C4) 1 2 3 1,3 2 1,3 2 2", "ref_id": "TABREF0" }, { "start": 1313, "end": 1321, "text": "Table 2.", "ref_id": null } ], "eq_spans": [], "section": "Development data results", "sec_num": "7.6" }, { "text": "Based on the development results, we selected model L4 and tested on the remaining 9293 paradigms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test data results", "sec_num": "7.7" }, { "text": "We regard the unconnected model U as a baseline to improve upon. We also tried a rather different baseline as in (Dreyer et al., 2008) . We trained the machine translation toolkit Moses (Koehn et al., 2007) to translate groups of letters rather than groups of words (\"phrases\"). For each form f to be predicted, we trained a Moses model on all supervised form pairs (l, f ) available in the data, to learn a prediction for the form given the lemma l. The M,3 condition restricted Moses use \"phrases\" no longer than 3 letters, comparable to our own trigram-based factors (see section 7.2). M,15 could use up to 15 letters.", "cite_spans": [ { "start": 113, "end": 134, "text": "(Dreyer et al., 2008)", "ref_id": "BIBREF4" }, { "start": 186, "end": 206, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Test data results", "sec_num": "7.7" }, { "text": "Again, our novel L4 model far outperformed the others overall. Breaking the results down by form, we find that this advantage mainly comes from the 3 forms with the fewest observed training examples (Table 3 , first 3 rows). The M and U models are barely able to predict these forms at all from the lemma, but L4 can predict them bet-Unconn.", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 207, "text": "(Table 3", "ref_id": null } ], "eq_spans": [], "section": "Test data results", "sec_num": "7.7" }, { "text": "Trees Loops U C1 C2 C3 C4 T1 T2 L1 L2 L3 L4 69.0 72.9 73.4 74.8 65.2 78.1 78.7 62.3 79.6 78.9 82.1 Table 2 : Whole-word accuracies of the different models in reconstructing the missing forms in morphological paradigms, here on 100 verbs (development data). The names refer to the graphs in Fig. 3 . We selected L4 as final model (Table 3) Table 3 : Whole-word accuracies on the missing forms from 9293 test paradigms. The Moses baselines and our unconnected model (U) predict each form separately from the lemma, which is always observed. L4 uses all observations jointly, running belief propagation for decoding. Moses,15 memorizes phrases of length up to 15, all other models use max length 3. 
The table is sorted by the column \"# obs.\", which reports the numbers of observations for a given form.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 60, "text": "Loops U C1 C2 C3 C4 T1 T2 L1 L2 L3 L4 69.0", "ref_id": "TABREF0" }, { "start": 111, "end": 118, "text": "Table 2", "ref_id": null }, { "start": 302, "end": 308, "text": "Fig. 3", "ref_id": "FIGREF3" }, { "start": 341, "end": 350, "text": "(Table 3)", "ref_id": null }, { "start": 351, "end": 358, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Chains", "sec_num": null }, { "text": "ter by exploiting other observed or latent forms. By contrast, well-trained forms were already easy enough for the M and U models that L4 had little new to offer and in fact suffered from its approximate training and/or inference. Leaving aside the comparisons, it was useful to confirm that loopy BP could be used in this setting at all. 8014 of the 9293 test paradigms had \u2264 2 observed forms (in addition to the lemma) but \u2265 7 missing forms. One might have expected that loopy BP would have failed to converge, or converged to the wrong thing. Nonetheless, it achieved quite respectable success at exactly predicting various inflected forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chains", "sec_num": null }, { "text": "For the curious, Table 4 shows accuracies grouped by different categories of paradigms, where the category is determined by the number of missing forms to predict. Most paradigms fall in the category where 7 to 9 forms are missing, so the accuracies in that line are similar to the overall accuracies in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 4", "ref_id": null }, { "start": 304, "end": 311, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Chains", "sec_num": null }, { "text": "We have proposed that one can jointly model several multiple strings by using Markov Random Fields. We described this formally as an undi- Table 4 : Accuracy on test data, reported separately for paradigms in which 1-3, 4-6, or 7-9 forms are missing. Missing words have CELEX frequency count < 10; these are the ones to predict. (The numbers in col. 2 add up to 9256, not 9293, since some paradigms are incomplete in CELEX to begin with, with no forms to be removed or evaluated.)", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 146, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "rected graphical model with string-valued variables and whose factors (potential functions) are defined by weighted finite-state transducers. Each factor evaluates some subset of the strings. Approximate inference can be done by loopy belief propagation. The messages take the form of weighted finite-state acceptors, and are constructed by standard operations. We explained why the messages might become large, and gave methods for approximating them with smaller messages. We also discussed training methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "We presented some pilot experiments on the task of jointly predicting multiple missing verb forms in morphological paradigms. The factors were simplified versions of statistical finite-state models for supervised morphology. 
Our MRF for this task might be used not only to conjugate verbs (e.g., in MT), but to guide further learning of morphology-either active learning from a human or semi-supervised learning from the distributional properties of a raw text corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Our modeling approach is potentially applicable to a wide range of other tasks, including transliteration, phonology, cognate modeling, multiplesequence alignment and system combination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Our work ties into a broader vision of using algorithms like belief propagation to coordinate the work of several NLP models and algorithms. Each individual factor considers some portion of a joint problem, using classical statistical NLP methods (weighted grammars, transducers, dynamic programming). The factors coordinate their work by passing marginal probabilities. reported complementary work in this vein.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Each string is said to be on a different \"tape,\" which has its own \"read head,\" allowing the WFSM to maintain a separate position in each string. Thus, a path in a WFST may consume any number of characters from x before consuming the next character from y.2 Weighted acceptors and transducers are the cases k = 1 and k = 2, which are said to define rational languages and rational relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Notable exceptions are for chunking and tagging, for information extraction, for dependency parsing, andCromier\u00e8s and Kurohashi (2009) for alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Preferably when the beliefs converge to some fixed point (a local minimum of the Bethe free energy). However, convergence is not guaranteed.7 More generally, on all possible observed variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is an (improper) uniform distribution over \u03a3 * . Although is not a proper WFSA (see section 3.2), there is an upper bound on the weights it assigns to strings. That guarantees that all the messages and beliefs computed by (2)-(4) will be proper FSMs, provided that all the factors are proper WFSMs.9 If it is unary, (3) trivially reduces to \u00b5F \u2192U = F . 10 The usual implementation of projection does not change the topology of the WFST, but only deletes the U part of its arc labels. 
Thus, multiple paths that accept the same value of U remain distinct according to the distinct values of U that they were paired with before projection.11 If there is no deterministic equivalent(Mohri, 1997).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Bishop (2006), or consult for notation close to that of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Although we do normalize locally during piecewise training (see section 7.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "OpenFst: A general and efficient weighted finite-state transducer library", "authors": [ { "first": "Cyril", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Riley", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Schalkwyk", "suffix": "" } ], "year": 2007, "venue": "Proc. of CIAA", "volume": "4783", "issue": "", "pages": "11--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wo- jciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proc. of CIAA, volume 4783 of Lecture Notes in Computer Science, pages 11-23.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pattern Recognition and Machine Learning", "authors": [ { "first": "Christopher", "middle": [ "M" ], "last": "Bishop", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Improved reconstruction of protolanguage word forms", "authors": [ { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proc. of HLT-NAACL", "volume": "", "issue": "", "pages": "65--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, Thomas L. Griffiths, and Dan Klein. 2009. Improved reconstruction of pro- tolanguage word forms. In Proc. of HLT-NAACL, pages 65-73, Boulder, Colorado, June. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An alignment algorithm using belief propagation and a structure-based distortion model", "authors": [ { "first": "Fabien", "middle": [], "last": "Cromier\u00e8s", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2009, "venue": "Proc. of EACL", "volume": "", "issue": "", "pages": "166--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabien Cromier\u00e8s and Sadao Kurohashi. 2009. An alignment algorithm using belief propagation and a structure-based distortion model. In Proc. of EACL, pages 166-174, Athens, Greece, March. 
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Latent-variable modeling of string transductions with finite-state methods", "authors": [ { "first": "Markus", "middle": [], "last": "Dreyer", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2008, "venue": "Proc. of EMNLP, Honolulu", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markus Dreyer, Jason Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proc. of EMNLP, Hon- olulu, Hawaii, October.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parameter estimation for probabilistic finite-state transducers", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2002. Parameter estimation for prob- abilistic finite-state transducers. In Proc. of ACL, pages 1-8, Philadelphia, July.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The cascade-correlation learning architecture", "authors": [ { "first": "E", "middle": [], "last": "Scott", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Fahlman", "suffix": "" }, { "first": "", "middle": [], "last": "Lebiere", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott E. Fahlman and Christian Lebiere. 1990. The cascade-correlation learning architecture. Technical Report CMU-CS-90-100, School of Computer Sci- ence, Carnegie Mellon University.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A note on join and auto-intersection of n-ary rational relations", "authors": [ { "first": "Andr\u00e9", "middle": [], "last": "Kempe", "suffix": "" }, { "first": "Jean-Marc", "middle": [], "last": "Champarnaud", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Eindhoven FASTAR Days", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 Kempe, Jean-Marc Champarnaud, and Jason Eisner. 2004. A note on join and auto-intersection of n-ary rational relations. In Loek Cleophas and Bruce Watson, editors, Proceedings of the Eind- hoven FASTAR Days (Computer Science Techni- cal Report 04-40). Department of Mathematics and Computer Science, Technische Universiteit Eind- hoven, Netherlands.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Machine transliteration", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1997, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "128--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Jonathan Graehl. 1997. Machine transliteration. In Proc. 
of ACL, pages 128-135.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proc. of ACL, Companion Volume", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL, Companion Volume, pages 177-180, Prague, Czech Republic, June. Association for Com- putational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Structured learning with approximate inference", "authors": [ { "first": "Alex", "middle": [], "last": "Kulesza", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2008, "venue": "Proc. of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Kulesza and Fernando Pereira. 2008. Structured learning with approximate inference. In Proc. of NIPS.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "First-and secondorder expectation semirings with applications to minimum-risk training on translation forests", "authors": [ { "first": "Zhifei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2009, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhifei Li and Jason Eisner. 2009. First-and second- order expectation semirings with applications to minimum-risk training on translation forests. In Proc. of EMNLP.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Variational decoding for statistical machine translation", "authors": [ { "first": "Zhifei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2009, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhifei Li, Jason Eisner, and Sanjeev Khudanpur. 2009. Variational decoding for statistical machine transla- tion. In Proc. 
of ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Finite-state transducers in language and speech processing", "authors": [ { "first": "Mehryar", "middle": [], "last": "Mohri", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mehryar Mohri. 1997. Finite-state transducers in lan- guage and speech processing. Computational Lin- guistics, 23(2).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Speech recognition by composition of weighted finite automata", "authors": [ { "first": "C", "middle": [ "N" ], "last": "Fernando", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "", "middle": [], "last": "Riley", "suffix": "" } ], "year": 1997, "venue": "Finite-State Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando C. N. Pereira and Michael Riley. 1997. Speech recognition by composition of weighted fi- nite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing. MIT Press, Cambridge, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning string edit distance", "authors": [ { "first": "Eric", "middle": [ "Sven" ], "last": "Ristad", "suffix": "" }, { "first": "Peter", "middle": [ "N" ], "last": "Yianilos", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Sven Ristad and Peter N. Yianilos. 1996. Learn- ing string edit distance. Technical Report CS-TR- 532-96, Princeton University, Department of Com- puter Science, October.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Dependency parsing by belief propagation", "authors": [ { "first": "David", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2008, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In Proc. of EMNLP.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Logarithmic opinion pools for conditional random fields", "authors": [ { "first": "Andrew", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2005, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "18--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Smith, Trevor Cohn, and Miles Osborne. 2005. Logarithmic opinion pools for conditional random fields. In Proc. of ACL, pages 18-25, June.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Nonparametric belief propagation", "authors": [ { "first": "Erik", "middle": [ "B" ], "last": "Sudderth", "suffix": "" }, { "first": "Alexander", "middle": [ "T" ], "last": "Ihler", "suffix": "" }, { "first": "Er", "middle": [ "T" ], "last": "Ihler", "suffix": "" }, { "first": "William", "middle": [ "T" ], "last": "Freeman", "suffix": "" }, { "first": "Alan", "middle": [ "S" ], "last": "Willsky", "suffix": "" } ], "year": 2002, "venue": "Proc. of CVPR", "volume": "", "issue": "", "pages": "605--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik B. Sudderth, Alexander T. Ihler, Er T. Ihler, William T. 
Freeman, and Alan S. Willsky. 2002. Nonparametric belief propagation. In Proc. of CVPR, pages 605-612.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Collective segmentation and labeling of distant entities in information extraction", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton and Andrew McCallum. 2004. Collec- tive segmentation and labeling of distant entities in information extraction. In ICML Workshop on Sta- tistical Relational Learning and Its Connections to Other Fields.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Piecewise training for structured prediction. Machine Learning", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton and Andrew McCallum. 2008. Piece- wise training for structured prediction. Machine Learning. In submission.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Khashayar", "middle": [], "last": "Rohanimanesh", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton, Khashayar Rohanimanesh, and An- drew McCallum. 2004. Dynamic conditional ran- dom fields: Factorized probabilistic models for la- beling and segmenting sequence data. In Proc. of ICML.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Illustration of messages being passed from variable to factor and factor to variable. Each message is represented by a finite-state acceptor.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "The graphs that we evaluate on development data. The nodes represent morphological forms, e.g. the first node in the left of each graph represents the first person singular present. Each variable is also connected to the lemma (not shown). See results in", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "num": null, "content": "