{
"paper_id": "D09-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:38:33.089701Z"
},
"title": "Active Learning by Labeling Features",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Druck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": {
"postCode": "01003",
"region": "MA"
}
},
"email": "gdruck@cs.umass.edu"
},
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wisconsin Madison",
"location": {
"postCode": "53706",
"region": "WI"
}
},
"email": "bsettles@cs.wisc.edu"
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": {
"postCode": "01003",
"region": "MA"
}
},
"email": "mccallum@cs.umass.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Methods that learn from prior information about input features such as generalized expectation (GE) have been used to train accurate models with very little effort. In this paper, we propose an active learning approach in which the machine solicits \"labels\" on features rather than instances. In both simulated and real user experiments on two sequence labeling tasks we show that our active learning method outperforms passive learning with features as well as traditional active learning with instances. Preliminary experiments suggest that novel interfaces which intelligently solicit labels on multiple features facilitate more efficient annotation.",
"pdf_parse": {
"paper_id": "D09-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "Methods that learn from prior information about input features such as generalized expectation (GE) have been used to train accurate models with very little effort. In this paper, we propose an active learning approach in which the machine solicits \"labels\" on features rather than instances. In both simulated and real user experiments on two sequence labeling tasks we show that our active learning method outperforms passive learning with features as well as traditional active learning with instances. Preliminary experiments suggest that novel interfaces which intelligently solicit labels on multiple features facilitate more efficient annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The application of machine learning to new problems is slowed by the need for labeled training data. When output variables are structured, annotation can be particularly difficult and time-consuming. For example, when training a conditional random field (Lafferty et al., 2001) to extract fields such as rent, contact, features, and utilities from apartment classifieds, labeling 22 instances (2,540 tokens) provides only 66.1% accuracy. 1 Recent work has used unlabeled data and limited prior information about input features to bootstrap accurate structured output models. For example, both Haghighi and Klein (2006) and have demonstrated results better than 66.1% on the apartments task described above using only a list of 33 highly discriminative features and the labels they indicate. However, these methods have only been applied in scenarios in which the user supplies such prior knowledge before learning begins.",
"cite_spans": [
{
"start": 253,
"end": 276,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF8"
},
{
"start": 437,
"end": 438,
"text": "1",
"ref_id": null
},
{
"start": 592,
"end": 617,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In traditional active learning (Settles, 2009) , the machine queries the user for only the labels of instances that would be most helpful to the machine. This paper proposes an active learning approach in which the user provides \"labels\" for input features, rather than instances. A labeled input feature denotes that a particular input feature, for example the word call, is highly indicative of a particular label, such as contact. Table 1 provides an excerpt of a feature active learning session.",
"cite_spans": [
{
"start": 31,
"end": 46,
"text": "(Settles, 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 434,
"end": 441,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we advocate using generalized expectation (GE) criteria for learning with labeled features. We provide an alternate treatment of the GE objective function used by and a novel speedup to the gradient computation. We then provide a pool-based feature active learning algorithm that includes an option to skip queries, for cases in which a feature has no clear label. We propose and evaluate feature query selection algorithms that aim to reduce model uncertainty, and compare to several baselines. We evaluate our method using both real and simulated user experiments on two sequence labeling tasks. Compared to previous approaches (Raghavan and Allan, 2007) , our method can be used for both classification and structured tasks, and the feature query selection methods we propose perform better.",
"cite_spans": [
{
"start": 645,
"end": 671,
"text": "(Raghavan and Allan, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use experiments with simulated labelers on real data to extensively compare feature query selection algorithms and evaluate on multiple random splits. To make these simulations more realistic, the effort required to perform different labeling actions is estimated from additional experiments with real users. The results show that active learning with features outperforms both passive learning with features and traditional active learning with instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the user experiments, each annotator actively labels instances, actively labels features one at a time, and actively labels batches of features organized using a \"grid\" interface. The results support the findings of the simulated experiments and provide evidence that the \"grid\" interface can facilitate more efficient annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we describe the underlying probabilistic model for all methods in this paper. We focus on sequence labeling, though the described methods could be applied to other structured output or classification tasks. We model the probability of the label sequence y \u2208 Y^n conditioned on the input sequence x \u2208 X^n, p(y|x; \u03b8), using first-order linear-chain conditional random fields (CRFs) (Lafferty et al., 2001 ). This probability is",
"cite_spans": [
{
"start": 396,
"end": 418,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "p(y|x; \\theta) = \\frac{1}{Z_x} \\exp\\left( \\sum_i \\sum_j \\theta_j f_j(y_i, y_{i+1}, x, i) \\right),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "where Z_x is the partition function and feature functions f_j consider the entire input sequence and at most two consecutive output variables. The most probable output sequence and transition marginal distributions can be computed using variants of Viterbi and forward-backward. Provided a training data distribution \\tilde{p}, we estimate CRF parameters by maximizing the conditional log likelihood of the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "L(\\theta) = E_{\\tilde{p}(x,y)}[\\log p(y|x; \\theta)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "We use numerical optimization to maximize L(\u03b8), which requires the gradient of L(\u03b8) with respect to the parameters. It can be shown that the partial derivative with respect to parameter j is equal to the difference between the empirical expectation of F j and the model expectation of F j , where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "F_j(y, x) = \\sum_i f_j(y_i, y_{i+1}, x, i). \\quad \\frac{\\partial}{\\partial\\theta_j} L(\\theta) = E_{\\tilde{p}(x,y)}[F_j(y, x)] - E_{\\tilde{p}(x)}[E_{p(y|x;\\theta)}[F_j(y, x)]].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "We also include a zero-mean Gaussian prior with variance \\sigma^2 = 10 on the parameters in all experiments. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "The training set may contain partially labeled sequences. Let z denote missing labels. We estimate parameters with this data by maximizing the marginal log-likelihood of the observed labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with missing labels",
"sec_num": "2.1"
},
{
"text": "L_{MML}(\\theta) = E_{\\tilde{p}(x,y)}[\\log \\sum_z p(y, z|x; \\theta)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with missing labels",
"sec_num": "2.1"
},
{
"text": "We refer to this training method as maximum marginal likelihood (MML); it has also been explored by Quattoni et al. (2007) . The gradient of L_{MML}(\\theta) can also be written as the difference of two expectations. The first is an expectation over the empirical distribution of x and y, and the model distribution of z. The second is a double expectation over the empirical distribution of x and the model distribution of y and z.",
"cite_spans": [
{
"start": 100,
"end": 122,
"text": "Quattoni et al. (2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with missing labels",
"sec_num": "2.1"
},
{
"text": "\\frac{\\partial}{\\partial\\theta_j} L_{MML}(\\theta) = E_{\\tilde{p}(x,y)}[E_{p(z|y,x;\\theta)}[F_j(y, z, x)]] - E_{\\tilde{p}(x)}[E_{p(y,z|x;\\theta)}[F_j(y, z, x)]].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with missing labels",
"sec_num": "2.1"
},
{
"text": "We train models using L_{MML}(\\theta) with expected gradient (Salakhutdinov et al., 2003) . To additionally leverage unlabeled data, we compare with entropy regularization (ER). ER adds a term to the objective function that encourages confident predictions on unlabeled data. Training of linear-chain CRFs with ER is described by Jiao et al. (2006) .",
"cite_spans": [
{
"start": 57,
"end": 85,
"text": "(Salakhutdinov et al., 2003)",
"ref_id": "BIBREF14"
},
{
"start": 326,
"end": 344,
"text": "Jiao et al. (2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with missing labels",
"sec_num": "2.1"
},
{
"text": "In this section, we give a brief overview of generalized expectation criteria (GE) (Mann and McCallum, 2008; Druck et al., 2008) and explain how we can use GE to learn CRF parameters with estimates of feature expectations and unlabeled data. GE criteria are terms in a parameter estimation objective function that express preferences on the value of a model expectation of some function. Given a score function S, an empirical distribution \\tilde{p}(x), a model distribution p(y|x; \\theta), and a constraint function G_k(x, y), the value of a GE criterion is",
"cite_spans": [
{
"start": 83,
"end": 109,
"text": "(Mann and Mc-Callum, 2008;",
"ref_id": null
},
{
"start": 110,
"end": 129,
"text": "Druck et al., 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "G(\\theta) = S(E_{\\tilde{p}(x)}[E_{p(y|x;\\theta)}[G_k(x, y)]]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "GE provides a flexible framework for parameter estimation because each of these elements can take an arbitrary form. The most important difference between GE and other parameter estimation methods is that it does not require a one-to-one correspondence between constraint functions G k and model feature functions. We leverage this flexibility to estimate parameters of feature-rich CRFs with a very small set of expectation constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "Constraint functions G_k can be normalized so that the sum of the expectations of a set of functions is 1. In this case, S may measure the divergence between the expectation of the constraint function and a target expectation \\hat{G}_k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "G(\\theta) = \\hat{G}_k \\log(E[G_k(x, y)]),",
"eq_num": "(1)"
}
],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "E[G_k(x, y)] = E_{\\tilde{p}(x)}[E_{p(y|x;\\theta)}[G_k(x, y)]].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "It can be shown that the partial derivative of G(\u03b8) with respect to parameter j is proportional to the predicted covariance between the model feature function F j and the constraint function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "G_k. 3 \\frac{\\partial}{\\partial\\theta_j} G(\\theta) = \\frac{\\hat{G}_k}{E[G_k(x, y)]} \\times \\left( E_{\\tilde{p}(x)}\\left[ E_{p(y|x;\\theta)}[F_j(x, y)G_k(x, y)] - E_{p(y|x;\\theta)}[F_j(x, y)] E_{p(y|x;\\theta)}[G_k(x, y)] \\right] \\right) \\quad (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "The partial derivative shows that GE learns parameter values for model feature functions based on their predicted covariance with the constraint functions. GE can thus be interpreted as a bootstrapping method that uses the limited training signal to learn about parameters for related model feature functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "3.1 Learning with feature-label distributions apply GE to a linear-chain, first-order CRF. In this section we provide an alternate treatment that arrives at the same objective function from the general form described in the previous section. Often, feature functions in a first-order linear-chain CRF f are binary, and are the conjunction 3 If we use squared error for S, the partial derivative is the covariance multiplied by",
"cite_spans": [
{
"start": 337,
"end": 338,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "2(\\hat{G}_k - E[G_k(x, y)]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "of an observational test q(x, i) and a label pair test",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "1_{\\{y_i = y', y_{i+1} = y''\\}}. 4 \\quad f(y_i, y_{i+1}, x, i) = 1_{\\{y_i = y', y_{i+1} = y''\\}} q(x, i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "The constraint functions G k we use here decompose and operate similarly, except that they only include a test for a single label. Single label constraints are easier for users to estimate and make GE training more efficient. Label transition structure can be learned automatically from single label constraints through the covariance-based parameter update of Equation 2. For convenience, we can write G yk to denote the constraint function that combines observation test k with a test for label y. We also add a normalization constant",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "C_k = E_{\\tilde{p}(x)}[\\sum_i q_k(x, i)], \\quad G_{yk}(x, y) = \\sum_i \\frac{1}{C_k} 1_{\\{y_i = y\\}} q_k(x, i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "Under this construction the expectation of G_{yk} is the predicted conditional probability that the label at some arbitrary position i is y when the observational test at i succeeds, p(y_i = y | q_k(x, i) = 1; \\theta).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "If we have a set of constraint functions {G_{yk} : y \u2208 Y}, and we use the score function in Equation 1, then the GE objective function specifies the minimization of the KL divergence between the model and target distributions over labels conditioned on the success of the observational test. In general the objective function will consist of many such KL divergence penalties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "Computing the first term of the covariance in Equation 2 requires a marginal distribution over three labels, two of which will be consecutive, but the other of which could appear anywhere in the sequence. We can compute this marginal using the algorithm of . As previously described, this algorithm is O(n|Y|^3) for a sequence of length n. However, we make the following novel observation: we do not need to compute the extra lattices for feature-label pairs with \\hat{G}_{yk} = 0, since this makes Equation 2 equal to zero. In , probabilities were smoothed so that \u2200y, \\hat{G}_{yk} > 0. If we assume that only a small number of labels m have non-zero probability, then the time complexity of the gradient computation is O(nm|Y|^2). In this paper typically 1 \u2264 m \u2264 4, while |Y| is 11 or 13.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "In experiments in this paper, using this optimization does not significantly affect final accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "We use numerical optimization to estimate model parameters. In general GE objective functions are not convex. Consequently, we initialize 0th-order CRF parameters using a sliding window logistic regression model trained with GE. We also include a Gaussian prior on parameters with \u03c3 2 = 10 in the objective function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Expectation Criteria",
"sec_num": "3"
},
{
"text": "The training procedure described above requires a set of observational tests or input features with target distributions over labels. Estimating a distribution could be a difficult task for an annotator. Consequently, we abstract away from specifying a distribution by allowing the user to assign labels to features (c.f. Haghighi and Klein (2006) , Druck et al. (2008) ). For example, we say that the word feature call has label contact. A label for a feature simply indicates that the feature is a good indicator of the label. Note that features can have multiple labels, as does the feature included in the active learning session shown in Table 1 . We convert an input feature with a set of labels L into a distribution by assigning probability 1/|L| for each l \u2208 L and probability 0 for each l \u2209 L. By assigning 0 probability to labels l \u2209 L, we can use the speed-up described in the previous section.",
"cite_spans": [
{
"start": 322,
"end": 347,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF6"
},
{
"start": 350,
"end": 369,
"text": "Druck et al. (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 631,
"end": 638,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Learning with labeled features",
"sec_num": "3.2"
},
{
"text": "Other proposed learning methods use labeled features to label unlabeled data. The resulting partially-labeled corpus can be used to train a CRF by maximizing MML. Similarly, prototype-driven learning (PDL) (Haghighi and Klein, 2006) optimizes the joint marginal likelihood of data labeled with prototype input features for each label. Additional features that indicate similarity to the prototypes help the model to generalize. In a previous comparison between GE and PDL , GE outperformed PDL without the extra similarity features, whose construction may be problem-specific. GE also performed better when supplied accurate label distributions.",
"cite_spans": [
{
"start": 206,
"end": 232,
"text": "(Haghighi and Klein, 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.3"
},
{
"text": "Additionally, both MML and PDL do not naturally generalize to learning with features that have multiple labels or distributions over labels, as in these scenarios labeling the unlabeled data is not straightforward. In this paper, we attempt to address this problem using a simple heuristic: when there are multiple choices for a token's label, sample a label. In Section 5 we use this heuristic with MML, but in general obtain poor results. Raghavan and Allan (2007) also propose several methods for learning with labeled features, but in a previous comparison GE gave better results (Druck et al., 2008) . Additionally, the generalization of these methods to structured output spaces is not straightforward. Chang et al. (2007) present an algorithm for learning with constraints, but this method requires users to set weights by hand. We plan to explore the use of the recently developed related methods of Bellare et al. (2009) , Gra\u00e7a et al. (2008) , and Liang et al. (2009) in future work. Druck et al. (2008) provide a survey of other related methods for learning with labeled input features.",
"cite_spans": [
{
"start": 442,
"end": 467,
"text": "Raghavan and Allan (2007)",
"ref_id": "BIBREF13"
},
{
"start": 585,
"end": 605,
"text": "(Druck et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 710,
"end": 729,
"text": "Chang et al. (2007)",
"ref_id": "BIBREF2"
},
{
"start": 909,
"end": 930,
"text": "Bellare et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 933,
"end": 952,
"text": "Gra\u00e7a et al. (2008)",
"ref_id": "BIBREF4"
},
{
"start": 959,
"end": 978,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 995,
"end": 1014,
"text": "Druck et al. (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.3"
},
{
"text": "Feature active learning, presented in Algorithm 1, is a pool-based active learning algorithm (Lewis and Gale, 1994) (with a pool of features rather than instances). The novel components of the algorithm are an option to skip a query and the notion that skipping and labeling have different costs. The option to skip is important when using feature queries because a user may not know how to label some features. In each iteration the model is retrained using the train procedure, which takes as input a set of labeled features C and unlabeled data distributionp. For the reasons described in Section 3.3, we advocate using GE for the train procedure. Then, while the iteration cost c is less than the maximum cost c max , the feature query q that maximizes the query selection metric \u03c6 is selected. The accept function determines whether the labeler will label q. If q is labeled, it is added to the set of labeled features C, and the label cost c label is added to c. Otherwise, the skip cost c skip is added to c. This process continues for N iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning by Labeling Features",
"sec_num": "4"
},
{
"text": "In this section we propose feature query selection methods \u03c6. Queries with higher scores are considered better candidates. Note again that by features we mean observational tests q_k(x, i). It is also important to note that these are not feature selection methods, since we are determining the features for which supervisory feedback will be most helpful to the model, rather than determining which features will be part of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature query selection methods",
"sec_num": "4.1"
},
{
"text": "Input: empirical distribution \\tilde{p}, initial feature constraints C, label cost c_label, skip cost c_skip, max cost per iteration c_max, max iterations N\nOutput: model parameters \u03b8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "for i = 1 to N do\n  \u03b8 = train(p, C)\n  c = 0\n  while c < c_max do\n    q = argmax_{q_k} \u03c6(q_k)\n    if accept(q) then\n      C = C \u222a label(q)\n      c = c + c_label\n    else\n      c = c + c_skip\n    end if\n  end while\nend for\n\u03b8 = train(p, C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "We propose to select queries that provide the largest reduction in model uncertainty. We denote possible responses to a query q_k as \u011d. The Expected Information Gain (EIG) of a query is the expectation of the reduction in model uncertainty over all possible responses. Mathematically, the EIG is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{EIG}(q_k) = E_{p(\\hat{g}|q_k;\\theta)}[E_{\\tilde{p}(x)}[H(p(y|x; \\theta)) - H(p(y|x; \\theta_{\\hat{g}}))]],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "where \u03b8\u011d are the new model parameters if the response to q k is\u011d. Unfortunately, this method is computationally intractable. Re-estimating \u03b8\u011d will typically involve retraining the model, and doing this for each possible query-response pair is prohibitively expensive for structured output models. Computing the expectation over possible responses is also difficult, as in this paper users may provide a set of labels for a query, and more generally\u011d could be a distribution over labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "Instead, we propose a tractable strategy for reducing model uncertainty, motivated by traditional uncertainty sampling (Lewis and Gale, 1994). We assume that when a user responds to a query, the reduction in uncertainty will be equal to the Total Uncertainty (TU), the sum of the marginal entropies at the positions where the feature occurs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{TU}(q_k) = \\sum_i \\sum_j q_k(x_i, j) H(p(y_j|x_i; \\theta))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "Total uncertainty, however, is highly biased towards selecting frequent features. A mean uncertainty variant, normalized by the feature's count, would tend to choose very infrequent features. Consequently we propose a tradeoff between the two extremes, called weighted uncertainty (WU), that scales the mean uncertainty by the log count of the feature in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{WU}(q_k) = \\log(C_k) \\frac{\\phi_{TU}(q_k)}{C_k}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "Finally, we also suggest an uncertainty-based metric called diverse uncertainty (DU) that encourages diversity among queries by multiplying TU by the mean dissimilarity between the feature and previously labeled features. For sequence labeling tasks, we can measure the relatedness of features using distributional similarity. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{DU}(q_k) = \\phi_{TU}(q_k) \\frac{1}{|C|} \\sum_{j \\in C} (1 - sim(q_k, q_j))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "We contrast the notion of uncertainty described above with another type of uncertainty: the entropy of the predicted label distribution for the feature, or expectation uncertainty (EU). As above we also multiply by the log feature count.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{EU}(q_k) = \\log(C_k) H(p(y_i = y | q_k(x, i) = 1; \\theta))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "EU is flawed because it will have a large value for non-discriminative features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "The methods described above require the model to be retrained between iterations. To verify that this is necessary, we compare against query selection methods that only consider the previously labeled features. First, we consider a feature query selection method called coverage (cov) that aims to select features that are dissimilar from existing labeled features, increasing the labeled features' \"coverage\" of the feature space. In order to compensate for choosing very infrequent features, we multiply by the log count of the feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{cov}(q_k) = \\log(C_k) \\frac{1}{|C|} \\sum_{j \\in C} (1 - sim(q_k, q_j))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "Motivated by the feature query selection method of Tandem Learning (Raghavan and Allan, 2007) (see Section 4.2 for further discussion), we consider a feature selection metric similarity (sim) that is the maximum similarity to a labeled feature, weighted by the log count of the feature.",
"cite_spans": [
{
"start": 67,
"end": 93,
"text": "(Raghavan and Allan, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{sim}(q_k) = \\log(C_k) \\max_{j \\in C} sim(q_k, q_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "Features similar to those already labeled are likely to be discriminative, and therefore likely to be labeled (rather than skipped). However, insufficient diversity may also result in an inaccurate model, suggesting that coverage should select more useful queries than similarity. Finally, we compare with several passive baselines. Random (rand) assigns scores to features randomly. Frequency (freq) scores input features using their frequency in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{\\mathrm{freq}}(q_k) = \\sum_i \\sum_j q_k(x_i, j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
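{
"text": "The frequency score simply counts occurrences of the feature across all positions in the training data; a sketch (our own names; q plays the role of the indicator function q_k(x_i, j)):

```python
def frequency_score(k, instances, q):
    # phi_freq: total occurrences of feature k over all instances and positions
    return sum(q(k, x, j) for x in instances for j in range(len(x)))
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},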
{
"text": "Top LDA (LDA) selects the top words from 50 topics learned from the unlabeled data using latent Dirichlet allocation (LDA) (Blei et al., 2003) . More specifically, the words w generated by each topic t are ranked using the conditional probability p(w|t). The word feature is assigned its maximum rank across all topics.",
"cite_spans": [
{
"start": 123,
"end": 142,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "\\phi_{\\mathrm{LDA}}(q_k) = \\max_t \\mathrm{rank}_{\\mathrm{LDA}}(q_k, t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
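{
"text": "One plausible reading of the LDA score (a sketch; we assume each topic is a mapping from words to p(w|t), and invert rank positions so that the most probable word in a topic receives the largest score):

```python
def lda_score(word, topics):
    # phi_LDA: best (inverted) rank of the word across all topics
    best = 0
    for topic in topics:
        if word not in topic:
            continue
        # rank words within the topic by p(w|t), most probable first
        ranked = sorted(topic, key=topic.get, reverse=True)
        best = max(best, len(ranked) - ranked.index(word))
    return best
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},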
{
"text": "This method will select useful features if the topics discovered are relevant to the task. A similar heuristic was used by Druck et al. (2008) .",
"cite_spans": [
{
"start": 123,
"end": 142,
"text": "Druck et al. (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Feature Active Learning",
"sec_num": null
},
{
"text": "Tandem Learning (Raghavan and Allan, 2007) is an algorithm that combines feature and instance active learning for classification. The algorithm iteratively queries the user first for instance labels, then for feature labels. Feature queries are selected according to their co-occurrence with important model features and previously labeled features. As noted in Section 3.3, GE is preferable to the methods Tandem Learning uses to learn with labeled features. We address the mixing of feature and instance queries in Section 4.3. In order to better understand differences in feature query selection methodology, we proposed a feature query selection method motivated by the method used in Tandem Learning in Section 4.1. However, this method performs poorly in the experiments in Section 5. Liang et al. (2009) simultaneously developed a method for learning with and actively selecting measurements, or target expectations with associated noise. The measurement selection method proposed by Liang et al. (2009) is based on Bayesian experimental design and is similar to the expected information gain method described above. Consequently, this method is likely to be intractable for real applications. Note that Liang et al. (2009) only use this method in synthetic experiments, and instead use a method similar to total uncertainty for experiments in part-of-speech tagging. Unlike the experiments presented in this paper, Liang et al. (2009) conduct only simulated active learning experiments and do not consider skipping queries.",
"cite_spans": [
{
"start": 16,
"end": 42,
"text": "(Raghavan and Allan, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 793,
"end": 812,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 993,
"end": 1012,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 1212,
"end": 1231,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 1424,
"end": 1443,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4.2"
},
{
"text": "Sindhwani (Sindhwani et al., 2009) simultaneously developed an active learning method that queries for both instance and feature labels that are then used in a graph-based learning algorithm. They find that querying certain features outperforms querying uncertain features, but this is likely because their query selection method is similar to the expectation uncertainty method described above, and consequently non-discriminative features may be queried often (see also the discussion in Section 4.1). It is also not clear how this graph-based training method would generalize to structured output spaces.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "(Sindhwani et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4.2"
},
{
"text": "Throughout this paper, we have focused on labeling input features. However, the proposed methods generalize to queries for expectation estimates of arbitrary functions, for example queries for the label distributions for input features, labels for instances (using a function that is non-zero only for a particular instance), partial labels for instances, and class priors. The uncertainty-based query selection methods described in Section 4.1 apply naturally to these new query types. Importantly, this framework would allow principled mixing of different query types, instead of alternating between them as in Tandem Learning (Raghavan and Allan, 2007) . When mixing queries, it will be important to use different costs for different annotation types (Vijayanarasimhan and Grauman, 2008) , and estimate the probability of obtaining a useful response to a query. We plan to pursue these directions in future work. This idea was also proposed by Liang et al. (2009) , but no experiments with mixed active learning were presented.",
"cite_spans": [
{
"start": 629,
"end": 655,
"text": "(Raghavan and Allan, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 754,
"end": 790,
"text": "(Vijayanarasimhan and Grauman, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 947,
"end": 966,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Constraint Active Learning",
"sec_num": "4.3"
},
{
"text": "In this section we experiment with an automated oracle labeler. When presented with an instance query, the oracle simply provides the true labels. When presented with a feature query, the oracle first decides whether to skip the query. We have found that users are more likely to label features that are relevant for only a few labels. Therefore, the oracle labels a feature if the entropy of its per-occurrence label expectation satisfies H(p(y_i = y | q_k(x, i) = 1; \u03b8)) \u2264 0.7. The oracle then labels the feature using a heuristic: label the feature with the label whose expectation is highest, as well as any label whose expectation is at least half as large.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated User Experiments",
"sec_num": "5"
},
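{
"text": "The oracle's skip-and-label heuristic can be sketched as follows (a minimal illustration with our own function name; expectation maps each label to the feature's per-occurrence label expectation):

```python
import math

def oracle_label(expectation, threshold=0.7):
    # skip the query if the label entropy exceeds the threshold
    entropy = -sum(p * math.log(p) for p in expectation.values() if p > 0)
    if entropy > threshold:
        return None  # skip
    # label with the most likely label, plus any label at least half as likely
    top = max(expectation.values())
    return [y for y, p in expectation.items() if p >= top / 2.0]
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated User Experiments",
"sec_num": "5"
},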
{
"text": "We estimate the effort of different labeling actions with preliminary experiments in which we observe users labeling data for ten minutes. Users took an average of 4 seconds to label a feature, 2 seconds to skip a feature, and 0.7 seconds to label a token. We set up experiments such that each iteration simulates one minute of labeling by setting c_max = 60, c_skip = 2, and c_label = 4. For instance active learning, we use Algorithm 1 but without the skip option, and set c_label = 0.7. We use N = 10 iterations, so the entire experiment simulates 10 minutes of annotation time. For efficiency, we consider the 500 most frequent unlabeled features in each iteration. To start, ten randomly selected seed labeled features are provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated User Experiments",
"sec_num": "5"
},
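{
"text": "The per-iteration time budget can be simulated with a simple accounting loop (a sketch under the stated costs; the query ordering and oracle interface are assumptions of this illustration):

```python
def run_iteration(queries, oracle, c_max=60, c_skip=2, c_label=4):
    # spend up to c_max seconds; a labeled query costs c_label, a skip costs c_skip
    spent, labeled = 0, []
    for q in queries:
        response = oracle(q)
        cost = c_skip if response is None else c_label
        if spent + cost > c_max:
            break
        spent += cost
        if response is not None:
            labeled.append((q, response))
    return labeled
```

With c_max = 60, one call simulates a minute of annotation; ten iterations simulate the full ten-minute experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated User Experiments",
"sec_num": "5"
},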
{
"text": "We use random (rand) selection, uncertainty sampling (US) (using sequence entropy, normalized by sequence length) and information density (ID) (Settles and Craven, 2008) to select instance queries. We use Entropy Regularization (ER) (Jiao et al., 2006) to leverage unlabeled instances. 7 We weight the ER term by choosing the best weight in {10^{-3}, 10^{-2}, 10^{-1}, 1, 10} multiplied by #labeled/#unlabeled for each data set and query selection method. Seed instances are provided such that the simulated labeling time is equivalent to labeling 10 features.",
"cite_spans": [
{
"start": 143,
"end": 169,
"text": "(Settles and Craven, 2008)",
"ref_id": "BIBREF15"
},
{
"start": 233,
"end": 252,
"text": "(Jiao et al., 2006)",
"ref_id": "BIBREF7"
},
{
"start": 286,
"end": 287,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated User Experiments",
"sec_num": "5"
},
{
"text": "We evaluate on two sequence labeling tasks. The apartments task involves segmenting 300 apartment classified ads into 11 fields including features, rent, neighborhood, and contact. We use the same feature processing as Haghighi and Klein (2006) , with the addition of context features in a window of \u00b13. The cora references task is to extract 13 BibTeX fields such as author and booktitle from 500 research paper references. We use a standard set of word, regular expression, and lexicon features, as well as context features in a window of \u00b13. All results are averaged over ten random 80:20 splits of the data. Table 2 presents mean (across all iterations) and final token accuracy results. On the apartments task, GE methods greatly outperform MML and ER methods (results using self-training instead of ER are similar). Each uncertainty-based GE method also outperforms all passive GE methods. On the cora task, only GE with weighted uncertainty significantly outperforms ER and passive GE methods in terms of mean accuracy, but all uncertainty-based GE methods provide higher final accuracy. This suggests that on the cora task, active GE methods are performing better in later iterations. Figure 1 , which compares the learning curves of the best performing methods of each type, shows this phenomenon. Further analysis reveals that the uncertainty-based methods are choosing frequent features that are more likely to be skipped than those selected randomly in early iterations. We next compare with the results of related methods published elsewhere. We cannot make claims about statistical significance, but the results illustrate the competitiveness of our method. The 74.6% final accuracy on apartments is higher than any result obtained by Haghighi and Klein (2006) (the highest is 74.1%), higher than the supervised HMM results reported by Grenager et al. (2005) (74.4%), and matches the results of Mann and McCallum (2008) , obtained using GE with more accurate sampled label distributions and 10 labeled examples. Chang et al. (2007) only obtain better results than 88.2% on cora when using 300 labeled examples (two hours of estimated annotation time), 5000 additional unlabeled examples, and extra test-time inference constraints. Note that obtaining these results required only 10 simulated minutes of annotation time, and that GE methods are provided no information about the label transition matrix.",
"cite_spans": [
{
"start": 219,
"end": 244,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF6"
},
{
"start": 1752,
"end": 1777,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF6"
},
{
"start": 1853,
"end": 1875,
"text": "Grenager et al. (2005)",
"ref_id": "BIBREF5"
},
{
"start": 1912,
"end": 1937,
"text": "Mann and Mc-Callum (2008)",
"ref_id": null
},
{
"start": 2018,
"end": 2037,
"text": "Chang et al. (2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 670,
"end": 677,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1196,
"end": 1204,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Simulated User Experiments",
"sec_num": "5"
},
{
"text": "Another advantage of feature queries is that feature names are concise enough to be browsed, rather than considered individually. This allows the design of improved interfaces that can further increase the speed of feature active learning. We built a prototype interface that allows the user to quickly browse many candidate features. The features are split into groups of five features each. Each group contains features that are related, as measured by distributional similarity. The features within each group are sorted according to the active learning metric. This interface, displayed in Figure 3, may be useful because features in the same group are likely to have the same label.",
"cite_spans": [],
"ref_spans": [
{
"start": 594,
"end": 602,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "User Experiments",
"sec_num": "6"
},
{
"text": "We conduct three types of experiments. First, a user labels instances selected by information density, and models are trained using ER. The instance labeling interface allows the user to label tokens quickly by extending the current selection one token at a time and only requiring a single keystroke to label an entire segment. Second, the user labels features presented one at a time by weighted uncertainty, and models are trained using GE. To aid the user in understanding the function of the feature quickly, we provide several examples of the feature occurring in context and the model's current predicted label distribution for the feature. Finally, the user labels features organized using the grid interface described in the previous paragraph. Weighted uncertainty is used to sort feature queries within each group, and GE is used to train models. Each iteration of labeling lasts two minutes, and there are five iterations. Retraining with ER between iterations takes an average of 5 minutes on cora and 3 minutes on apartments. With GE, the retraining times are on average 6 minutes on cora and 4 minutes on apartments. Consequently, even when measured by total time rather than annotation time alone, feature active learning is beneficial. While waiting for models to retrain, users can perform other tasks. Figure 2 displays the results. User 1 labeled apartments data, while Users 2 and 3 labeled cora data. User 1 was able to obtain much better results with feature labeling than with instance labeling, but performed slightly worse with the grid interface than with the serial interface. User 1 commented that they found the label definitions for apartments to be imprecise, so the other experiments were conducted on the cora data. User 2 obtained better results with feature labeling than instance labeling, and obtained higher mean accuracy with the grid interface. User 3 was much better at labeling features than instances, and performed especially well using the grid interface.",
"cite_spans": [],
"ref_spans": [
{
"start": 1317,
"end": 1325,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "User Experiments",
"sec_num": "6"
},
{
"text": "We proposed an active learning approach in which features, rather than instances, are labeled. We presented an algorithm for active learning with features and several feature query selection methods that approximate the expected reduction in model uncertainty of a feature query. In simulated experiments, active learning with features outperformed passive learning with features, and uncertainty-based feature query selection outperformed other baseline methods. In both simulated and real user experiments, active learning with features outperformed passive and active learning with instances. Finally, we proposed a new labeling interface that leverages the conciseness of feature queries. User experiments suggested that this grid interface can improve labeling efficiency. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Averaged over 10 randomly selected sets of 22 instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "10 is a default value that works well in many settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use this notation for an indicator function that returns 1 if the condition in braces is satisfied, and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "sim(q_k, q_j) returns the cosine similarity between context vectors of words occurring in a window of \u00b13.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The query selection method of Raghavan and Allan (2007) requires a stack that is modified between queries within each iteration. Here, query scores are only updated after each iteration of labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Only the best MML results are shown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Kedar Bellare for helpful discussions and Gau- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Alternating projections for learning with expectation constraints",
"authors": [
{
"first": "Kedar",
"middle": [],
"last": "Bellare",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Druck",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kedar Bellare, Gregory Druck, and Andrew McCal- lum. 2009. Alternating projections for learning with expectation constraints. In UAI.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Guiding semi-supervision with constraint-driven learning",
"authors": [
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "280--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint-driven learning. In ACL, pages 280-287.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning from labeled features using generalized expectation criteria",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Druck",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2008,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Druck, Gideon Mann, and Andrew McCal- lum. 2008. Learning from labeled features using generalized expectation criteria. In SIGIR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Expectation maximization and posterior constraints",
"authors": [
{
"first": "Joao",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Neural Information Processing Systems 20",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joao Gra\u00e7a, Kuzman Ganchev, and Ben Taskar. 2008. Expectation maximization and posterior constraints. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised learning of field segmentation models for information extraction",
"authors": [
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trond Grenager, Dan Klein, and Christopher D. Man- ning. 2005. Unsupervised learning of field segmen- tation models for information extraction. In ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Prototype-driven learning for sequence models",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "HTL-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In HTL-NAACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semisupervised conditional random fields for improved sequence segmentation and labeling",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Jiao",
"suffix": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chi-Hoon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Greiner",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Schuurmans",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "209--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng Jiao, Shaojun Wang, Chi-Hoon Lee, Russell Greiner, and Dale Schuurmans. 2006. Semi- supervised conditional random fields for improved sequence segmentation and labeling. In ACL, pages 209-216.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In ICML.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A sequential algorithm for training text classifiers",
"authors": [
{
"first": "David",
"middle": [
"D."
],
"last": "Lewis",
"suffix": ""
},
{
"first": "William",
"middle": [
"A."
],
"last": "Gale",
"suffix": ""
}
],
"year": 1994,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "3--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David D. Lewis and William A. Gale. 1994. A sequen- tial algorithm for training text classifiers. In SIGIR, pages 3-12, New York, NY, USA. Springer-Verlag New York, Inc.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning from measurements in exponential families",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning from measurements in exponential fami- lies. In ICML.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generalized expectation criteria for semi-supervised learning of conditional random fields",
"authors": [
{
"first": "Gideon",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon Mann and Andrew McCallum. 2008. General- ized expectation criteria for semi-supervised learn- ing of conditional random fields. In ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hidden conditional random fields",
"authors": [
{
"first": "A",
"middle": [],
"last": "Quattoni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "L.-P",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "29",
"issue": "",
"pages": "1848--1852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Quattoni, S. Wang, L.-P Morency, M. Collins, and T. Darrell. 2007. Hidden conditional random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29:1848-1852, October.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An interactive algorithm for asking and incorporating feature feedback into support vector machines",
"authors": [
{
"first": "Hema",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
}
],
"year": 2007,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hema Raghavan and James Allan. 2007. An interac- tive algorithm for asking and incorporating feature feedback into support vector machines. In SIGIR, pages 79-86.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Optimization with em and expectation-conjugate-gradient",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Roweis",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2003,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "672--679",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Salakhutdinov, Sam Roweis, and Zoubin Ghahramani. 2003. Optimization with em and expectation-conjugate-gradient. In ICML, pages 672-679.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An analysis of active learning strategies for sequence labeling tasks",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In EMNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Active learning literature survey",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2009. Active learning literature survey. Technical Report 1648, University of Wisconsin - Madison.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Uncertainty sampling and transductive experimental design for active dual supervision",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Sindhwani",
"suffix": ""
},
{
"first": "Prem",
"middle": [],
"last": "Melville",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"D"
],
"last": "Lawrence",
"suffix": ""
}
],
"year": 2009,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikas Sindhwani, Prem Melville, and Richard D. Lawrence. 2009. Uncertainty sampling and trans- ductive experimental design for active dual supervi- sion. In ICML.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multi-level active prediction of useful image annotations for recognition",
"authors": [
{
"first": "Sudheendra",
"middle": [],
"last": "Vijayanarasimhan",
"suffix": ""
},
{
"first": "Kristen",
"middle": [],
"last": "Grauman",
"suffix": ""
}
],
"year": 2008,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sudheendra Vijayanarasimhan and Kristen Grauman. 2008. Multi-level active prediction of useful image annotations for recognition. In NIPS.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Figure 1: Token accuracy vs. time for the best performing ER, MML, passive GE, and active GE methods. Figure 2: User experiments with instance labeling and feature labeling with the serial and grid interfaces.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Grid feature labeling interface. Boxes on the left contain groups of features that appear in similar contexts. Features in the same group often receive the same label. On the right, the model's current expectation and occurrences of the selected feature in context are displayed.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"html": null,
"num": null,
"text": "Two iterations of feature active learning. Each table shows the features labeled, and the resulting change in accuracy. Note that the word included was labeled as both utilities and features, and that * denotes a regular expression feature.",
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"html": null,
"num": null,
"text": "As measured by test accuracy, giving ER an advantage.",
"type_str": "table",
"content": "<table><tr><td>method</td><td colspan=\"2\">apartments</td><td colspan=\"2\">cora</td></tr><tr><td/><td>mean</td><td>final</td><td>mean</td><td>final</td></tr><tr><td>ER rand</td><td>48.1</td><td>53.6</td><td>75.9</td><td>81.1</td></tr><tr><td>ER US</td><td>51.7</td><td>57.9</td><td>76.0</td><td>83.2</td></tr><tr><td>ER ID</td><td>51.4</td><td>56.9</td><td>75.9</td><td>83.1</td></tr><tr><td colspan=\"2\">MML rand 47.7</td><td>51.2</td><td>58.6</td><td>64.6</td></tr><tr><td>MML WU</td><td>57.6</td><td>60.8</td><td>61.0</td><td>66.2</td></tr><tr><td>GE rand</td><td>59.0</td><td>64.8 *</td><td>77.6</td><td>83.7</td></tr><tr><td>GE freq</td><td colspan=\"2\">66.5 * 71.6 *</td><td>68.6</td><td>79.8</td></tr><tr><td>GE LDA</td><td colspan=\"2\">65.7 * 71.4 *</td><td>74.9</td><td>85.0</td></tr><tr><td>GE cov</td><td colspan=\"2\">68.2 * \u2020 72.6 *</td><td>73.5</td><td>83.3</td></tr><tr><td>GE sim</td><td>57.8</td><td>65.9 *</td><td>67.1</td><td>79.2</td></tr><tr><td>GE EU</td><td colspan=\"2\">66.5 * 71.6 *</td><td>68.6</td><td>79.8</td></tr><tr><td>GE TU</td><td colspan=\"2\">70.1 * \u2020 73.6 * \u2020</td><td colspan=\"2\">76.9 88.2 * \u2020</td></tr><tr><td>GE WU</td><td colspan=\"4\">71.6 * \u2020 74.6 * \u2020 80.3 * \u2020 88.1 * \u2020</td></tr><tr><td>GE DU</td><td colspan=\"4\">70.5 * \u2020 74.4 * \u2020 78.4 * 87.5 * \u2020</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Mean and final token accuracy results. A * or \u2020 denotes that a GE method significantly outperforms all non-GE or passive GE methods, respectively. Bold entries significantly outperform all others. Methods in italics are passive.",
"type_str": "table",
"content": "<table/>"
}
}
}
}