{
"paper_id": "I08-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:42:43.218634Z"
},
"title": "Learning a Stopping Criterion for Active Learning for Word Sense Disambiguation and Text Classification",
"authors": [
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab Northeastern University Shenyang",
"institution": "",
"location": {
"postCode": "110004",
"settlement": "Liaoning",
"region": "P.R.China"
}
},
"email": "zhujingbo@mail.neu.edu.cn"
},
{
"first": "Huizhen",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab Northeastern University Shenyang",
"institution": "",
"location": {
"postCode": "110004",
"settlement": "Liaoning",
"region": "P.R.China"
}
},
"email": "wanghuizhen@mail.neu.edu.cn"
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {},
"email": "hovy@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we address the problem of knowing when to stop the process of active learning. We propose a new statistical learning approach, called minimum expected error strategy, to defining a stopping criterion through estimation of the classifier's expected error on future unlabeled examples in the active learning process. In experiments on active learning for word sense disambiguation and text classification tasks, experimental results show that the new proposed stopping criterion can reduce approximately 50% human labeling costs in word sense disambiguation with degradation of 0.5% average accuracy, and approximately 90% costs in text classification with degradation of 2% average accuracy.",
"pdf_parse": {
"paper_id": "I08-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we address the problem of knowing when to stop the process of active learning. We propose a new statistical learning approach, called minimum expected error strategy, to defining a stopping criterion through estimation of the classifier's expected error on future unlabeled examples in the active learning process. In experiments on active learning for word sense disambiguation and text classification tasks, experimental results show that the new proposed stopping criterion can reduce approximately 50% human labeling costs in word sense disambiguation with degradation of 0.5% average accuracy, and approximately 90% costs in text classification with degradation of 2% average accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Supervised learning models set their parameters using given labeled training data, and generally outperform unsupervised learning methods when trained on equal amount of training data. However, creating a large labeled training corpus is very expensive and time-consuming in some real-world cases such as word sense disambiguation (WSD).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Active learning is a promising way to minimize the amount of human labeling effort by building an system that automatically selects the most informative unlabeled example for human annotation at each annotation cycle. In recent years active learning has attracted a lot of research interest, and has been studied in many natural language processing (NLP) tasks, such as text classification (TC) (Lewis and Gale, 1994; McCallum and Nigam, 1998) , chunking (Ngai and Yarowsky, 2000) , named entity recognition (NER) (Shen et al., 2004; Tomanek et al., 2007) , part-of-speech tagging (Engelson and Dagan, 1999) , information extraction (Thompson et al., 1999) , statistical parsing (Steedman et al., 2003) , and word sense disambiguation (Zhu and Hovy, 2007) .",
"cite_spans": [
{
"start": 395,
"end": 417,
"text": "(Lewis and Gale, 1994;",
"ref_id": "BIBREF7"
},
{
"start": 418,
"end": 443,
"text": "McCallum and Nigam, 1998)",
"ref_id": null
},
{
"start": 455,
"end": 480,
"text": "(Ngai and Yarowsky, 2000)",
"ref_id": "BIBREF11"
},
{
"start": 514,
"end": 533,
"text": "(Shen et al., 2004;",
"ref_id": "BIBREF13"
},
{
"start": 534,
"end": 555,
"text": "Tomanek et al., 2007)",
"ref_id": "BIBREF16"
},
{
"start": 581,
"end": 607,
"text": "(Engelson and Dagan, 1999)",
"ref_id": null
},
{
"start": 633,
"end": 656,
"text": "(Thompson et al., 1999)",
"ref_id": "BIBREF15"
},
{
"start": 679,
"end": 702,
"text": "(Steedman et al., 2003)",
"ref_id": "BIBREF14"
},
{
"start": 735,
"end": 755,
"text": "(Zhu and Hovy, 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous studies reported that active learning can help in reducing human labeling effort. With selective sampling techniques such as uncertainty sampling (Lewis and Gale, 1994) and committeebased sampling (McCallum and Nigam, 1998) , the size of the training data can be significantly reduced for text classification (Lewis and Gale, 1994; McCallum and Nigam, 1998) , word sense disambiguation (Chen, et al. 2006; Zhu and Hovy, 2007) , and named entity recognition (Shen et al., 2004; Tomanek et al., 2007) tasks.",
"cite_spans": [
{
"start": 155,
"end": 177,
"text": "(Lewis and Gale, 1994)",
"ref_id": "BIBREF7"
},
{
"start": 206,
"end": 232,
"text": "(McCallum and Nigam, 1998)",
"ref_id": null
},
{
"start": 318,
"end": 340,
"text": "(Lewis and Gale, 1994;",
"ref_id": "BIBREF7"
},
{
"start": 341,
"end": 366,
"text": "McCallum and Nigam, 1998)",
"ref_id": null
},
{
"start": 395,
"end": 414,
"text": "(Chen, et al. 2006;",
"ref_id": "BIBREF2"
},
{
"start": 415,
"end": 434,
"text": "Zhu and Hovy, 2007)",
"ref_id": "BIBREF18"
},
{
"start": 466,
"end": 485,
"text": "(Shen et al., 2004;",
"ref_id": "BIBREF13"
},
{
"start": 486,
"end": 507,
"text": "Tomanek et al., 2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interestingly, deciding when to stop active learning is an issue seldom mentioned issue in these studies. However, it is an important practical topic, since it obviously makes no sense to continue the active learning procedure until the whole corpus has been labeled. How to define an adequate stopping criterion remains an unsolved problem in active learning. In principle, this is a problem of estimation of classifier effectiveness (Lewis and Gale, 1994) . However, in real-world applications, it is difficult to know when the classifier reaches its maximum effectiveness before all unlabeled examples have been annotated. And when the unlabeled data set becomes very large, full annotation is almost impossible for human annotator.",
"cite_spans": [
{
"start": 435,
"end": 457,
"text": "(Lewis and Gale, 1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we address the issue of a stopping criterion for active learning, and propose a new statistical learning approach, called minimum ex-pected error strategy, that defines a stopping criterion through estimation of the classifier's expected error on future unlabeled examples. The intuition is that the classifier reaches maximum effectiveness when it results in the lowest expected error on remaining unlabeled examples. This proposed method is easy to implement, involves small additional computation costs, and can be applied to several different learners, such as Naive Bayes (NB), Maximum Entropy (ME), and Support Vector Machines (SVMs) models. Comparing with the confidence-based stopping criteria proposed by Zhu and Hovy (2007) , experimental results show that the new proposed stopping criterion achieves better performance in active learning for both the WSD and TC tasks.",
"cite_spans": [
{
"start": 729,
"end": 748,
"text": "Zhu and Hovy (2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Active learning is a two-step semi-supervised learning process in which a small number of labeled samples and a large number of unlabeled examples are first collected in the initialization stage, and a close-loop stage of query and retraining is adopted. The purpose of active learning is to minimize the amount of human labeling effort by having the system in each cycle automatically select for human annotation the most informative unannotated case. Procedure: Active Learning Process Input: initial small training set L, and pool of unlabeled data set U Use L to train the initial classifier C (i.e. a classifier for uncertainty sampling or a set of classifiers for committee-based sampling) In this work, we are interested in selective sampling for pool-based active learning, and focus on uncertainty sampling (Lewis and Gale, 1994) . The key point is how to measure the uncertainty of an unlabeled example, in order to select a new example with maximum uncertainty to augment the training data. The maximum uncertainty implies that the current classifier has the least confidence in its classification of this unlabeled example x. The well-known entropy is a good uncertainty measurement widely used in active learning:",
"cite_spans": [
{
"start": 816,
"end": 838,
"text": "(Lewis and Gale, 1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Process",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( | )log ( | ) y Y UM x P y x P y x \u2208 = \u2212 \u2211",
"eq_num": "(1)"
}
],
"section": "Active Learning Process",
"sec_num": "2.1"
},
{
"text": "where P(y|x) is the a posteriori probability. We denote the output class y\u2208Y={y 1 , y 2 , \u2026, y k }. UM is the uncertainty measurement function based on the entropy estimation of the classifier's posterior distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning Process",
"sec_num": "2.1"
},
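To make Eq. (1) and the selection step concrete, here is a minimal Python sketch of entropy-based uncertainty sampling in the loop of Fig. 1. The classifier interface (fit/predict_proba), the oracle function, and all names are illustrative assumptions, not part of the paper:

```python
import math

def UM(posterior):
    # Eq. (1): entropy of the classifier's posterior P(y|x) for one example x.
    # A peaked posterior is low-uncertainty; a uniform one is maximal:
    # UM([0.9, 0.05, 0.05]) ~= 0.39; UM([1/3, 1/3, 1/3]) ~= 1.10 = log 3.
    return -sum(p * math.log(p) for p in posterior if p > 0.0)

def active_learn(classifier, labeled, unlabeled, oracle, m, stopping_criterion):
    # The loop of Fig. 1: query the m most uncertain examples, annotate,
    # retrain, and repeat until the stopping criterion SC is met.
    classifier.fit(labeled)
    while unlabeled and not stopping_criterion(classifier, unlabeled):
        ranked = sorted(unlabeled,
                        key=lambda x: UM(classifier.predict_proba(x)),
                        reverse=True)
        batch = ranked[:m]
        labeled += [(x, oracle(x)) for x in batch]    # human annotation
        unlabeled = [x for x in unlabeled if x not in batch]
        classifier.fit(labeled)                       # retrain on augmented L
    return classifier
```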
{
"text": "As shown in Fig. 1 , the active learning process repeatedly provides the most informative unlabeled examples to an oracle for annotation, and update the training set, until the predefined stopping criterion SC is met. In practice, it is not clear how much annotation is sufficient for inducing a classifier with maximum effectiveness (Lewis and Gale, 1994) . This procedure can be implemented by defining an appropriate stopping criterion for active learning. In active learning process, a general stopping criterion SC can be defined as:",
"cite_spans": [
{
"start": 334,
"end": 356,
"text": "(Lewis and Gale, 1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 12,
"end": 18,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "General Stopping Criteria",
"sec_num": "2.2"
},
{
"text": "1 ( 0 , AL effectiveness C SC otherwise ) \u03b8 \u2265 \u23a7 = \u23a8 \u23a9 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Stopping Criteria",
"sec_num": "2.2"
},
{
"text": "where \u03b8 is a user predefined constant and the function effectiveness(C) evaluates the effectiveness of the current classifier. The learning process ends only if the stopping criterion function SC AL is equal to 1. The value of constant \u03b8 represents a tradeoff between the cost of annotation and the effectiveness of the resulting classifier. A larger \u03b8 would cause more unlabeled examples to be selected for human annotation, and the resulting classifier would be more robust. A smaller \u03b8 means the resulting classifier would be less robust, and less unlabeled examples would be selected to annotate. In previous work (Shen et al., 2004; Chen et al., 2006; Li and Sethi, 2006; Tomanek et al., 2007) , there are several common ways to define the func-tion effectiveness(C). First, previous work always used a simple stopping condition, namely, when the training set reaches desirable size. However, it is almost impossible to predefine an appropriate size of desirable training data guaranteed to induce the most effective classifier. Secondly, the learning loop can end if no uncertain unlabeled examples can be found in the pool. That is, all informative examples have been selected for annotation. However, this situation seldom occurs in realworld applications. Thirdly, the active learning process can stop if the targeted performance level is achieved. However, it is difficult to predefine an appropriate and achievable performance, since it should depend on the problem at hand and the users' requirements.",
"cite_spans": [
{
"start": 618,
"end": 637,
"text": "(Shen et al., 2004;",
"ref_id": "BIBREF13"
},
{
"start": 638,
"end": 656,
"text": "Chen et al., 2006;",
"ref_id": "BIBREF2"
},
{
"start": 657,
"end": 676,
"text": "Li and Sethi, 2006;",
"ref_id": "BIBREF8"
},
{
"start": 677,
"end": 698,
"text": "Tomanek et al., 2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Stopping Criteria",
"sec_num": "2.2"
},
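In code, the general criterion of Eq. (2) is just a thresholded test on whatever effectiveness estimate one chooses. A minimal sketch, assuming a user-supplied effectiveness function (names are illustrative):

```python
def make_sc_al(effectiveness, theta):
    # Eq. (2): SC_AL = 1 iff effectiveness(C) >= theta, else 0.
    def sc_al(classifier, unlabeled):
        return effectiveness(classifier) >= theta
    return sc_al
```

The concrete criteria below differ only in how they fill in this test.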
{
"text": "An appealing solution has the active learning process end when repeated cycles show no significant performance improvement on the test set. However, there are two open problems. The first question is how to measure the performance of a classifier in active learning. The second one is how to know when the resulting classifier reaches the highest or adequate performance. It seems feasible that a separate validation set can solve both problems. That is, the active learning process can end if there is no significant performance improvement on the validation set. But how many samples are required for the pregiven separate validation set is an open question. Too few samples may not be adequate for a reasonable estimation and may result in an incorrect result. Too many samples would cause additional high cost because the separate validation set is generally constructed manually in advance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem of Performance Estimation",
"sec_num": "2.3"
},
{
"text": "To avoid the problem of performance estimation mentioned above, Zhu and Hovy (2007) proposed a confidence-based framework to predict the upper bound and the lower bound for a stopping criterion in active learning. The motivation is to assume that the current training data is sufficient to train the classifier with maximum effectiveness if the current classifier already has acceptably strong confi-dence on its classification results for all remained unlabeled data.",
"cite_spans": [
{
"start": 64,
"end": 83,
"text": "Zhu and Hovy (2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence-based Strategy",
"sec_num": "3.1"
},
{
"text": "The first method to estimate the confidence of the classifier is based on uncertainty measurement, considering whether the entropy of each selected unlabeled example is less than a small predefined threshold. Here we call it Entropy-MCS. The stopping criterion SC Entropy-MCS can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence-based Strategy",
"sec_num": "3.1"
},
{
"text": "1 , ( ) 0 , E Entropy MCS x U UM x SC otherwise \u03b8 \u2212 \u2200 \u2208 \u2264 \u23a7 = \u23a8 \u23a9 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence-based Strategy",
"sec_num": "3.1"
},
{
"text": "where \u03b8 E is a user predefined entropy threshold and the function UM(x) evaluates the uncertainty of each unlabeled example x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence-based Strategy",
"sec_num": "3.1"
},
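A sketch of the Entropy-MCS test of Eq. (3), reusing the hypothetical UM function and classifier interface from the earlier sketch:

```python
def sc_entropy_mcs(classifier, unlabeled, theta_e=0.01):
    # Eq. (3): stop once every remaining unlabeled example has
    # posterior entropy at or below the threshold theta_E.
    return all(UM(classifier.predict_proba(x)) <= theta_e for x in unlabeled)
```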
{
"text": "The second method to estimate the confidence of the classifier is based on feedback from the oracle when the active learner asks for true labels for selected unlabeled examples, by considering whether the current trained classifier could correctly predict the labels or the accuracy performance of predictions on selected unlabeled examples is already larger than a predefined accuracy threshold. Here we call it OracleAcc-MCS. The stopping criterion SC OracleAcc-MCS can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence-based Strategy",
"sec_num": "3.1"
},
{
"text": "1 ( 0 , ) A OracleAcc MCS OracleAcc C SC otherwise \u03b8 \u2212 \u2265 \u23a7 = \u23a8 \u23a9 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence-based Strategy",
"sec_num": "3.1"
},
{
"text": "where \u03b8 A is a user predefined accuracy threshold and function OracleAcc(C) evaluates accuracy performance of the classifier on these selected unlabeled examples through feedback of the Oracle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence-based Strategy",
"sec_num": "3.1"
},
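The OracleAcc-MCS test of Eq. (4) compares the classifier's own predictions on the current batch against the labels the oracle just supplied. A sketch, where batch and oracle_labels are assumed inputs from the current iteration:

```python
def sc_oracleacc_mcs(classifier, batch, oracle_labels, theta_a=0.9):
    # Eq. (4): stop once the classifier already predicts the oracle's
    # labels on the selected examples with accuracy >= theta_A.
    correct = sum(1 for x, y in zip(batch, oracle_labels)
                  if classifier.predict(x) == y)
    return correct / len(batch) >= theta_a
```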
{
"text": "In fact, these above two confidence-based methods do not directly estimate classifier performance that closely reflects the classifier effectiveness, because they only consider entropy of each unlabeled example and accuracy on selected informative examples at each iteration step. In this section we therefore propose a new statistical learning approach to defining a stopping criterion through estimation of the classifier's expected error on all future unlabeled examples, which we call minimum expected error strategy (MES). The motivation behind MES is that the classifier C (a classifier for uncertainty sampling or set of classifiers for committee-based sampling) with maximum effectiveness is the one that results in the lowest expected error on whole test set in the learning process. The stopping criterion SC MES is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
{
"text": "1 ( ) 0 , err MES Error C SC otherwise \u03b8 \u2264 \u23a7 = \u23a8 \u23a9 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
{
"text": "where \u03b8 err is a user predefined expected error threshold and the function Error(C) evaluates the expected error of the classifier C that closely reflects the classifier effectiveness. So the key point of defining MES-based stopping criterion SC MES is how to calculate the function Error(C) that denotes the expected error of the classifier C. Suppose given a training set L and an input sample x, we can write the expected error of the classifier C as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ( ) | ) ( ) Error C R C x x P x dx = \u222b",
"eq_num": "(6)"
}
],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
{
"text": "where P(x) represents the known marginal distribution of x. C(x) represents the classifier's decision that is one of k classes: y\u2208Y={y 1 , y 2 , \u2026, y k }. R(y i |x) denotes a conditional loss for classifying the input sample x into a class y i that can be defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 ( | ) [ , ] ( | ) k i j j R y x i j P y x \u03bb = = \u2211",
"eq_num": "(7)"
}
],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
{
"text": "where P(y j |x) is the a posteriori probability produced by the classifier C. \u03bb [i,j] represents a zeroone loss function for every class pair {i,j} that assigns no loss to a correct classification, and assigns a unit loss to any error. In this paper, we focus on pool-based active learning in which a large unlabeled data pool U is available, as described Fig. 1 . In active learning process, our interest is to estimate the classifier's expected error on future unlabeled examples in the pool U. That is, we can stop the active learning process when the active learner results in the lowest expected error over the unlabeled examples in U. The pool U can provide an estimate of P(x). So for minimum error rate classification (Duda and Hart. 1973) on unlabeled examples, the expected error of the classifier C can be rewritten as",
"cite_spans": [
{
"start": 80,
"end": 85,
"text": "[i,j]",
"ref_id": null
},
{
"start": 726,
"end": 747,
"text": "(Duda and Hart. 1973)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 356,
"end": 362,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
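Under the zero-one loss, Eq. (7) reduces to one minus the posterior of the chosen class. A direct transcription, for illustration only:

```python
def conditional_risk(i, posterior):
    # Eq. (7) with zero-one loss lambda[i, j] = 0 if i == j else 1:
    # R(y_i|x) = sum_j lambda[i, j] * P(y_j|x) = 1 - P(y_i|x).
    return sum(p for j, p in enumerate(posterior) if j != i)
```

Minimizing this risk over i picks the class with the largest posterior, so the residual per-example error is 1 - max_y P(y|x), which is exactly the summand of Eq. (8) below.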
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 ( ) (1 max ( | )) y Y x U Error C P y x U \u2208 \u2208 = \u2212 \u2211",
"eq_num": "(8)"
}
],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
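Putting Eqs. (5) and (8) together, a sketch of the MES stopping test over the pool U, with the same assumed classifier interface as above:

```python
def expected_error(classifier, unlabeled):
    # Eq. (8): average of 1 - max_y P(y|x) over the pool U.
    total = sum(1.0 - max(classifier.predict_proba(x)) for x in unlabeled)
    return total / len(unlabeled)

def sc_mes(classifier, unlabeled, theta_err=0.01):
    # Eq. (5): stop once the estimated expected error reaches the threshold.
    return expected_error(classifier, unlabeled) <= theta_err
```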
{
"text": "Assuming N unlabeled examples in the pool U, the total time is O(N) for automatically determining whether the proposed stopping criterion SC MES is satisfied in the active learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
{
"text": "If the pool U is very large (e.g. more than 100000 examples), it would still cause high com-putation cost at each iteration of active learning. A good approximation is to estimate the expected error of the classifier using a subset of the pool, not using all unlabeled examples in U. In practice, a good estimation of expected error can be formed with few thousand examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Expected Error Strategy",
"sec_num": "3.2"
},
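The subsampling approximation described above might look as follows; the sample size of 2000 is an illustrative choice in the "few thousand" range, and expected_error is the hypothetical function from the previous sketch:

```python
import random

def expected_error_subsample(classifier, unlabeled, sample_size=2000, seed=0):
    # Approximate Eq. (8) on a random subset of the pool so the per-iteration
    # cost stays roughly constant instead of O(N) for very large pools.
    rng = random.Random(seed)
    sample = rng.sample(unlabeled, min(sample_size, len(unlabeled)))
    return expected_error(classifier, sample)
```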
{
"text": "In this section, we evaluate the effectiveness of three stopping criteria for active learning for word sense disambiguation and text classification as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "\u2022 Entropy-MCS \u2500 stopping active learning process when the stopping criterion function SC Entropy-MCS defined in (3) is equal to 1, where \u03b8 E =0.01, 0.001, 0.0001.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "\u2022 OracleAcc-MCS \u2500 stopping active learning process when the stopping criterion function SC OracleAcc-MCS defined in (4) is equal to 1, where \u03b8 A =0.9, 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "\u2022 MES \u2500 stopping active learning process when the stopping criterion function SC MES defined in (5) is equal to 1, where \u03b8 err =0.01, 0.001, 0.0001. The purpose of defining stopping criterion of active learning is to study how much annotation is sufficient for a specific task. To comparatively analyze the effectiveness of each stopping criterion, a baseline stopping criterion is predefined as when all unlabeled examples in the pool U are learned. Comparing with the baseline stopping criterion, a better stopping criterion not only achieves almost the same performance, but also has needed to learn fewer unlabeled examples when the active learning process is ended. In other words, for a stopping criterion of active learning, the fewer unlabeled examples that have been leaned when it is met, the bigger reduction in human labeling cost is made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "In the following active learning experiments, a 10 by 10-fold cross-validation was performed. All results reported are the average of 10 trials in each active learning process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "The first comparison experiment is active learning for word sense disambiguation. We utilize a maximum entropy (ME) model (Berger et al., 1996) to design the basic classifier used in active learning for WSD. The advantage of the ME model is the ability to freely incorporate features from diverse sources into a single, well-grounded statistical model. A publicly available ME toolkit was used in our experiments. In order to extract the linguistic features necessary for the ME model in WSD tasks, all sentences containing the target word are automatically part-ofspeech (POS) tagged using the Brill POS tagger (Brill, 1992) . Three knowledge sources are used to capture contextual information: unordered single words in topical context, POS of neighboring words with position information, and local collocations. These are same as the knowledge sources used in (Lee and Ng, 2002) for supervised automated WSD tasks.",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 612,
"end": 625,
"text": "(Brill, 1992)",
"ref_id": "BIBREF1"
},
{
"start": 863,
"end": 881,
"text": "(Lee and Ng, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4.1"
},
{
"text": "The data used for comparison experiments was developed as part of the OntoNotes project (Hovy et al., 2006) , which uses the WSJ part of the Penn Treebank (Marcus et al., 1993) . The senses of noun words occurring in OntoNotes are linked to the Omega ontology (philpot et al., 2005) . In OntoNotes, at least two human annotators manually annotate the coarse-grained senses of selected nouns and verbs in their natural sentence context. In this experiment, we used several tens of thousands of annotated OntoNotes examples, covering in total 421 nouns with an inter-annotator agreement rate of at least 90%. We find that 302 out of 421 nouns occurring in OntoNotes are ambiguous, and thus are used in the following WSD experiments. For these 302 ambiguous nouns, there are 3.2 senses per noun, and 172 instances per noun.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 155,
"end": 176,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF9"
},
{
"start": 260,
"end": 282,
"text": "(philpot et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4.1"
},
{
"text": "The active learning algorithms start with a randomly chosen initial training set of 10 labeled samples for each noun, and make 10 queries after each learning iteration. Table 1 shows the effectiveness of each stopping criterion tested on active learning for WSD on these ambiguous nouns' WSD tasks. We analyze average accuracy performance of the classifier and average percentage of unlabeled examples learned when each stopping criterion is satisfied in active learning for WSD tasks. All accuracies and percentages reported in Table 1 Table 1 shows that these stopping criteria achieve the same accuracy of 86.8% which is within 0.5% of the accuracy of the baseline method (all unlabeled examples are labeled). It is obvious that these stopping criteria can help reduce the human labeling costs, comparing with the baseline method. The best criterion is MES method (\u03b8 err =0.01), following by OracleAcc-MCS method (\u03b8 A =0.9). MES method (\u03b8 err =0.01) and OracleAcc-MCS method (\u03b8 A =0.9) can make 47.3% and 44.5% reductions in labeling costs, respectively. Entropy-MCS method is apparently worse than MES and OracleAcc-MCS methods. The best of the Entropy-MCS method is the one with \u03b8 E =0.01 which makes approximately 1/3 reduction in labeling costs. We also can see from Table 1 that for Entropy-MCS and MES methods, reduction rate becomes smaller as the \u03b8 becomes smaller.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 1",
"ref_id": null
},
{
"start": 529,
"end": 536,
"text": "Table 1",
"ref_id": null
},
{
"start": 537,
"end": 544,
"text": "Table 1",
"ref_id": null
},
{
"start": 1274,
"end": 1281,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4.1"
},
{
"text": "The second data set is for active learning for text classification using the WebKB corpus 1 (McCallum et al., 1998) . The WebKB dataset was formed by web pages gathered from various university computer science departments. In the following active learning experiment, we use four most populous categories: student, faculty, course and project, altogether containing 4,199 web pages. Following previous studies (McCallum et al., 1998) , we only remove those words that occur merely once without using stemming or stop-list. The resulting vocabulary has 23,803 words. In the design of the text classifier, the maximum entropy model is also utilized, and no feature selection technique is used.",
"cite_spans": [
{
"start": 92,
"end": 115,
"text": "(McCallum et al., 1998)",
"ref_id": "BIBREF10"
},
{
"start": 410,
"end": 433,
"text": "(McCallum et al., 1998)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Classification",
"sec_num": "4.2"
},
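The vocabulary construction described above (keep words occurring more than once; no stemming, no stop-list) can be sketched as follows; the function name and input shape are illustrative:

```python
from collections import Counter

def build_vocabulary(documents):
    # documents: a list of token lists, one per WebKB page.
    counts = Counter(w for doc in documents for w in doc)
    # Remove only the words that occur merely once.
    return {w for w, c in counts.items() if c > 1}
```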
{
"text": "The algorithm is initially given 20 labeled examples, 5 from each class. Table 2 shows the effectiveness of each stopping criterion of active learning for text classification on WebKB corpus. All results reported are the average of 10 trials.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text Classification",
"sec_num": "4.2"
},
{
"text": "Average From results shown in Table 2 , we can see that MES method (\u03b8 err =0.01) already achieves 91.5% accuracy in 10.9% unlabeled examples learned. The accuracy of all unlabeled examples learned is 93.5%. This situation means the approximately 90% remaining unlabeled examples only make only 2% performance improvement. Like the results of WSD shown in Table 1 , for Entropy-MCS and MES methods used in active learning for text classification tasks, the corresponding reduction rate becomes smaller as the value of \u03b8 becomes smaller. MES method (\u03b8 err =0.01) can make approximately 90% reduction in human labeling costs and results in 2% accuracy performance degradation. The Entropy-MCS method (\u03b8 E =0.01) can make approximate 80% reduction in costs and results in 1% accuracy performance degradation. Unlike the results of WSD shown in Table 1 , the OracleAcc-MCS method (\u03b8 A =1.0) makes the smallest reduction rate of 75.5%. Actually in real-world applications, the selection of a stopping criterion is a tradeoff issue between labeling cost and effectiveness of the classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 2",
"ref_id": null
},
{
"start": 355,
"end": 362,
"text": "Table 1",
"ref_id": null
},
{
"start": 840,
"end": 847,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stopping Criterion",
"sec_num": null
},
{
"text": "It is interesting to investigate the impact of performance change on defining a stopping criterion, so we show an example of active learning for WSD task in Fig. 2 Figure 2 . An example of active learning for WSD on noun \"rate\" in OntoNotes. Fig. 2 shows that the accuracy performance generally increases, but apparently degrades at the iterations \"20\", \"80\", \"170\", \"190\", and \"200\", and does not change anymore during the iterations [\"130\"-\"150\"] or [\"200\"-\"220\"] in the active learning process. Actually the first time of the highest performance of 95% achieved is at \"450\", which is not shown in Fig. 2 . In other words, although the accuracy performance curve shows an increasing trend, it is not monotonously increasing. From Fig. 2 we can see that it is not easy to automatically determine the point of no significant performance improvement on the validation set, because points such as \"20\" or \"80\" would mislead final judgment. However, we do believe that the change of performance is a good signal to stop active learning process. So it is worth studying further how to combine the factor of performance change with our proposed stopping criteria of active learning.",
"cite_spans": [
{
"start": 435,
"end": 448,
"text": "[\"130\"-\"150\"]",
"ref_id": null
},
{
"start": 452,
"end": 465,
"text": "[\"200\"-\"220\"]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 157,
"end": 163,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 164,
"end": 172,
"text": "Figure 2",
"ref_id": null
},
{
"start": 242,
"end": 248,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 600,
"end": 606,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 732,
"end": 739,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The OracleAcc-MCS method would not work if only one or too few informative examples are queried at the each iteration step in the active learning. There is an open issue how many selected unlabeled examples at each iteration are adequate for the batch-based sample selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "For these stopping crieria, there is no general method to automatically determine the best threshold for any given task. It may therefore be necessary to use a dynamic threshold change technique in which the predefined threshold can be automatically modified if the performance is still significantly improving when the stopping criterion is met during active learning process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
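One possible realization of this dynamic-threshold idea, sketched under the assumption that a held-out performance estimate is available each iteration; the improvement test and the tightening factor are invented for illustration:

```python
def maybe_stop(criterion_met, recent_scores, theta, epsilon=0.005, shrink=0.5):
    # Stop only if the criterion fires AND performance has stopped improving;
    # otherwise tighten the threshold (smaller = stricter for theta_E and
    # theta_err) and keep learning.
    improving = (len(recent_scores) >= 2 and
                 recent_scores[-1] - recent_scores[-2] > epsilon)
    if criterion_met and improving:
        return False, theta * shrink
    return criterion_met, theta
```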
{
"text": "In this paper, we address the stopping criterion issue of active learning, and analyze the problems faced by some common ways to stop the active learning process. In essence, defining a stopping criterion of active learning is a problem of estimating classifier effectiveness. The purpose of defining stopping criterion of active learning is to know how much annotation is sufficient for a special task. To determine this, this paper proposes a new statistical learning approach, called minimum expected error strategy, for defining a stopping criterion through estimation of the classifier's expected error on future unlabeled examples during the active learning process. Experimental results on word sense disambiguation and text classification tasks show that new proposed minimum expected error strategy outperforms the confidence-based strategy, and achieves promising results. The interesting future work is to study how to combine the best of both strategies, and how to consider performance change to define an appropriate stopping criterion for active learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "See http://www.cs.cmu.edu/~textlearning",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the National Natural Science Foundation of China under Grant (60473140), the National 863 High-tech Project (2006AA01Z154); the Program for New Century Excellent Talents in University(NCET-05-0287).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Della",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Berger, S. A. Della, and V. J Della. 1996. A maximum entropy approach to natural language processing. Computational Linguistics 22(1):39-71.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A simple rule-based part of speech tagger",
"authors": [
{
"first": "",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1992,
"venue": "the Proceedings of the Third Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E Brill. 1992. A simple rule-based part of speech tag- ger. In the Proceedings of the Third Conference on Applied Natural Language Processing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An empirical study of the behavior of active learning for word sense disambiguation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Schein",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ungar",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of HLT-NAACL06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chen, A. Schein, L. Ungar, M. Palmer. 2006. An empirical study of the behavior of active learning for word sense disambiguation. In Proc. of HLT- NAACL06",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pattern classification and scene analysis",
"authors": [
{
"first": "R",
"middle": [
"O"
],
"last": "Duda",
"suffix": ""
},
{
"first": "P",
"middle": [
"E"
],
"last": "Hart",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. O. Duda and P. E. Hart. 1973. Pattern classification and scene analysis. New York: Wiley.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Engelson and I. Dagan. 1999. Committee-based sample selection for probabilistic classifiers",
"authors": [
{
"first": "S",
"middle": [
"A"
],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. A. Engelson and I. Dagan. 1999. Committee-based sample selection for probabilistic classifiers. Journal of Artificial Intelligence Research.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ontonotes: The 90% Solution",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of HLT-NAACL06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Hovy, M. Marcus, M. Palmer, L. Ramshaw and R. Weischedel. 2006. Ontonotes: The 90% Solution. In Proc. of HLT-NAACL06.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An empirical evaluation of knowledge sources and learning algorithm for word sense disambiguation",
"authors": [
{
"first": "Y",
"middle": [
"K"
],
"last": "Lee",
"suffix": ""
},
{
"first": ".",
"middle": [
"H T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.K. Lee and. H.T. Ng. 2002. An empirical evaluation of knowledge sources and learning algorithm for word sense disambiguation. In Proc. of EMNLP02",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A sequential algorithm for training text classifiers",
"authors": [
{
"first": "D",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of SIGIR-94",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. D. Lewis and W. A. Gale. 1994. A sequential algo- rithm for training text classifiers. In Proc. of SIGIR- 94",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Confidence-based active learning. IEEE transaction on pattern analysis and machine intelligence",
"authors": [
{
"first": "M",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "I",
"middle": [
"K"
],
"last": "Sethi",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "28",
"issue": "",
"pages": "1251--1261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Li, I. K. Sethi. 2006. Confidence-based active learn- ing. IEEE transaction on pattern analysis and ma- chine intelligence, 28(8):1251-1261.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building a large annotated corpus of English: the Penn Treebank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Employing EM in pool-based active learning for text classification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Nigram",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of 15 th ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. McCallum and K. Nigram. 1998. Employing EM in pool-based active learning for text classification. In Proc. of 15 th ICML",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rule writing or annotation: cost-efficient resource usage for based noun phrase chunking",
"authors": [
{
"first": "G",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Ngai and D. Yarowsky. 2000. Rule writing or anno- tation: cost-efficient resource usage for based noun phrase chunking. In Proc. of ACL-02",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Omega Ontology",
"authors": [
{
"first": "A",
"middle": [],
"last": "Philpot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ONTOLEX Workshop at IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Philpot, E. Hovy and P. Pantel. 2005. The Omega Ontology. In Proc. of ONTOLEX Workshop at IJCNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-criteria-based active learning for named entity recognition",
"authors": [
{
"first": "D",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2004,
"venue": "Prof. of ACL-04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Shen, J. Zhang, J. Su, G. Zhou and C. Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Prof. of ACL-04.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Example selection for bootstrapping statistical parsers",
"authors": [
{
"first": "M",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sakar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ruhlen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Crim",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of HLT-NAACL-03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Steedman, R. Hwa, S. Clark, M. Osborne, A. Sakar, J. Hockenmaier, P. Ruhlen, S. Baker and J. Crim. 2003. Example selection for bootstrapping statistical parsers. In Proc. of HLT-NAACL-03",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Active learning for natural language parsing and information extraction",
"authors": [
{
"first": "C",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Califf",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of ICML-99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. A. Thompson, M. E. Califf and R. J. Mooney. 1999. Active learning for natural language parsing and in- formation extraction. In Proc. of ICML-99.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An approach to text corpus construction which cuts annotation costs and maintains reusability of annotated data",
"authors": [
{
"first": "K",
"middle": [],
"last": "Tomanek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wermter",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Hahn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP/CoNLL07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Tomanek, J. Wermter and U. Hahn. 2007. An ap- proach to text corpus construction which cuts anno- tation costs and maintains reusability of annotated data. In Proc. of EMNLP/CoNLL07",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An evaluation of statistical spam filtering techniques",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2004,
"venue": "ACM Transactions on Asian Language Information Processing",
"volume": "3",
"issue": "4",
"pages": "243--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Zhang, J. Zhu, and T. Yao. 2004. An evaluation of statistical spam filtering techniques. ACM Transac- tions on Asian Language Information Processing, 3(4):243-269.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Active learning for word sense disambiguation with methods for addressing the class imbalance problem",
"authors": [
{
"first": "J",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP/CoNLL07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Zhu, E. Hovy. 2007. Active learning for word sense disambiguation with methods for addressing the class imbalance problem. In Proc. of EMNLP/CoNLL07",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Repeat \u2022 Use the current classifier C to label all unlabeled examples in U \u2022 Based on active learning rules R such as uncertainty sampling or committee-based sampling, present m top-ranked unlabeled examples to oracle H for labeling \u2022 Augment L with the m new examples, and remove them from U \u2022 Use L to retrain the current classifier C Until the predefined stopping criterion SC is met."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Active learning process"
},
"TABREF2": {
"html": null,
"text": ".",
"content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"4\">Active Learning for WSD task</td><td/><td/><td/></tr><tr><td/><td>0.94</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0.92</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0.9</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Accuracy</td><td>0.86 0.88</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0.84</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0.82</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">rate-n</td></tr><tr><td/><td>0.8</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>20</td><td>40</td><td>60</td><td>80</td><td>100</td><td>120</td><td>140</td><td>160</td><td>180</td><td>200</td><td>220</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"4\">Number of Learned Examples</td><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null
}
}
}
}