{ "paper_id": "I11-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:31:30.003958Z" }, "title": "Active Learning Strategies for Support Vector Machines, Application to Temporal Relation Classification", "authors": [ { "first": "Seyed", "middle": [ "Abolghasem" ], "last": "Mirroshandel", "suffix": "", "affiliation": { "laboratory": "Laboratoire d'Informatique Fondamentale de Marseille-CNRS -UMR 6166", "institution": "Universit\u00e9 Aix-Marseille", "location": { "settlement": "Marseille", "country": "France" } }, "email": "ghasem.mirroshandel@lif.univ-mrs.fr" }, { "first": "Gholamreza", "middle": [], "last": "Ghassem-Sani", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Alexis", "middle": [], "last": "Nasr", "suffix": "", "affiliation": { "laboratory": "Laboratoire d'Informatique Fondamentale de Marseille-CNRS -UMR 6166", "institution": "Universit\u00e9 Aix-Marseille", "location": { "settlement": "Marseille", "country": "France" } }, "email": "alexis.nasr@lif.univ-mrs.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Temporal relations between events is a valuable source of information which can be used in a large number of natural language processing applications such as question answering, summarization, and information extraction. Supervised temporal relation classification requires large corpora which are difficult, time consuming, and expensive to produce. Active learning strategies are well-suited to reduce this effort by efficiently selecting the most informative samples for labeling. This paper presents novel active learning strategies based on support vector machines (SVM) for temporal relation classification. A large number of empirical comparisons of different active learning algorithms and various kernel functions in SVM shows that proposed active learning strategies are effective for the given task.", "pdf_parse": { "paper_id": "I11-1007", "_pdf_hash": "", "abstract": [ { "text": "Temporal relations between events is a valuable source of information which can be used in a large number of natural language processing applications such as question answering, summarization, and information extraction. Supervised temporal relation classification requires large corpora which are difficult, time consuming, and expensive to produce. Active learning strategies are well-suited to reduce this effort by efficiently selecting the most informative samples for labeling. This paper presents novel active learning strategies based on support vector machines (SVM) for temporal relation classification. A large number of empirical comparisons of different active learning algorithms and various kernel functions in SVM shows that proposed active learning strategies are effective for the given task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The identification of temporal relations between events, in texts, is a valuable information for many natural language processing (NLP) tasks, such as summarization, question answering, and information extraction. In question answering, one expects the system to answer questions such as \"when an event occurred\", or \"what is the chronological order of some desired events\". 
In text summarization, especially in the multi-document setting, knowing the order of events is important for correctly merging related information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most existing algorithms for temporal relation learning are supervised: they rely on manual annotations of corpora. Producing such annotated corpora has proven to be a time-consuming, hard, and expensive task (Mani et al., 2006). In this paper we explore active learning techniques as a way to control and speed up the annotation process.", "cite_spans": [ { "start": 209, "end": 228, "text": "(Mani et al., 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the active learning framework, the learner has control over choosing the instances that will constitute the training set. A typical active learning algorithm begins with a small amount of annotated data, and selects one or more informative instances from a large set of unlabeled instances, named the pool. The chosen instance(s) are then labeled and added to the annotated data, and the model is updated with this new information. These steps are repeated until at least one termination condition is satisfied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While there have been numerous applications of active learning to NLP research (Xu et al., 2007), it has not been applied, to our knowledge, to temporal relation classification.", "cite_spans": [ { "start": 81, "end": 97, "text": "Xu et al., 2007)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents a novel active learning strategy for SVM-based classification algorithms. The proposed algorithm considers three measures (uncertainty, representativeness, and diversity) to select the instances that will be annotated. The method we propose is generic: it could be applied to any SVM-based classification problem. Temporal relation classification has been selected, in this paper, for illustration purposes. Our experiments show that state-of-the-art results can be reproduced with a significantly smaller part of the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows: Section 2 is about temporal relation classification and its related work. Section 3 describes some existing active learning methods. Our proposed method is presented in Section 4. Section 5 briefly presents the characteristics of the corpora that we have used. Section 6 presents the evaluation of the proposed algorithm. Finally, Section 7 concludes the paper and presents some possible future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For a given ordered pair (x 1 , x 2 ), where x 1 and x 2 are time expressions or events, a temporal information processing system identifies the type of relation that temporally links x 1 to x 2 . 
For example in \"If all the debt is converted (event 1 ) to common, Automatic Data will issue (event 2 ) about 3.6 million shares; last Monday (time 1 ), the company had (event 3 ) nearly 73 million shares outstanding.\", taken from document wsj 0541 of Time-Bank (Pustejovsky et al., 2003) , there are two temporal relations between pairs (event 1 , event 2 ) and (time 1 , event 3 ). The task of a temporal relation extraction system is to automatically tag these pairs with relations BEFORE and INCLUDES, respectively. Several researchers have focused on temporal relation learning (Chklovski and Pantel, 2005; Lapata and Lascarides, 2006; Bethard et al., 2007; Chambers et al., 2007; Bethard and Martin, 2008; Mirroshandel and Ghassem-Sani, 2010; Puscasu, 2007) among which SVM has shown good performances. In this section, we describe two of the most successful SVM-based methods.", "cite_spans": [ { "start": 459, "end": 485, "text": "(Pustejovsky et al., 2003)", "ref_id": "BIBREF14" }, { "start": 780, "end": 808, "text": "(Chklovski and Pantel, 2005;", "ref_id": "BIBREF6" }, { "start": 809, "end": 837, "text": "Lapata and Lascarides, 2006;", "ref_id": "BIBREF9" }, { "start": 838, "end": 859, "text": "Bethard et al., 2007;", "ref_id": "BIBREF3" }, { "start": 860, "end": 882, "text": "Chambers et al., 2007;", "ref_id": "BIBREF5" }, { "start": 883, "end": 908, "text": "Bethard and Martin, 2008;", "ref_id": "BIBREF2" }, { "start": 909, "end": 945, "text": "Mirroshandel and Ghassem-Sani, 2010;", "ref_id": "BIBREF12" }, { "start": 946, "end": 960, "text": "Puscasu, 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Temporal Relation Classification with SVM", "sec_num": "2" }, { "text": "Inderjeet Mani was the first to propose an SVMbased temporal relation classification model which is based on a linear kernel (Mani et al., 2006) . His system (referred to as (k M ani )) uses five temporal attributes that have been tagged in the standard corpora (Pustejovsky et al., 2003) plus the string of words that constitute the events, as well as their part of part of speech tags.", "cite_spans": [ { "start": 125, "end": 144, "text": "(Mani et al., 2006)", "ref_id": "BIBREF11" }, { "start": 262, "end": 288, "text": "(Pustejovsky et al., 2003)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Temporal Relation Classification with SVM", "sec_num": "2" }, { "text": "The other successful SVM-based temporal classification method uses a polynomial convolution tree kernel, named argument ancestor path distance kernel (AAPD), and outperforms Mani's method (Mirroshandel et al., 2010) . In this model, the algorithm adds event-event syntactic properties to the simple event features described above. In order to use syntactic properties, a convolution tree kernel is applied to the parse trees of sentences containing event pairs. Through this process, useful syntactic features can be gathered for classification by SVM. The two kernels are then polynomially combined.", "cite_spans": [ { "start": 174, "end": 215, "text": "Mani's method (Mirroshandel et al., 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Temporal Relation Classification with SVM", "sec_num": "2" }, { "text": "Supervised methods usually need a large number of annotated samples in the training phase. In most applications including temporal relation classification, the preparation of such samples is a hard, time consuming, and expensive task (Mani et al., 2006) . 
On the other hand, not all these annotated samples may be useful, because some samples contain little (or even no) new information. Active learning algorithms overcome this problem by adding to the learning model only the most informative instances, labeled by an oracle (e.g., a human expert). Three scenarios have been proposed for the selection of instances: 1) membership query synthesis, 2) stream-based selective sampling, and 3) pool-based sampling (Settles, 2010).", "cite_spans": [ { "start": 234, "end": 253, "text": "(Mani et al., 2006)", "ref_id": "BIBREF11" }, { "start": 710, "end": 725, "text": "(Settles, 2010)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Active Learning", "sec_num": "3" }, { "text": "In membership query synthesis, the model itself generates some instances rather than using real-world unlabeled instances (Angluin, 2004).", "cite_spans": [ { "start": 121, "end": 136, "text": "(Angluin, 2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Active Learning", "sec_num": "3" }, { "text": "In stream-based selective sampling, instances are presented in a stream and, for each of them, the learner decides, based on its specific control measure, whether or not to query its label (Atlas et al., 1990; Cohn et al., 1994).", "cite_spans": [ { "start": 170, "end": 190, "text": "(Atlas et al., 1990;", "ref_id": "BIBREF1" }, { "start": 191, "end": 209, "text": "Cohn et al., 1994)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Active Learning", "sec_num": "3" }, { "text": "In pool-based sampling, which is the scenario that we have chosen, a large number of unlabeled instances are collected to form the pool U. The algorithm begins with a small set of labeled data L, and then chooses one or more informative instances from U. The chosen instance(s) are labeled and added to L. A new model is then learned and the process is iterated (Lewis and Gale, 1994).", "cite_spans": [ { "start": 363, "end": 385, "text": "(Lewis and Gale, 1994)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Active Learning", "sec_num": "3" }, { "text": "In all active learning strategies, the informativeness of each unlabeled instance is evaluated by the learner, and the most informative instance(s) are labeled. Different informativeness measures have been proposed: 1) uncertainty sampling, 2) query by committee, 3) expected model change, 4) expected error reduction, 5) variance reduction, and 6) density-weighted methods (Settles, 2010).", "cite_spans": [ { "start": 375, "end": 390, "text": "(Settles, 2010)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Sample Selection Strategies", "sec_num": "3.1" }, { "text": "Uncertainty sampling is the simplest and the most commonly used selection strategy. In this strategy, the instances for which the prediction of the label is the most uncertain are selected by the learner (Lewis and Gale, 1994).", "cite_spans": [ { "start": 200, "end": 222, "text": "(Lewis and Gale, 1994)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Sample Selection Strategies", "sec_num": "3.1" }, { "text": "In query by committee, there is a committee of models trained on the current labeled data L based on different hypotheses. For each unlabeled instance, the committee models vote on its label. The most informative instance is the one with the largest disagreement among the votes (Seung et al., 1992). 
In the expected model change strategy, the most informative instance is the one which causes the greatest change to the model. In expected error reduction, the learner selects the instances which reduce the expected error of the model as much as possible (Roy and McCallum, 2001). In density-weighted methods, selected instances must be both uncertain and representative, in order to decrease the effect of outliers, which may cause problems especially in the uncertainty sampling and query by committee strategies.", "cite_spans": [ { "start": 271, "end": 291, "text": "(Seung et al., 1992)", "ref_id": "BIBREF19" }, { "start": 525, "end": 549, "text": "(Roy and McCallum, 2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Sample Selection Strategies", "sec_num": "3.1" }, { "text": "In this section, we present an active learning method based on SVM. There have been other efforts in using active learning in combination with SVM (Brinker, 2003; Xu et al., 2007); our contribution is the design of new uncertainty measures used for sample selection. In addition, the way the representativeness and diversity measures are computed and combined is novel.", "cite_spans": [ { "start": 147, "end": 162, "text": "(Brinker, 2003;", "ref_id": "BIBREF4" }, { "start": 163, "end": 179, "text": "Xu et al., 2007)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Algorithm", "sec_num": "4" }, { "text": "The algorithm is pool-based. At each iteration, k (k \u2265 1) instances are selected from a pool U. To select the most informative instance(s), three measures are used: uncertainty, representativeness and diversity. In the next subsections, we begin with an overview of multi-class classification with SVM, then introduce our three measures and describe the active learning algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Algorithm", "sec_num": "4" }, { "text": "In SVM binary classification, positive and negative instances are linearly partitioned by a hyper-plane (with maximum marginal distance to the instances) in the original or a higher-dimensional feature space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-class classification", "sec_num": "4.1" }, { "text": "In order to classify a new instance x, its distance to the hyper-plane is computed and x is assigned to the class that corresponds to the sign of the computed distance. The distance between instance x and hyper-plane H, supported by the support vectors x 1 , . . . , x l , is computed as follows (Han and Kamber, 2006):", "cite_spans": [ { "start": 292, "end": 314, "text": "(Han and Kamber, 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-class classification", "sec_num": "4.1" }, { "text": "d(x, H) = \\sum_{k=1}^{l} y_k \u03b1_k x_k x^T + b_0 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-class classification", "sec_num": "4.1" }, { "text": "where y k is the class label of support vector x k ; \u03b1 k and b 0 are numeric parameters that are determined automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-class classification", "sec_num": "4.1" }, { "text": "For multi-class classification with m classes, in the one-versus-one case, a set H of m(m\u22121)/2 hyper-planes, one for every class pair, is defined. The hyper-plane that separates classes i and j will be noted H i,j . 
We note H i \u2282 H the set of the m \u2212 1 hyper-planes that separate class i from the others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-class classification", "sec_num": "4.1" }, { "text": "In order to classify a new instance x, its distance to each hyper-plane H i,j is computed and x is assigned to class i or j, depending on the sign of the distance. At the end of this process, for every instance x, every class i has accumulated a certain number of votes, noted V i (x) (the number of times a classifier has attributed class i to instance x). The final class of x, noted C(x), will be the one that has accumulated the highest number of votes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-class classification", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C(x) = \\arg\\max_{1 \u2264 i \u2264 m} V_i(x)", "eq_num": "(2)" } ], "section": "Multi-class classification", "sec_num": "4.1" }, { "text": "Uncertainty is one of the most important measures of the informativeness of an instance. If the learner is uncertain about an instance, this shows that the learning model is not able to deal with the instance properly. As a result, knowing the correct label of this uncertain instance will improve the quality of the learning model. In the process described in subsection 4.1, there are two places where uncertainty can be measured. In the first case, a decision is taken based on the difference of two distances. The smaller the difference, the less reliable the decision is. In the second case, a decision is taken based on the result of a vote. If the outcome of the vote does not show a clear majority, the decision will be less reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty", "sec_num": "4.2" }, { "text": "Four measures of uncertainty are presented below: the first and second are based on distances, while the third and fourth are based on the result of the vote procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty", "sec_num": "4.2" }, { "text": "The uncertainty of an instance x is defined here as the distance to its closest class-separating hyper-plane.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nearest to One Hyper-Plane (NOH)", "sec_num": "4.2.1" }, { "text": "\u03d5(x) = \\min_{H \\in H_{C(x)}} |d(x, H)|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nearest to One Hyper-Plane (NOH)", "sec_num": "4.2.1" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nearest to One Hyper-Plane (NOH)", "sec_num": "4.2.1" }, { "text": "NAH defines the uncertainty of instance x as the sum of its distances to all its class-separating hyper-planes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nearest to All Hyper-Planes (NAH)", "sec_num": "4.2.2" }, { "text": "\u03d5(x) = \\sum_{H \\in H_{C(x)}} |d(x, H)| (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nearest to All Hyper-Planes (NAH)", "sec_num": "4.2.2" }, { "text": "LVM estimates the uncertainty of an instance by the difference between the two highest votes for this instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Least Votes Margin (LVM)", "sec_num": "4.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03d5(x) = V_i(x) \u2212 V_j(x)", "eq_num": "(5)" } 
], "section": "Least Votes Margin (LVM)", "sec_num": "4.2.3" }, { "text": "where i is the class that has collected the highest number of votes and j the class that has collected the second higher number of votes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Least Votes Margin (LVM)", "sec_num": "4.2.3" }, { "text": "VE is based on the entropy of the distribution of the vote outcome:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Votes Entropy (VE)", "sec_num": "4.2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03d5(x) = \u2212 1\u2264i\u2264m P (V i (x)) log P (V i (x))", "eq_num": "(6)" } ], "section": "Votes Entropy (VE)", "sec_num": "4.2.4" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Votes Entropy (VE)", "sec_num": "4.2.4" }, { "text": "P (V i (x)) is simply estimated as its relative frequency V i (x)/m.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Votes Entropy (VE)", "sec_num": "4.2.4" }, { "text": "Representativeness is another important measure for choosing samples in active learning. In figure 1, sample 1 is the nearest instance to decision boundary, it is therefore the instance that will be selected using uncertainty criterion. But it should be clear that this sample is not appropriate for selection, annotation, and addition to the training data, because it is in fact an outlier and non representative instance. This simple example shows that uncertainty measure alone is not suited to fight against outliers and noisy samples. In order to prevent the learner to select such instances, a representativeness measure \u03c8 is used. It simply computes the average distance between an instance and all other instances in the pool:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representativeness", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c8(x) = 1 N x \u2208U dist(x, x )", "eq_num": "(7)" } ], "section": "Representativeness", "sec_num": "4.3" }, { "text": "where N is the number of instances in the pool, and dist is the distance between two samples which can be computed by simply applying a kernel function on them:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representativeness", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "dist(x i , x j ) = kernel(x i , x j )", "eq_num": "(8)" } ], "section": "Representativeness", "sec_num": "4.3" }, { "text": "As it is shown in equation 7, the samples which are more similar to other samples of the pool will be considered to be more representative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representativeness", "sec_num": "4.3" }, { "text": "In order to take into account representativeness in the active learning algorithm, the distance between every sample pairs of the pool must be computed. This computation is a costly process, but these distances can be computed only once for the whole active learning algorithm. 
Algorithm 1 describes how the representativeness and uncertainty measures have been combined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representativeness", "sec_num": "4.3" }, { "text": "Diversity is the third measure that is used for instance selection. Instances that are both uncertain and representative may be very close to each other, and it might be interesting, in order to better cover the problem space, to select instances that are different from each other. This is done by taking diversity into account. Figure 2 illustrates the effect of considering the diversity measure on a simple problem. In this problem, the learner chooses 4 instances at each iteration. Based on the uncertainty and representativeness measures, samples 1, 2, 3, and 5 should be selected. However, 1, 2 and 3 are very similar, and only one of these samples may be enough for learning. Moreover, selecting samples 7 and 8 will lead to a better coverage of the problem space.", "cite_spans": [], "ref_spans": [ { "start": 330, "end": 338, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Diversity", "sec_num": "4.4" }, { "text": "In our algorithm, diversity is taken into account after uncertainty and representativeness. First, B I instances are chosen, based on uncertainty and representativeness. A distance matrix is then constructed, based on the distance measure of equation 8. The B I instances are then grouped into B F (B F < B I ) clusters, using hierarchical clustering, and the centroid of each cluster is selected for labeling. This process is explained in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity", "sec_num": "4.4" }, { "text": "The pseudo-code of our active learning algorithm is shown in Algorithm 1. This algorithm first trains the model based on the initial labeled data, and applies a combination of the uncertainty and representativeness measures to select B I samples from the pool. Then hierarchical clustering is applied to the extracted samples to select the B F most diverse samples. The chosen samples are then labeled and added to the labeled training set. This process is iterated until at least one termination condition is satisfied. In our experiments, the algorithm stops when all instances of the pool have been selected and labeled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Algorithm", "sec_num": "4.5" }, { "text": "Our algorithm may seem much more costly than the original SVM algorithm. However, it is easy to show, similarly to (Brinker, 2003), that it only multiplies the total computational complexity of the original SVM by a coefficient of N/B F (where N is the final number of labeled instances).", "cite_spans": [ { "start": 113, "end": 128, "text": "(Brinker, 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Algorithm", "sec_num": "4.5" }, { "text": "Two standard corpora were used for our experiments: TimeBank (v. 1.2) (Pustejovsky et al., 2003) and Opinion (www.timeml.org). TimeBank is composed of 183 newswire documents and 64 077 words, and Opinion comprises 73 documents with 38 709 words. 
These two datasets have been annotated with TimeML (Pustejovsky et al., 2004). There are 14 temporal relation types (SIMULTANEOUS, IDENTITY, BEFORE, AFTER, IBEFORE, IAFTER, INCLUDES, IS INCLUDED, DURING, DURING INV, BEGINS, BEGUN BY, ENDS, ENDED BY) in the TLink class of TimeML. Similarly to (Mani et al., 2006; Chambers et al., 2007), we used a normalized version of these 14 temporal relation types, which contains only the following six temporal relations:", "cite_spans": [ { "start": 69, "end": 95, "text": "(Pustejovsky et al., 2003)", "ref_id": "BIBREF14" }, { "start": 296, "end": 322, "text": "(Pustejovsky et al., 2004)", "ref_id": "BIBREF15" }, { "start": 537, "end": 556, "text": "(Mani et al., 2006;", "ref_id": "BIBREF11" }, { "start": 557, "end": 579, "text": "Chambers et al., 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Description", "sec_num": "5" }, { "text": "SIMULTANEOUS, ENDS, BEGINS, BEFORE, IBEFORE, and INCLUDES. In order to convert the 14 relations into 6, the inverse relations were omitted (e.g., BEFORE and AFTER), and IDENTITY and SIMULTANEOUS, as well as IS INCLUDED and DURING, were collapsed, respectively. In our experiments, as in several previous works, we merged the two datasets to generate a single corpus called OTC. Table 1 shows the normalized TLink class distribution (only for Event-Event relations) for the OTC corpus.", "cite_spans": [], "ref_spans": [ { "start": 362, "end": 369, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpus Description", "sec_num": "5" }, { "text": "The algorithm described above was evaluated on the OTC corpus with our four uncertainty measures, with and without representativeness and diversity. We used random instance selection (i.e., passive learning) as the baseline strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "6" }, { "text": "Several kernels can be used for such experiments. As explained in section 2, we decided to use the kernel proposed in (Mani et al., 2006), which we will refer to as Mani's kernel, and the Argument Ancestor Path Distance (AAPD) polynomial kernel proposed in (Mirroshandel et al., 2010). AAPD polynomial is the state-of-the-art pattern-based algorithm, which exclusively combines gold-standard features of events and grammatical structures of sentences.", "cite_spans": [ { "start": 118, "end": 137, "text": "(Mani et al., 2006)", "ref_id": "BIBREF11" }, { "start": 258, "end": 285, "text": "(Mirroshandel et al., 2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "6" }, { "text": "All evaluations are based on a 5-fold cross-validation. The original corpus was randomly partitioned into 5 parts, of which a single part was retained for testing the model, and the remaining 4 parts were used for training and for applying our instance selection strategies. This process was then repeated 5 times (the folds), with each of the 5 parts being used exactly once as the test data. To perform the experiments, we started from an initial labeled set of 100 randomly selected samples, and in each iteration, 25 samples were selected, labeled, and added to the previously labeled set. Figures 3 and 4 show the result of applying our four uncertainty measures for instance selection in OTC, using Mani's (Figure 3) and AAPD (Figure 4) kernels. 
The figures show that all the proposed uncertainty instance selection strategies are effective and lead to learning curves that are above the baseline. Vote-based measures have outperformed distance-based ones. Among the two distance-based measures, NAH led to better results than NOH, showing that averaging (aggregation) over the distances to the different separating hyper-planes is more robust than taking into account only the distance to the closest one.", "cite_spans": [], "ref_spans": [ { "start": 599, "end": 614, "text": "Figures 3 and 4", "ref_id": "FIGREF3" }, { "start": 721, "end": 729, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 749, "end": 758, "text": "(Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "6" }, { "text": "The two vote-based methods led to very close results, which seems to indicate that the system usually hesitates between two classes (and not more) when trying to classify an instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty Measure Alone", "sec_num": "6.1" }, { "text": "Representativeness has been introduced in order to fight against outliers. Such outliers have two different origins. The first one is data sparseness: some temporal relation events are poorly represented in the data. Eliminating such instances will degrade the results on the corresponding class but will introduce less noise in the data. The second origin of outliers is the difficulty of the problem, even for human annotators (Pustejovsky et al., 2003). This causes some mistakes in annotation and generates some outliers. In the second series of experiments, we combined a representativeness measure with the different uncertainty instance selection strategies to tackle the side effects of outliers. In our different experiments, the best value for the uncertainty coefficient (\u03b1) was 0.65. Figures 5 and 6 show the accuracy improvement when adding representativeness to uncertainty with Mani's and AAPD kernels, respectively. We have chosen to represent just the improvement rather than the learning accuracy, because the learning curves were not easy to compare. The results show that distance-based measures are more sensitive to outliers than vote-based ones. Figures 5 and 6 also show that the representativeness measure has less impact on the AAPD kernel than it has on Mani's kernel. This is because the AAPD kernel is more resistant to outliers than Mani's kernel.", "cite_spans": [ { "start": 426, "end": 452, "text": "(Pustejovsky et al., 2003)", "ref_id": "BIBREF14" }, { "start": 999, "end": 1008, "text": "Figures 5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Combining Uncertainty and Representativeness Measures", "sec_num": "6.2" }, { "text": "In the last series of experiments, diversity was added to the instance selection procedure. In each iteration, 80 instances of the pool were first selected by a combination of the uncertainty and representativeness measures. Next, a hierarchical clustering method was used to select the final 25 instances. The accuracy improvement, as shown in Figures 7 and 8, is moderate.", "cite_spans": [], "ref_spans": [ { "start": 347, "end": 362, "text": "Figures 7 and 8", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Combining Uncertainty, Representativeness and Diversity", "sec_num": "6.3" }, { "text": "The reasons why introducing diversity did not have a greater impact on the results are not clear. That may be due to the way diversity was introduced in our model. 
It could also come from the distribution of the data: if instances that are both uncertain and representative are not close to each other, selecting instances that are different from each other for a better coverage of the problem space is not an issue. More work has to be done to investigate that point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Uncertainty, Representativeness and Diversity", "sec_num": "6.3" }, { "text": "The final learning curves, when uncertainty, representativeness, and diversity were all considered, are shown in Figures 9 and 10. As shown, vote-based uncertainty measures still obtain better results than distance-based measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Uncertainty, Representativeness and Diversity", "sec_num": "6.3" }, { "text": "In this paper, we have addressed the problem of active learning based on support vector machines for temporal relation classification. Three different kinds of measures have been used for selecting the most informative instances: uncertainty, representativeness and diversity. The results showed that the three measures improved the learning curve, although diversity had a moderate effect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Future work will focus on three points. The first one is trying other sample selection strategies, such as query by committee. The second will focus on combining the two families of uncertainty measures that we have proposed: distance-based and vote-based. The third one is about diversity. As mentioned above, we do not know whether this phenomenon is not well handled by the model or whether it is simply not an issue for the problem at hand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "This work has been funded by the French Agence Nationale pour la Recherche, through the project SEQUOIA (ANR-08-EMER-013).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Training connectionist networks with queries and selective sampling", "authors": [ { "first": "L", "middle": [], "last": "Atlas", "suffix": "" }, { "first": "D", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "R", "middle": [], "last": "Ladner", "suffix": "" }, { "first": "", "middle": [], "last": "El-Sharkawi", "suffix": "" }, { "first": "", "middle": [], "last": "Marks", "suffix": "" } ], "year": 1990, "venue": "Advances in neural information processing systems 2", "volume": "", "issue": "", "pages": "566--573", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Atlas, D. Cohn, R. Ladner, MA El-Sharkawi, and II Marks. 1990. Training connectionist networks with queries and selective sampling. In Advances in neural information processing systems 2, pages 566-573.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning semantic links from a corpus of parallel temporal and causal relations", "authors": [ { "first": "S", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bethard and J.H. Martin. 2008. 
Learning semantic links from a corpus of parallel temporal and causal relations. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 177-180. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Finding temporal structure in text: Machine learning of syntactic temporal relations", "authors": [ { "first": "S", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" }, { "first": "S", "middle": [], "last": "Klingenstein", "suffix": "" } ], "year": 2007, "venue": "International Journal of Semantic Computing", "volume": "", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bethard, J.H. Martin, and S. Klingenstein. 2007. Finding temporal structure in text: Machine learning of syntactic temporal relations. International Journal of Semantic Computing, 1(4).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Incorporating diversity in active learning with support vector machines", "authors": [ { "first": "K", "middle": [], "last": "Brinker", "suffix": "" } ], "year": 2003, "venue": "MACHINE LEARNING-INTERNATIONAL WORKSHOP THEN CONFERENCE", "volume": "20", "issue": "", "pages": "59--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Brinker. 2003. Incorporating diversity in active learning with support vector machines. In MACHINE LEARNING-INTERNATIONAL WORKSHOP THEN CONFERENCE, volume 20, pages 59-66.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Classifying temporal relations between events", "authors": [ { "first": "N", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "S", "middle": [], "last": "Wang", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "173--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Chambers, S. Wang, and D. Jurafsky. 2007. Classifying temporal relations between events. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 173-176. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Global path-based refinement of noisy graphs applied to verb semantics", "authors": [ { "first": "T", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "P", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2005, "venue": "Natural Language Processing-IJCNLP 2005", "volume": "", "issue": "", "pages": "792--803", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Chklovski and P. Pantel. 2005. Global path-based refinement of noisy graphs applied to verb semantics. Natural Language Processing-IJCNLP 2005, pages 792-803.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving generalization with active learning", "authors": [ { "first": "D", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "L", "middle": [], "last": "Atlas", "suffix": "" }, { "first": "R", "middle": [], "last": "Ladner", "suffix": "" } ], "year": 1994, "venue": "Machine Learning", "volume": "15", "issue": "", "pages": "201--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Cohn, L. Atlas, and R. Ladner. 1994. Improving generalization with active learning. 
Machine Learning, 15(2):201-221.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Data mining: concepts and techniques", "authors": [ { "first": "J", "middle": [], "last": "Han", "suffix": "" }, { "first": "M", "middle": [], "last": "Kamber", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Han and M. Kamber. 2006. Data mining: concepts and techniques. Morgan Kaufmann.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning sentence-internal temporal relations", "authors": [ { "first": "M", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "A", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2006, "venue": "Journal of Artificial Intelligence Research", "volume": "27", "issue": "1", "pages": "85--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Lapata and A. Lascarides. 2006. Learning sentence-internal temporal relations. Journal of Artificial Intelligence Research, 27(1):85-117.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A sequential algorithm for training text classifiers", "authors": [ { "first": "D", "middle": [ "D" ], "last": "Lewis", "suffix": "" }, { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "3--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "D.D. Lewis and W.A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pages 3-12. Springer-Verlag New York, Inc.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Machine learning of temporal relations", "authors": [ { "first": "M", "middle": [], "last": "Mani", "suffix": "" }, { "first": "B", "middle": [], "last": "Verhagen", "suffix": "" }, { "first": "C", "middle": [ "M" ], "last": "Wellner", "suffix": "" }, { "first": "J", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "753--760", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Mani, M. Verhagen, B. Wellner, C.M. Lee, and J. Pustejovsky. 2006. Machine learning of temporal relations. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 753-760. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Temporal Relations Learning with a Bootstrapped Cross-document Classifier", "authors": [ { "first": "S", "middle": [ "A" ], "last": "Mirroshandel", "suffix": "" }, { "first": "G", "middle": [], "last": "Ghassem-Sani", "suffix": "" }, { "first": ";", "middle": [ "M" ], "last": "Khayyamian", "suffix": "" } ], "year": 2010, "venue": "Proceeding of the 2010 conference on ECAI 2010: 19th European Conference on Artificial Intelligence", "volume": "26", "issue": "", "pages": "68--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "S.A. Mirroshandel and G. Ghassem-Sani. 2010. 
Temporal Relations Learning with a Bootstrapped Cross-document Classifier. In Proceedings of the 2010 conference on ECAI 2010: 19th European Conference on Artificial Intelligence, pages 829-834. IOS Press. S.A. Mirroshandel, G. Ghassem-Sani, and M. Khayyamian. 2010. Using Syntactic-Based Kernels for Classifying Temporal Relations. Journal of Computer Science and Technology, 26(1):68-80.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Wvali: Temporal relation identification by syntactico-semantic analysis", "authors": [ { "first": "G", "middle": [], "last": "Puscasu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on SemEval", "volume": "", "issue": "", "pages": "484--487", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Puscasu. 2007. Wvali: Temporal relation identification by syntactico-semantic analysis. In Proceedings of the 4th International Workshop on SemEval, pages 484-487.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The timebank corpus", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "R", "middle": [], "last": "Sauri", "suffix": "" }, { "first": "A", "middle": [], "last": "See", "suffix": "" }, { "first": "R", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "A", "middle": [], "last": "Setzer", "suffix": "" }, { "first": "D", "middle": [], "last": "Radev", "suffix": "" }, { "first": "B", "middle": [], "last": "Sundheim", "suffix": "" }, { "first": "D", "middle": [], "last": "Day", "suffix": "" }, { "first": "L", "middle": [], "last": "Ferro", "suffix": "" } ], "year": 2003, "venue": "Corpus Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pustejovsky, P. Hanks, R. Sauri, A. See, R. Gaizauskas, A. Setzer, D. Radev, B. Sundheim, D. Day, L. Ferro, et al. 2003. The timebank corpus. In Corpus Linguistics, volume 2003, page 40.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Toward optimal active learning through sampling estimation of error reduction", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "B", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "R", "middle": [], "last": "Sauri", "suffix": "" }, { "first": "J", "middle": [], "last": "Castano", "suffix": "" }, { "first": "J", "middle": [], "last": "Littman", "suffix": "" }, { "first": "R", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "A", "middle": [], "last": "Setzer", "suffix": "" }, { "first": "G", "middle": [], "last": "Katz", "suffix": "" }, { "first": "I", "middle": [], "last": "Mani", "suffix": "" } ], "year": 2001, "venue": "MACHINE LEARNING-INTERNATIONAL WORKSHOP THEN CONFERENCE", "volume": "", "issue": "", "pages": "441--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pustejovsky, B. Ingria, R. Sauri, J. Castano, J. Littman, R. Gaizauskas, A. Setzer, G. Katz, and I. Mani. 2004. The specification language TimeML. The Language of Time: A Reader. Oxford University Press, Oxford. N. Roy and A. McCallum. 2001. Toward optimal active learning through sampling estimation of error reduction. In MACHINE LEARNING-INTERNATIONAL WORKSHOP THEN CONFERENCE, pages 441-448. 
Citeseer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An analysis of active learning strategies for sequence labeling tasks", "authors": [ { "first": "B", "middle": [], "last": "Settles", "suffix": "" }, { "first": "M", "middle": [], "last": "Craven", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1070--1079", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Settles and M. Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1070-1079. As- sociation for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multipleinstance active learning", "authors": [ { "first": "B", "middle": [], "last": "Settles", "suffix": "" }, { "first": "M", "middle": [], "last": "Craven", "suffix": "" }, { "first": "S", "middle": [], "last": "Ray", "suffix": "" } ], "year": 2008, "venue": "Advances in Neural Information Processing Systems (NIPS. Citeseer", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Settles, M. Craven, and S. Ray. 2008. Multiple- instance active learning. In In Advances in Neural In- formation Processing Systems (NIPS. Citeseer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Active Learning Literature Survey", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles. 2010. Active Learning Literature Survey. Technical Report Technical Report 1648, University of Wisconsin-Madison.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Query by committee", "authors": [ { "first": "H", "middle": [ "S" ], "last": "Seung", "suffix": "" }, { "first": "M", "middle": [], "last": "Opper", "suffix": "" }, { "first": "H", "middle": [], "last": "Sompolinsky", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the fifth annual workshop on Computational learning theory", "volume": "", "issue": "", "pages": "287--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "H.S. Seung, M. Opper, and H. Sompolinsky. 1992. Query by committee. In Proceedings of the fifth annual workshop on Computational learning theory, pages 287-294. ACM.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Incorporating diversity and density in active learning for relevance feedback", "authors": [ { "first": "Z", "middle": [], "last": "Xu", "suffix": "" }, { "first": "R", "middle": [], "last": "Akella", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2007, "venue": "Advances in Information Retrieval", "volume": "", "issue": "", "pages": "246--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Xu, R. Akella, and Y. Zhang. 2007. Incorporating diversity and density in active learning for relevance feedback. Advances in Information Retrieval, pages 246-257.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "The weakness of uncertainty measure for dealing with outliers. Circles and triangles represent labeled instances while squares represent unlabeled instances." 
}, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "The necessity of applying diversity measure to select samples from the whole problem space." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "THE PROPOSED ACTIVE LEARNING \u03b1: Uncertainty coefficient L: Labeled set U: Unlabeled pool \u03d5(x): Uncertainty measure \u03c8(x): Representativeness measure B I : Initial query batch size B F : Final query batch size while termination condition is not satisfied do \u03b8 = train(L); T I = \u2205; for i = 1 to B I do // Find most uncertain and representative instanc\u00ea x = arg max x\u2208U [\u03b1\u03d5(x) + (1 \u2212 \u03b1)\u03c8(x)]; T I = T I \u222a {x}; end for Apply Hierarchical clustering on" }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Learning curves for different uncertainty instance selection strategies applied to OTC using Mani's kernel." }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "Learning curves for different uncertainty instance selection strategies applied to OTC using AAPD kernel." }, "FIGREF5": { "type_str": "figure", "uris": null, "num": null, "text": "Accuracy improvement when adding representativeness measure to the uncertainty instance selection in Mani's kernel." }, "FIGREF6": { "type_str": "figure", "uris": null, "num": null, "text": "shows the accuracy improvement when adding representativeness to uncertainty with Mani's (resp. AAPD) kernel. We have chosen to Accuracy improvement when adding representativeness measure to the uncertainty instance selection in AAPD kernel." }, "FIGREF7": { "type_str": "figure", "uris": null, "num": null, "text": "Accuracy improvement when adding diversity in the instance selection with Mani's kernel." }, "FIGREF8": { "type_str": "figure", "uris": null, "num": null, "text": "Accuracy improvement when adding diversity in the instance selection with AAPD kernel." }, "FIGREF9": { "type_str": "figure", "uris": null, "num": null, "text": "Learning curves for combined uncertainty, representative and diversity measures with Mani's kernel. Learning curves for combined uncertainty, representative and diversity measures with AAPD kernel." }, "TABREF1": { "num": null, "content": "", "html": null, "type_str": "table", "text": "The normalized TLink class distribution in OTC." } } } }