{ "paper_id": "I08-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:42.291078Z" }, "title": "Mining the Web for Relations between Digital Devices using a Probabilistic Maximum Margin Model", "authors": [ { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "", "affiliation": { "laboratory": "", "institution": "Iowa State University Ames", "location": { "postCode": "50010", "region": "IA" } }, "email": "oksayakh@cs.iastate.edu" }, { "first": "Barbara", "middle": [], "last": "Rosario", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel Research Santa Clara", "location": { "postCode": "95054", "region": "CA" } }, "email": "barbara.rosario@intel.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Searching and reading the Web is one of the principal methods used to seek out information to resolve problems about technology in general and digital devices in particular. This paper addresses the problem of text mining in the digital devices domain. In particular, we address the task of detecting semantic relations between digital devices in the text of Web pages. We use a Na\u00efve Bayes model trained to maximize the margin and compare its performance with several other comparable methods. We construct a novel dataset which consists of segments of text extracted from the Web, where each segment contains pairs of devices. We also propose a novel, inexpensive and very effective way of getting people to label text data using a Web service, the Mechanical Turk. Our results show that the maximum margin model consistently outperforms the other methods.", "pdf_parse": { "paper_id": "I08-1036", "_pdf_hash": "", "abstract": [ { "text": "Searching and reading the Web is one of the principal methods used to seek out information to resolve problems about technology in general and digital devices in particular. This paper addresses the problem of text mining in the digital devices domain. In particular, we address the task of detecting semantic relations between digital devices in the text of Web pages. We use a Na\u00efve Bayes model trained to maximize the margin and compare its performance with several other comparable methods. We construct a novel dataset which consists of segments of text extracted from the Web, where each segment contains pairs of devices. We also propose a novel, inexpensive and very effective way of getting people to label text data using a Web service, the Mechanical Turk. Our results show that the maximum margin model consistently outperforms the other methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the digital home domain, home networks are moving beyond the common infrastructure of routers and wireless access points to include application-oriented devices like network attached storage, Internet telephones (VOIP), digital video recorders (e.g., Tivo), media players, entertainment PCs, home automation, and networked photo printers. There is an ongoing challenge associated with domestic network design, technology education, device setup, repair, and tuning. 
In this digital home setting, searching the Web is one of the principle methods used to seek out information and to resolve problems about technology in general and about digital devices in particular (Bly et al., 2006) .", "cite_spans": [ { "start": 670, "end": 688, "text": "(Bly et al., 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper addresses the problem of automatic text mining in the digital networks domain. Understanding the relations between entities in natural language sentences is a crucial step toward the goal of text mining. We address the task of identifying and extracting the sentences from Web pages which expressed a relation between two given digital devices in contrast to sentences in which these devices cooccur.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As an example, consider a user who is looking for information on digital video recorders (DVR), in particular, on how she can use a DVR with a PC. This user will not be satisfied with finding Web pages that simply mention these devices (such as the many products catalogs or shopping sites), but rather, the user is interested in retrieving and reading only the Web pages in which a specific relation between the two devices is expressed. The user is interested to learn that, for example, \"Any modern Windows PC can be used for DVR duty\" or that it is possible to transfer data from a DVR to a PC (\"You can simply take out the HD from the DVR, hook it up to the PC, and copy the videos over to the PC\"). 1 The specific task addressed in this paper is the following: given a pair of devices, search the Web and extract only the sentences in which the devices are actually involved in an activity or a relation in the retrieved Web pages.", "cite_spans": [ { "start": 705, "end": 706, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Note that we do not attempt to identify the type of relationship between devices but rather we classify sentences into whether the relation or activity is present or not, and thus we frame the problem as a binary text classification problem. 2 We propose a directed maximum margin probabilistic model to solve this classification task. Maximum margin probabilistic models have received a lot of attention in the machine learning and natural language processing literature. These models are trained to maximize the smallest difference between the probabilities of the true class and the best alternative class. Approaches such as maximum margin Markov networks (M3N) (Taskar et al., 2003) have been considered in prediction problems in which the goal is to assign a label to each word in the sentence or a document (such as part of speech tagging). It has also been shown that training of Bayesian networks by maximizing the margin can result in better performance than M3N in a flat-table structured domain (simulated and UCI repository datasets) and a structured prediction problem (protein secondary structure) (Guo et al., 2005) . Given this background, we draw our attention to the application of maximum margin probabilistic models to a text classification task. We consider a directed model, where the parameters represent a probability distribution for words in each class (maximum margin equivalent of a Na\u00efve Bayes). 
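To make the notion of a probabilistic margin concrete, the following sketch (in Python, with hypothetical names; not the authors' code) computes, for a single example, the gap between the log-probability of the true class and that of the best alternative class, the quantity these models are trained to keep large:

    def example_margin(log_posterior, true_class):
        # log_posterior maps each class label to log P(c | s) for one example s.
        # The probabilistic margin is log P(c_true | s) minus the log-probability
        # of the best competing class; a positive value means a correct prediction.
        best_other = max(v for c, v in log_posterior.items() if c != true_class)
        return log_posterior[true_class] - best_other

    # e.g. example_margin({'relation': -0.36, 'noRelation': -1.20}, 'relation')
    # is positive, so this example is classified correctly with some confidence.
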
We evaluate the maximum margin model and compare its performance with the equivalent joint likelihood model (Na\u00efve Bayes), conditional likelihood model (logistic regression) and support vector machines (SVM) on the relationship extraction task described above, as well as several other classification methods. Our results show that the maximum margin Na\u00efve Bayes outperforms the other methods in terms of classification accuracy. To train such a model, manually labeled data is required, which is usually slow and expensive to acquire. To address this, we propose a novel, inexpensive and very effective way of getting people to label text data using the Mechanical Turk, an Amazon website 3 where people earn \"micro-money\" for completing tasks which are simple for humans to accomplish. The paper is organized as follows: in Section 2 we discuss related work. In Section 3 we review joint likelihood and conditional likelihood models and maximum margin Na\u00efve Bayes. In Section 4 we describe the collection of the training sentences, and how Mechanical Turk was used to construct the labels for the data. Section 5 introduces the experimental setup and presents performance results for each of the algorithms. We analyze Na\u00efve Bayes, maximum margin Na\u00efve Bayes and logistic regression in terms of the learned probability distributions in Section 6. Section 7 concludes with discussion.", "cite_spans": [ { "start": 242, "end": 243, "text": "2", "ref_id": null }, { "start": 666, "end": 687, "text": "(Taskar et al., 2003)", "ref_id": "BIBREF13" }, { "start": 1113, "end": 1131, "text": "(Guo et al., 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been a spate of work on relation extraction in recent years. However, many papers actually address the task of role extraction: (usually two) entities are identified and the relationship is implied by the co-occurrence of these entities or by some linguistic expression (Agichtein and Gravano, 2000; Zelenko et al., 2003) .", "cite_spans": [ { "start": 280, "end": 309, "text": "(Agichtein and Gravano, 2000;", "ref_id": "BIBREF0" }, { "start": 310, "end": 331, "text": "Zelenko et al., 2003)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related work 2.1 Relation extraction", "sec_num": "2" }, { "text": "Several papers propose the use of machine learning models and probabilistic models for relation extraction: Na\u00efve Bayes for the relation subcellularlocation in the bio-medical domain (Craven, 1999) or for person-affiliation and organization-location (Zelenko et al., 2003) . Rosario and Hearst (2005) have used a more complicated dynamic graphical model to identify interaction types between proteins and to simultaneously extract the proteins.", "cite_spans": [ { "start": 183, "end": 197, "text": "(Craven, 1999)", "ref_id": "BIBREF3" }, { "start": 250, "end": 272, "text": "(Zelenko et al., 2003)", "ref_id": "BIBREF16" }, { "start": 275, "end": 300, "text": "Rosario and Hearst (2005)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work 2.1 Relation extraction", "sec_num": "2" }, { "text": "Probabilistic graphical models and different approaches to training them have received a lot of attention in application to natural language processing. 
McCallum and Nigam (1998) showed that Na\u00efve Bayes can be a very accurate model for text categorization.", "cite_spans": [ { "start": 153, "end": 178, "text": "McCallum and Nigam (1998)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Maximum margin models", "sec_num": "2.2" }, { "text": "Since probabilistic graphical models represent joint probability distributions whereas classification focuses on the conditional probability, there has been debate regarding the objective that should be maximized in order to train these models. Ng and Jordan (2001) have compared a joint likelihood model (Na\u00efve Bayes) and its discriminative counterpart (logistic regression), and they have shown that while for large number of examples logistic regression has a lower error rate, Na\u00efve Bayes often outperforms logistic regression for smaller data sets. However, Klein and Manning (2002) showed that for natural language and text processing tasks, conditional models are usually better than joint likelihood models. Yakhnenko et al. (2005) also showed that conditional models suffer from overfitting in text and sequence structured domains.", "cite_spans": [ { "start": 245, "end": 265, "text": "Ng and Jordan (2001)", "ref_id": "BIBREF10" }, { "start": 563, "end": 587, "text": "Klein and Manning (2002)", "ref_id": "BIBREF7" }, { "start": 716, "end": 739, "text": "Yakhnenko et al. (2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Maximum margin models", "sec_num": "2.2" }, { "text": "In recent years, the interest in learning parameters of probabilistic models by maximizing the probabilistic margin has developed. Taskar et al. (2003) have solved the problem of learning Markov networks (undirected graphs) by maximizing the margin. Their work has focused on likelihood based structured classification where the goal is to assign a class to each word in the sentence or a document. Guo et al. (2005) have proposed a solution to learning parameters of the maximum margin Bayesian Networks.", "cite_spans": [ { "start": 131, "end": 151, "text": "Taskar et al. (2003)", "ref_id": "BIBREF13" }, { "start": 399, "end": 416, "text": "Guo et al. (2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Maximum margin models", "sec_num": "2.2" }, { "text": "Surprisingly, little has been done in applying probabilistic models trained to maximize the margin to simple classification tasks (to the best of our knowledge). Therefore, since the Na\u00efve Bayes model has been shown to be a successful algorithm for many text classification tasks (McCallum and Nigam, 1998) we suggest learning the parameters of Na\u00efve Bayes model to maximize the probabilistic margin. We apply the Na\u00efve Bayes model trained to maximize the margin to a relation extraction task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin models", "sec_num": "2.2" }, { "text": "We now describe the background in probabilistic models as well as different approaches to parameter estimation for probabilistic models. In particular, we describe Na\u00efve Bayes, logistic regression (analogous to conditionally trained Na\u00efve Bayes) and then introduce Na\u00efve Bayes trained to maximize the margin. First, we introduce some notation. Let D be a corpus that consists of training examples. Let T be the size of D. 
We represent each example with a tuple s, c where s is a sentence or a document, and c is a label from a set of all possible labels, c \u2208 C = {c 1 ...c m }. Let D= s i , c i where superscript 1 \u2264 i \u2264 T is the index of the document in the corpus, and c i is the label of example s i . Let V be vocabulary of D, so that every document s consists of elements of V . We will use s j to denote a word from s in position j, where 1 \u2264 j \u2264 length(s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint and conditional likelihood models and maximum margin", "sec_num": "3" }, { "text": "A probabilistic model assigns to each instance s a joint probability of the instance and the class P (s, c). If the probability distribution is known, then a new instance s new can be classified by giving it a label which has the highest probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative and discriminative Na\u00efve Bayes models", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c = arg max c k \u2208C P (c k |s new )", "eq_num": "(1)" } ], "section": "Generative and discriminative Na\u00efve Bayes models", "sec_num": "3.1" }, { "text": "Joint likelihood models learn the parameters by maximizing the probability of an example and its class, P (s, c). Na\u00efve Bayes multinomial, for instance, assumes that all words in the sentence are independent given the class, and computes this probability as P (c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative and discriminative Na\u00efve Bayes models", "sec_num": "3.1" }, { "text": "length(s) j=1 P (s j |c).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative and discriminative Na\u00efve Bayes models", "sec_num": "3.1" }, { "text": "Each of P (s j |c) and P (c) are estimated from the training data using relative frequency estimates. From here on we will refer to joint likelihood Na\u00efve Bayes multinomial as NB-JL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative and discriminative Na\u00efve Bayes models", "sec_num": "3.1" }, { "text": "Since the conditional probability is needed for the classification task, it has been suggested to solve the maximization problem and train the model so that the choice of the parameters maximizes P (c|s) directly. One can use a joint likelihood model to obtain joint probability distribution P (s, c) and then use the definition of conditional probability to get P (c|s) = P (s, c)/ c k \u2208C P (s, c k ). The solutions that maximize this objective function are searched for by using gradient ascent methods. 
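Before turning to the conditional and maximum margin variants, the relative-frequency estimation and the classification rule of Eq. (1) for NB-JL can be illustrated with the following minimal sketch (hypothetical names, not the authors' implementation; Laplace smoothing with parameter alpha is an added assumption, whereas the experiments in Section 5 instead map rare and unseen words to a smoothing token):

    from collections import Counter
    import math

    def train_nb_jl(docs, labels, alpha=1.0):
        # docs: list of token lists; labels: list of class labels of the same length.
        # Relative-frequency estimates of P(c) and P(w|c), with add-alpha smoothing.
        classes = sorted(set(labels))
        vocab = sorted({w for d in docs for w in d})
        prior = Counter(labels)
        word_counts = {c: Counter() for c in classes}
        for d, c in zip(docs, labels):
            word_counts[c].update(d)
        log_prior = {c: math.log(prior[c] / len(docs)) for c in classes}
        log_cond = {}
        for c in classes:
            total = sum(word_counts[c].values()) + alpha * len(vocab)
            log_cond[c] = {w: math.log((word_counts[c][w] + alpha) / total) for w in vocab}
        return classes, log_prior, log_cond

    def classify_nb(doc, classes, log_prior, log_cond):
        # Eq. (1): arg max_c  log P(c) + sum_j log P(s_j | c); unseen words are skipped here.
        score = lambda c: log_prior[c] + sum(log_cond[c][w] for w in doc if w in log_cond[c])
        return max(classes, key=score)
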
Logistic regression is a conditional model that assumes the independence of features given the class, and it is a conditional counterpart to NB-JL (Ng and Jordan, 2001 ).", "cite_spans": [ { "start": 653, "end": 673, "text": "(Ng and Jordan, 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Generative and discriminative Na\u00efve Bayes models", "sec_num": "3.1" }, { "text": "We will now introduce a probabilistic maximum margin objective and describe a maximum margin model that is analogous to Na\u00efve Bayes and logistic regression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative and discriminative Na\u00efve Bayes models", "sec_num": "3.1" }, { "text": "The basic idea behind maximum margin models is to choose model parameters that for each example will make the probability of the true class and the example as high as possible while making the probability of the nearest alternative class as low as possible. Formally, the maximum margin objective is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b3 = T min i=1 min c =c i P (c i |s i ) P (c|s i ) = T min i=1 min c =c i P (s i , c i ) P (s i , c)", "eq_num": "(2)" } ], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "Here P (s, c) is modeled by a generative model, and parameter learning is reduced to solving a convex optimization problem (Guo et al., 2005) .", "cite_spans": [ { "start": 123, "end": 141, "text": "(Guo et al., 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "In order for the example to be classified correctly, the probability of the true class given the example has to be higher than the probability of getting the wrong class or", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b3 i = log p(c i |s i ) \u2212 log p(c j |s i ) > 0", "eq_num": "(3)" } ], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "where j = i and c i is the true label of example s i . The larger the margin \u03b3 i is, the more confidence we have in the prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "We consider a Na\u00efve Bayes model trained to maximize the margin and refer to this model as MMNB. Using exponential family notation, let P (s j |c) = e w s j |c . The likelihood is P (s, c) = e wc len(s) j=1 e w s j |c . 
Then the log-likelihood", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "log P (s, c) = w c + len(s) j=1 count(s j )w s j |c = w\u2022\u03c6(s, c)", "eq_num": "(4)" } ], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "where w is the weight vector for all the parameters that need to be learned, and \u03c6(s, c) is the vector of counts of words associated with each parameter \u03c6(s, c) = (...count(s j c)....) in s for class c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "The general formulation for Bayesian networks was given in Guo et al., and we adapt their formulation for training a Na\u00efve Bayes model. The parameters are learned by solving a convex optimization problem. If the margin \u03b3 is the smallest log-ratio, then \u03b3 needs to be maximized, where the constraint is that for each instance the log-ratio of the probability of predicting the instance correctly and predicting it incorrectly is at least \u03b3. Such formulation also allows for the use of slack variables \u03be so that the classifier \"gives up\" on the examples that are difficult to classify.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "minimize \u03b3,w,\u03be 1 \u03b3 2 + B T i=1 \u03be i subject to w(\u03c6(i, c i ) \u2212 \u03c6(i, c)) \u2265 \u03b3\u03b4(c i , c) \u2212 \u03be i and s i \u2208V e ws i ,c \u2264 1\u2200c \u2208 C and \u03b3 \u2265 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "This problem is convex in the variables \u03b3, w, \u01eb. B is a regularization parameter, and \u03b4(c i , c) = 1 if c i = c and 0 otherwise. The inequality constraint for probabilities is needed to preserve convexity of the problem, and in the case of Na\u00efve Bayes, the probability distribution over the parameters (the equality constraint) can be easily obtained by renormalizing the learned parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "The minimization problem is somewhat similar to \u2113 2 -norm support vector machine with a soft margin (Cristianini and Shawe-Taylor, 2000) . The first constraint imposes that for each example the log of the ratio between the example under the true class and the example under some alternative class is greater than the margin allowing for some slack. The second constraint enforces that the parameters do not get very large and that the probabilities sum to less than 1 to maintain valid probability distribution (the inequality constraint is required to preserve convexity, and the probability distribution can be obtained after training by renormalization).", "cite_spans": [ { "start": 100, "end": 136, "text": "(Cristianini and Shawe-Taylor, 2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "Following Guo et al. 
(2005) , we find parameters using a log-barrier method (Boyd and Vandenberghe, 2004) , the sum of the logarithms of constraints are subtracted from the objective and scaled by a parameter \u00b5. The problem is solved sequentially using a fixed \u00b5 and gradually lowering \u00b5 to 0. The solution for a fixed \u00b5 is obtained using (typically) a second order method to guarantee faster convergence. This solution is then used as the initial parameter values for the next \u00b5. In our implementation we used a limited memory quasi-Newton method (Nocedal and Liu, 1989) .", "cite_spans": [ { "start": 10, "end": 27, "text": "Guo et al. (2005)", "ref_id": "BIBREF6" }, { "start": 76, "end": 105, "text": "(Boyd and Vandenberghe, 2004)", "ref_id": "BIBREF2" }, { "start": 548, "end": 571, "text": "(Nocedal and Liu, 1989)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Maximum margin training of Na\u00efve Bayes models", "sec_num": "3.2" }, { "text": "One major problem of natural language processing is the sparsity of data; to accurately learn a linguistic model, one needs to label a large amount of text, which is usually an expensive requirement. For information extraction, the labeling process is particularly difficult and time consuming. Moreover, in different applications one needs different labeled data for each domain. We propose a creative way of convincing many people to label data quickly and at low cost to us by using the Mechanical Turk. Similarly, Luis von Ahn (2006) creates very successful and compelling computer games in such a way that while playing, people provide labels for images on the Web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The problem of labeling data", "sec_num": "4.1" }, { "text": "To collect the data, we identified 58 pairs of digital devices, as well as their synonyms (for example, computer, laptop, PC, desktop, etc), and different manufacturers for a given device (for example Toshiba, Dell, IBM, etc). The devices alone were used to construct the query (for example 'computer, camera', as well as a combination of manufacturer and devices (for example 'dell laptop, cannon camera'). Each of these pairs was used as a query in Google, and the sentences that contain both devices were extracted resulting in a total of 3624 sentences. We use the word 'sentence' when referring to the examples, however we note that not all text excerpts are sentences, some are chunks of text data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "To label the data we used the Mechanical Turk (MTurk), a Web service that allows you to create and post a task for humans to solve; typical tasks are labeling pictures, choosing the best among several photographs, writing product descriptions, proofreading and transcribing podcasts. After the task is completed the requesters can then review the submissions and reject them if the results are poor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "We created a total of 121 unique surveys consisting of 30 questions. Each question consisted of one of the extracted statements with the devices highlighted in red. 
The task for the labeler was to choose between 'Yes', if the statement contained a relation between the devices, 'No' if it did not, or 'not ap- plicable' if the text extract was not a sentence, or if the query words were not used as different devices (as for noun compounds such as computer stereo). 4 Each survey was assigned to 3 distinct workers, thus having 3 possible labels for all 3624 sentences. 5 We used Fleiss's kappa (Fleiss, 1971 ) (a generalization of kappa statistic which takes into account multiple raters and measures inter-rater reliability) in order to determine the degree of agreement and to determine whether the agreement was accidental. Kappa statistics is a number between 0 and 1 where 0 is random agreement, and 1 is perfect agreement.", "cite_spans": [ { "start": 466, "end": 467, "text": "4", "ref_id": null }, { "start": 570, "end": 571, "text": "5", "ref_id": null }, { "start": 595, "end": 608, "text": "(Fleiss, 1971", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "In order to compute kappa statistic, since the computation requires that the raters are the same for each survey, we mapped workers into 'worker1', 'worker2', 'worker3' with 'worker1' being the first worker to complete each of the 121 surveys, 'worker2' the second, and so on. The responses are summarized in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "The overall Fleiss's kappa was 0.41 6 , and therefore, it can be concluded that the agreement between the workers was not accidental.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "We had perfect agreement for 49% of all sentences, 5% received all three labels (these examples were discarded) and for the remaining 46% two la- 4 This dataset, including all the MTurk's workers responses is available at http://www.cs.iastate.edu/\u02dcoksayakh/relation data.html 5 The requirement for the workers to be different was imposed by the MTurk system, which checks their Amazon identity; however, this still allows for the same person who has multiple identities to complete the same task more than once.", "cite_spans": [ { "start": 146, "end": 147, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "6 The kappa coefficients for categories 'Yes' and 'No' were 0.45 and 0.41 respectively (moderate agreement) and for category 'not applicable' was 0.15 (slight agreement). bels were assigned (the majority vote was used to determine the final label). For these cases, we noticed that some of the labels were wrong (however in most cases the majority vote results in the correct label) but other sentences were ambiguous and either label could be right. To assign the final label we used majority vote, and we discarded sentences for which 'not applicable' was the majority label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "We rewarded the users with between 15 and 30 cents per survey (resulting in less than a cent for a text segment) and we were able to obtain labels for 3594 text segments for under $70. 
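For reference, the Fleiss's kappa used in the agreement analysis above can be computed as in the following sketch (a hypothetical helper, not the script used for this study); each row counts how many of the three workers chose each of the three answers for one sentence:

    def fleiss_kappa(ratings):
        # ratings[i][j]: number of raters who put item i into category j.
        # Every item must be rated by the same number of raters n (here n = 3).
        N = len(ratings)
        n = sum(ratings[0])
        k = len(ratings[0])
        # mean observed agreement over items
        p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings) / N
        # expected agreement from the marginal category proportions
        p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
        p_e = sum(p * p for p in p_j)
        return (p_bar - p_e) / (1 - p_e)

    # Usage: fleiss_kappa([[3, 0, 0], [2, 1, 0], [1, 1, 1], ...]), where the three
    # columns correspond to the categories 'Yes', 'No' and 'not applicable'.
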
It also took anywhere between a few minutes to a half-hour from the time the survey was made available until it was completed by all three users. We find Mechanical Turk to be a quite interesting, inexpensive, fairly accurate and fast way to obtain labeled data for natural language processing tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "We used this data to evaluate the classification models as described in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting data and label agreement analysis", "sec_num": "4.2" }, { "text": "The words were stemmed, and the data was smoothed by mapping all the words that appeared only once to a unique token smoothing token (resulting in a total of approximately 2,800 words in the vocabulary). We performed 10-fold crossvalidation, with smoothed test data where all the unseen words in the test data were mapped to the token smoothing token. We used the exact same data in the folds for all four algorithms -MMNB, NB-JL, logistic regression and SVM. Since MMNB, SVM, and logistic regression allows for regularization, we used tuning to find the optimal performance of the models. At each fold we withheld 30% of the training data for validation purposes (thus resulting in 3 disjoint sets at each fold). The model was trained on the resulting 70% of the training data for different values of the regularization parameters, and the value which yielded the highest accuracy on the validation set was used to train the model that was evaluated on the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental setup and results", "sec_num": "5" }, { "text": "As a baseline, we consider a classifier which assigns the most frequent label ('Yes'); such a classifier results in 53% accuracy. validation with tuning data. We compared the accuracies of the maximum margin model with the accuracy of generative Na\u00efve Bayes, logistic regression and SVM as shown in Table 2 . The MMNB has the highest accuracy followed by NB-JL and then SVM with RBF kernel. Even after tuning, logistic regression did not reach the performance of MMNB and NB-JL.", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 306, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental setup and results", "sec_num": "5" }, { "text": "Since MMNB is trained to maximize the margin, we compared it with the Support Vector Machine (linear maximum margin classifier). Counts of words were used as features (resulting in the bag of words representation 7 ). We ran our experiments with linear, quadratic, cubic and RBF kernels. SVM was tuned using the validation set similarly to MMNB. We also experimented with Perceptron and Decision Tree using binary splits with reduced errorpruning, which are methods commonly used for text classification (due to lack of space, we will not describe these methods and their applications, but refer the reader to Manning and Sch\u00fctze (1999)). 
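The evaluation protocol used for all the tuned models, 10-fold cross-validation with 30% of each training fold withheld to select the regularization parameter, can be sketched as follows; train_fn and accuracy_fn are hypothetical placeholders for whichever learner is being evaluated:

    import random

    def cross_validate_with_tuning(n_examples, reg_values, train_fn, accuracy_fn,
                                   folds=10, seed=0):
        # train_fn(indices, B) fits a model on the given examples with regularizer B;
        # accuracy_fn(model, indices) returns accuracy on the given examples.
        idx = list(range(n_examples))
        random.Random(seed).shuffle(idx)
        fold_size = n_examples // folds
        accs = []
        for f in range(folds):
            test_idx = idx[f * fold_size:(f + 1) * fold_size]
            test_set = set(test_idx)
            train_idx = [i for i in idx if i not in test_set]
            cut = int(0.7 * len(train_idx))
            fit_idx, val_idx = train_idx[:cut], train_idx[cut:]
            # choose the regularizer that performs best on the withheld 30%
            best_B = max(reg_values, key=lambda B: accuracy_fn(train_fn(fit_idx, B), val_idx))
            accs.append(accuracy_fn(train_fn(fit_idx, best_B), test_idx))
        return sum(accs) / len(accs)
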
Among all the known methods, the maximum margin Na\u00efve Bayes is the algorithm with the highest accuracy, suggesting that it is a competitive algorithm in relation extraction and text classification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental setup and results", "sec_num": "5" }, { "text": "We analyzed the behavior of the parameters of the probabilistic models (Na\u00efve Bayes, MMNB and logistic regression) on the training data. For each example in the training data we computed the probability P (c = noRelation|s) using the parameters from the model, and examined the probabilities assigned to examples from both classes. We show these plots in Figure 1 . As we see, the logistic regression discriminates between the majority of the examples by assigning extreme probabilities (0 and 1). However, there are some examples which are extremely borderline, and thus it does not generalize well on the test set. On the other had, Na\u00efve Bayes does not have such \"sharp\" discrimination. Maximum margin Na\u00efve Bayes has \"sharper\" discrimination than Na\u00efve Bayes, however the discrimination is smoother than for logistic regression. The examples which are more difficult to classify have probabilities that are more spread out (away from 0.5), as opposed to the case of logistic regression, which assigns these difficult examples to probability close to 0.5. This suggests that maximum margin Na\u00efve Bayes, possibly has a better generalization ability than both logistic regression and Na\u00efve Bayes, however to make such a claim additional experiments are needed.", "cite_spans": [], "ref_spans": [ { "start": 355, "end": 363, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Analysis of behavior of Na\u00efve Bayes, maximum margin Na\u00efve Bayes and logistic regression", "sec_num": "6" }, { "text": "The contribution of this paper is threefold. First, we addressed the important problem of identifying the presence of semantic relations between entities in text, focusing on the digital domain. We presented some encouraging results; it remains to be seen however, how this would transfer to better results in an information retrieval task. Secondly, we considered a probabilistic model trained to maximize the margin, that achieved the highest accuracy for this task, suggesting that it could be a competitive algorithm for relation extraction and text classification in general. However in order to fully evaluate the MMNB method for relation classification it needs to be applied to other classification and or relation prediction tasks. We also empirically analyzed the behavior of the parameters learned by maximum margin model and showed that the parameters allow for better generalization power than Na\u00efve Bayes or logistic regression models. Finally, we suggested an inexpensive way of getting people to label text data via Mechanical Turk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In italic are real sentences extracted from Web pages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Classifying or clustering the relation types would involve the tricky task of defining the possible semantic relations between devices as well as relations. 
We plan of addressing this in the future work, however, we believe that such binary distinction is already quite useful for many tasks in this domain.3 Available at http://www.mturk.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This representation allows for additional or alternative features such as k-grams of words, whether the words are capitalized, where on the page the sentence was located, etc. Evaluating MMNB and other methods with additional features is of interest in the future", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the reviewers for their feedback and comments; William Schilit for invaluable insight and help and for first suggesting using the MTurk to gather labeled data; David McDonald for help with developing survey instructions; and numerous MT workers for providing the labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Snowball: Extracting relations from large plain-text collections", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2000, "venue": "Proceedings of Digital Libraries", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of Digital Libraries.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Broken expectations in the digital home", "authors": [ { "first": "Sara", "middle": [], "last": "Bly", "suffix": "" }, { "first": "William", "middle": [], "last": "Schilit", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Computer Human Interaction (CHI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Bly, William Schilit, David McDonald, Barbara Rosario, and Ylian Saint-Hilaire. 2006. Broken ex- pectations in the digital home. In Proceedings of Com- puter Human Interaction (CHI).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Convex Optimization", "authors": [ { "first": "Stephen", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "Lieven", "middle": [], "last": "Vandenberghe", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Boyd and Lieven Vandenberghe. 2004. Convex Optimization. Cambridge University Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning to extract relations from Medline", "authors": [ { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" } ], "year": 1999, "venue": "AAAI-99 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Craven. 1999. Learning to extract relations from Medline. 
In AAAI-99 Workshop.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An Introduction to Support Vector Machines and Other Kernel-based Learning Methods", "authors": [ { "first": "Nello", "middle": [], "last": "Cristianini", "suffix": "" }, { "first": "John", "middle": [], "last": "Shawe-Taylor", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nello Cristianini and John Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge Univer- sity Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Measuring nominal scale agreement among many raters", "authors": [ { "first": "Joseph", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological Bulletin", "volume": "76", "issue": "5", "pages": "378--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological Bulletin, 76(5):378-382.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Maximum margin bayesian networks", "authors": [ { "first": "Yuhong", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Wilkinson", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Schuurmans", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 21th Annual Conference on Uncertainty in Artificial Intelligence (UAI-05)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuhong Guo, Dana Wilkinson, and Dale Schuurmans. 2005. Maximum margin bayesian networks. In Pro- ceedings of the 21th Annual Conference on Uncer- tainty in Artificial Intelligence (UAI-05), page 233.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Conditional structure versus conditional estimation in nlp models", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2002, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher Manning. 2002. Conditional structure versus conditional estimation in nlp models. In Empirical Methods in Natural Language Process- ing (EMNLP).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Process- ing. The MIT Press, June.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A comparison of event models for naive bayes text classification", "authors": [ { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Kamal", "middle": [], "last": "Nigam", "suffix": "" } ], "year": 1998, "venue": "AAAI-98 Workshop on Learning for Text Categorization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew McCallum and Kamal Nigam. 1998. 
A com- parison of event models for naive bayes text classifi- cation. In AAAI-98 Workshop on Learning for Text Categorization.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes", "authors": [ { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2001, "venue": "Proceedings of Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "841--848", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Y. Ng and Michael I. Jordan. 2001. On dis- criminative vs. generative classifiers: A comparison of logistic regression and naive bayes. In Proceedings of Neural Information Processing Systems (NIPS), pages 841-848.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On the limited memory method for large scale optimization", "authors": [ { "first": "Jorge", "middle": [], "last": "Nocedal", "suffix": "" }, { "first": "Dong", "middle": [ "C" ], "last": "Liu", "suffix": "" } ], "year": 1989, "venue": "Mathematical Programming", "volume": "3", "issue": "45", "pages": "503--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorge Nocedal and Dong C. Liu. 1989. On the limited memory method for large scale optimization. Mathe- matical Programming, 3(45):503-528.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multi-way relation classification: Application to protein-protein interactions", "authors": [ { "first": "Barbara", "middle": [], "last": "Rosario", "suffix": "" }, { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 2005, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Rosario and Marti Hearst. 2005. Multi-way re- lation classification: Application to protein-protein in- teractions. In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Max-margin markov networks", "authors": [ { "first": "Benjamin", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max-margin markov networks. In Proceedings of Neural Information Processing Systems (NIPS).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Games with a purpose", "authors": [ { "first": "Ahn", "middle": [], "last": "Luis Von", "suffix": "" } ], "year": 2006, "venue": "Computer", "volume": "39", "issue": "6", "pages": "92--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luis von Ahn. 2006. Games with a purpose. 
Computer, 39(6):92-94.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Discriminatively trained markov model for sequence classification", "authors": [ { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Silvescu", "suffix": "" }, { "first": "Vasant", "middle": [], "last": "Honavar", "suffix": "" } ], "year": 2005, "venue": "Proceedings of International Conference on Data Mining (ICDM)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oksana Yakhnenko, Adrian Silvescu, and Vasant Honavar. 2005. Discriminatively trained markov model for sequence classification. In Proceedings of International Conference on Data Mining (ICDM).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Kernel methods for relation extraction", "authors": [ { "first": "Dmitry", "middle": [], "last": "Zelenko", "suffix": "" }, { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Richardella", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP).", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Probability distribution of P (c = noRelation|s) learned by the Na\u00efve Bayes (upper left), logistic regression (upper right) and maximum margin Na\u00efve Bayes(lower). In gray are class-conditional probabilities assigned to positive examples, and in black are class-conditional probabilities assigned to negative examples.", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "html": null, "num": null, "content": "
Table 2 summarizes the performance of MMNB and other algorithms as determined by 10-fold cross-validation with tuning data.
", "text": "", "type_str": "table" }, "TABREF2": { "html": null, "num": null, "content": "
Classification accuracies as determined by 10-fold cross-validation. SVM-1 uses linear kernel, SVM-2 uses quadratic kernel, SVM-3 uses cubic kernel, SVM-RBF uses RBF kernel with parameter \u03b3 = 0.1. The Decision Tree (DT) uses binary splits. LR is logistic regression.
", "text": "", "type_str": "table" } } } }