{ "paper_id": "D07-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:20:12.230585Z" }, "title": "Semi-Supervised Classification for Extracting Protein Interaction Sentences using Dependency Parsing", "authors": [ { "first": "G\u00fcne\u015f", "middle": [], "last": "Erkan", "suffix": "", "affiliation": {}, "email": "gerkan@umich.edu" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "", "affiliation": {}, "email": "radev@umich.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce a relation extraction method to identify the sentences in biomedical text that indicate an interaction among the protein names mentioned. Our approach is based on the analysis of the paths between two protein names in the dependency parse trees of the sentences. Given two dependency trees, we define two separate similarity functions (kernels) based on cosine similarity and edit distance among the paths between the protein names. Using these similarity functions, we investigate the performances of two classes of learning algorithms, Support Vector Machines and k-nearest-neighbor, and the semisupervised counterparts of these algorithms, transductive SVMs and harmonic functions, respectively. Significant improvement over the previous results in the literature is reported as well as a new benchmark dataset is introduced. Semi-supervised algorithms perform better than their supervised version by a wide margin especially when the amount of labeled data is limited.", "pdf_parse": { "paper_id": "D07-1024", "_pdf_hash": "", "abstract": [ { "text": "We introduce a relation extraction method to identify the sentences in biomedical text that indicate an interaction among the protein names mentioned. Our approach is based on the analysis of the paths between two protein names in the dependency parse trees of the sentences. Given two dependency trees, we define two separate similarity functions (kernels) based on cosine similarity and edit distance among the paths between the protein names. Using these similarity functions, we investigate the performances of two classes of learning algorithms, Support Vector Machines and k-nearest-neighbor, and the semisupervised counterparts of these algorithms, transductive SVMs and harmonic functions, respectively. Significant improvement over the previous results in the literature is reported as well as a new benchmark dataset is introduced. Semi-supervised algorithms perform better than their supervised version by a wide margin especially when the amount of labeled data is limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Protein-protein interactions play an important role in vital biological processes such as metabolic and signaling pathways, cell cycle control, and DNA replication and transcription (Phizicky and Fields, 1995) . A number of (mostly manually curated) databases such as MINT (Zanzoni et al., 2002) , BIND , and SwissProt (Bairoch and Apweiler, 2000) have been created to store protein interaction information in structured and standard formats. However, the amount of biomedical literature regarding protein interactions is increasing rapidly and it is difficult for interaction database curators to detect and curate protein interaction information manually. Thus, most of the protein interaction information remains hidden in the text of the papers in the biomedical literature. 
Therefore, the development of information extraction and text mining techniques for automatic extraction of protein interaction information from free texts has become an important research area.", "cite_spans": [ { "start": 182, "end": 209, "text": "(Phizicky and Fields, 1995)", "ref_id": "BIBREF19" }, { "start": 273, "end": 295, "text": "(Zanzoni et al., 2002)", "ref_id": "BIBREF25" }, { "start": 303, "end": 323, "text": "(Bader et al., 2003)", "ref_id": "BIBREF0" }, { "start": 340, "end": 368, "text": "(Bairoch and Apweiler, 2000)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce an information extraction approach to identify sentences in text that indicate an interaction relation between two proteins. Our method differs from most of the previous studies (see Section 2) on this problem in two respects: First, we generate the dependency parses of the sentences that we analyze, making use of the dependency relationships among the words. This enables us to make more syntax-aware inferences about the roles of the proteins in a sentence than the classical pattern-matching information extraction methods. Second, we investigate semi-supervised machine learning methods on top of the dependency features we generate. Although there have been a number of learning-based studies in this domain, our methods are, to our knowledge, the first semi-supervised efforts. The high cost of labeling free text for this problem makes semi-supervised methods particularly valuable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on two semi-supervised learning methods: transductive SVMs (TSVM) (Joachims, 1999) , and harmonic functions (Zhu et al., 2003) . We also compare these two methods with their supervised counterparts, namely SVMs and the k-nearest-neighbor algorithm. Because these algorithms operate on pairwise similarities between instances, we propose two similarity functions (kernels in SVM terminology) among the instances of the learning problem. The instances in this problem are natural language sentences with protein names in them, and the similarity functions are defined on the positions of the protein names in the corresponding parse trees. Our motivating assumption is that the path between two protein names in a dependency tree is a good description of the semantic relation between them in the corresponding sentence. We consider two similarity functions: one based on cosine similarity and the other based on edit distance between such paths.", "cite_spans": [ { "start": 75, "end": 91, "text": "(Joachims, 1999)", "ref_id": "BIBREF15" }, { "start": 117, "end": 135, "text": "(Zhu et al., 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There have been many approaches to extract protein interactions from free text. One of them is based on matching pre-specified patterns and rules (Blaschke et al., 1999; Ono et al., 2001 ). However, complex cases that are not covered by the pre-defined patterns and rules cannot be extracted by these methods. Huang et al. (2004) proposed a method where patterns are discovered automatically from a set of sentences by dynamic programming. Bunescu et al. (2005) have studied the performance of rule learning algorithms. They propose two methods for protein interaction extraction. One is based on the rule learning method Rapier and the other on longest common subsequences. 
They show that these methods outperform hand-written rules.", "cite_spans": [ { "start": 146, "end": 169, "text": "(Blaschke et al., 1999;", "ref_id": "BIBREF2" }, { "start": 170, "end": 186, "text": "Ono et al., 2001", "ref_id": "BIBREF18" }, { "start": 310, "end": 329, "text": "Huang et al. (2004)", "ref_id": "BIBREF14" }, { "start": 440, "end": 461, "text": "Bunescu et al. (2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another class of approaches uses more syntax-aware natural language processing (NLP) techniques. Both full and partial (shallow) parsing strategies have been applied in the literature. In partial parsing, the sentence structure is decomposed only partially and local dependencies between certain phrasal components are extracted. An example of the application of this method is relational parsing for the inhibition relation (Pustejovsky et al., 2002) . In full parsing, however, the full sentence structure is taken into account. Temkin and Gilder (2003) used a full parser with a lexical analyzer and a context-free grammar (CFG) to extract protein-protein interactions from text. Another study that uses full-sentence parsing to extract human protein interactions is (Daraselia et al., 2004) . Alternatively, Yakushiji et al. (2005) propose a system based on head-driven phrase structure grammar (HPSG). In their system, protein interaction expressions are represented as predicate-argument structure patterns produced by the HPSG parser. These parsing approaches consider only the syntactic properties of the sentences and do not take semantic properties into account. Thus, although they are complex and require many resources, their performance is not satisfactory.", "cite_spans": [ { "start": 422, "end": 448, "text": "(Pustejovsky et al., 2002)", "ref_id": "BIBREF20" }, { "start": 528, "end": 552, "text": "Temkin and Gilder (2003)", "ref_id": "BIBREF22" }, { "start": 765, "end": 789, "text": "(Daraselia et al., 2004)", "ref_id": "BIBREF10" }, { "start": 807, "end": 830, "text": "Yakushiji et al. (2005)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Machine learning techniques for extracting protein interaction information have gained interest in recent years. The PreBIND system uses SVM to identify the existence of protein interactions in abstracts and uses this type of information to enhance manual expert reviewing for the BIND database (Donaldson et al., 2003) . Words and word bigrams are used as binary features. This system is also tested with the Naive Bayes classifier, but SVM is reported to perform better. Mitsumori et al. (2006) also use SVM to extract protein-protein interactions. They use bag-of-words features, specifically the words around the protein names. These systems do not use any syntactic or semantic information. Sugiyama et al. (2003) extract features based on the verbs and nouns in the sentences, such as the verbal forms and the part-of-speech tags of the 20 words surrounding the verb (10 before and 10 after it). Further features are used to indicate whether a noun is found, as well as the part-of-speech tags of the 20 words surrounding the noun, and whether the noun contains numerical characters, non-alpha characters, or uppercase letters. They construct k-nearest-neighbor, decision tree, neural network, and SVM classifiers by using these features. They report that the SVM classifier performs the best. 
They use part-of-speech information, but do not consider any dependency or semantic information.", "cite_spans": [ { "start": 299, "end": 323, "text": "(Donaldson et al., 2003)", "ref_id": "BIBREF12" }, { "start": 477, "end": 500, "text": "Mitsumori et al. (2006)", "ref_id": "BIBREF17" }, { "start": 700, "end": 722, "text": "Sugiyama et al. (2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The paper is organized as follows. In Section 3 we describe our method of extracting features from the dependency parse trees of the sentences and defining the similarity between two sentences. In Section 4 we discuss our supervised and semi-supervised methods. In Section 5 we describe the data sets and evaluation metrics that we used, and present our results. We conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In order to apply the semi-supervised harmonic function method and its supervised counterpart kNN, as well as the kernel-based TSVM and SVM methods, we need to define a similarity measure between two sentences. For this purpose, we use the dependency parse trees of the sentences. Unlike a syntactic parse (which describes the syntactic constituent structure of a sentence), the dependency parse of a sentence captures the semantic predicate-argument relationships among its words. The idea of using dependency parse trees for relation extraction in general was studied by Bunescu and Mooney (2005a) . To extract the relationship between two entities, they design a kernel function that uses the shortest path in the dependency tree between them. The motivation is based on the observation that the shortest path between the entities usually captures the necessary information to identify their relationship. They show that their approach outperforms the dependency tree kernel of Culotta and Sorensen (2004) , which is based on the subtree that contains the two entities. We adapt the idea of Bunescu and Mooney (2005a) to the task of identifying protein-protein interaction sentences. We define the similarity between two sentences based on the paths between two proteins in the dependency parse trees of the sentences. In this study we assume that the protein names have already been annotated and focus instead on the task of extracting protein-protein interaction sentences for a given protein pair. We parse the sentences with the Stanford Parser (http://nlp.stanford.edu/software/lex-parser.shtml) (de Marneffe et al., 2006) . From the dependency parse tree of each sentence we extract the shortest path between each protein pair.", "cite_spans": [ { "start": 560, "end": 586, "text": "Bunescu and Mooney (2005a)", "ref_id": "BIBREF3" }, { "start": 968, "end": 995, "text": "Culotta and Sorensen (2004)", "ref_id": "BIBREF9" }, { "start": 1081, "end": 1107, "text": "Bunescu and Mooney (2005a)", "ref_id": "BIBREF3" }, { "start": 1546, "end": 1568, "text": "Marneffe et al., 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "For example, Figure 1 shows the dependency tree we obtained for the sentence \"The results demonstrated that KaiC interacts rhythmically with KaiA, KaiB, and SasA.\" This example sentence illustrates that the dependency path between a protein pair captures the relevant information about the relationship between the proteins better than the words of the unparsed sentence do. 
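A brief tooling note: any dependency parser that produces (head, label, dependent) triples can supply the parse trees used here. The sketch below is an illustration with spaCy and its small English model, which are assumptions for exposition only; the paper itself uses the Stanford Parser linked above.

```python
import spacy

# Hypothetical setup for illustration; the paper uses the Stanford Parser.
nlp = spacy.load("en_core_web_sm")

def dependency_triples(sentence):
    """Return (head_index, label, dependent_index) triples for a sentence."""
    doc = nlp(sentence)
    # Every non-root token contributes one edge from its head.
    return [(tok.head.i, tok.dep_, tok.i) for tok in doc if tok.head.i != tok.i]

triples = dependency_triples(
    "The results demonstrated that KaiC interacts rhythmically "
    "with KaiA, KaiB, and SasA.")
```

Returning to the example of Figure 1: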
Consider the protein pair KaiC and SasA. The words in the sentence between these proteins are interacts, rhythmically, with, KaiA, KaiB, and and. Among these words, rhythmically, KaiA, and, and KaiB are not directly related to the interaction relationship between KaiC and SasA. On the other hand, the words in the dependency path between this protein pair give sufficient information to identify their relationship.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "In this sentence we have four proteins (KaiC, KaiA, KaiB, and SasA). So there are six protein pairs for which the sentence may or may not describe an interaction. The following are the paths between the six protein pairs. In this example there is a single path between each protein pair. However, there may be more than one path between a protein pair, if one or both proteins appear multiple times in the sentence. In such cases, we select the shortest of these paths. If a sentence contains n different proteins, there are n(n-1)/2 different pairs of proteins. We use machine learning approaches to classify each sentence as an interaction sentence or not for a protein pair. A sentence may be an interaction sentence for one protein pair, while not for another protein pair. For instance, our example sentence is a positive interaction sentence for the KaiC and SasA protein pair. However, it is a negative interaction sentence for the KaiA and SasA protein pair, i.e., it does not describe an interaction between this pair of proteins. Thus, before parsing a sentence, we make multiple copies of it, one for each protein pair. To reduce data sparseness, we rename the proteins in the pair as PROTX1 and PROTX2, and all the other proteins in the sentence as PROTX0. So, for our example sentence we have the following instances in the training set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "1. PROTX1 -nsubj -interacts -prep with -PROTX2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "2. PROTX1 -nsubj -interacts -prep with -PROTX0 -conj and -PROTX2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "3. PROTX1 -nsubj -interacts -prep with -PROTX0 -conj and -PROTX2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "4. PROTX1 -conj and -PROTX2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "5. PROTX1 -conj and -PROTX2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "6. PROTX1 -conj and -PROTX0 -conj and -PROTX2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "The first three instances are positive as they describe an interaction between PROTX1 and PROTX2. The last three are negative, as they do not describe an interaction between PROTX1 and PROTX2. 
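The instance construction just described is mechanical enough to sketch in a few lines. The following is a minimal illustration, not the authors' code: it assumes the parse is available as (head, label, dependent) triples as above, treats each protein as a single token index, and reads off the path between each of the n(n-1)/2 protein pairs. The networkx dependency is an assumed helper for shortest paths; when a protein name occurs multiple times, the shortest of the resulting paths would be kept.

```python
from itertools import combinations
import networkx as nx  # assumed helper library for shortest paths

def path_instances(tokens, protein_idxs, deps):
    """Yield one relabeled dependency-path instance per protein pair."""
    g = nx.Graph()
    for head, label, dep in deps:
        g.add_edge(head, dep, label=label)
    for p1, p2 in combinations(protein_idxs, 2):
        nodes = nx.shortest_path(g, p1, p2)
        path = []
        for i, node in enumerate(nodes):
            if node == p1:
                path.append("PROTX1")
            elif node == p2:
                path.append("PROTX2")
            elif node in protein_idxs:
                path.append("PROTX0")  # any other protein in the sentence
            else:
                path.append(tokens[node])
            if i + 1 < len(nodes):     # dependency label on the next edge
                path.append(g[node][nodes[i + 1]]["label"])
        yield (p1, p2), path
```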
We define the similarity between two instances based on cosine similarity and on an edit-distance-based similarity between the paths in the instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Similarity Based on Dependency Parsing", "sec_num": "3" }, { "text": "Suppose p_i and p_j are the paths between PROTX1 and PROTX2 in instance x_i and instance x_j, respectively. We represent p_i and p_j as vectors of term frequencies in the vector-space model. The cosine similarity measure is the cosine of the angle between these two vectors and is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cosine Similarity", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cos\\_sim(p_i, p_j) = \\cos(p_i, p_j) = \\frac{p_i \\cdot p_j}{\\|p_i\\| \\, \\|p_j\\|}", "eq_num": "(1)" } ], "section": "Cosine Similarity", "sec_num": "3.1" }, { "text": "that is, the dot product of p_i and p_j divided by the product of the norms of p_i and p_j. The cosine similarity measure takes values in the range [0, 1]. If all the terms in p_i and p_j are common, then it takes the maximum value of 1. If none of the terms are common, then it takes the minimum value of 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cosine Similarity", "sec_num": "3.1" }, { "text": "A shortcoming of cosine similarity is that it only takes into account the common terms, but does not consider their order in the path. For this reason, we also use a similarity measure based on edit distance (also called Levenshtein distance). Edit distance between two strings is the minimum number of operations that have to be performed to transform the first string to the second. In the original character-based edit distance there are three types of operations. These are insertion, deletion, or substitution of a single character. We modify the character-based edit distance into a word-based one, where the operations are defined as insertion, deletion, or substitution of a single word. The edit distance between path 1 and path 2 of our example sentence is 2. We insert PROTX0 and conj and into path 1 to convert it into path 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Edit Distance", "sec_num": "3.2" }, { "text": "1. PROTX1 -nsubj -interacts -prep with -insert (PROTX0) -insert (conj and) -PROTX2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Edit Distance", "sec_num": "3.2" }, { "text": "2. PROTX1 -nsubj -interacts -prep with -PROTX0 -conj and -PROTX2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Edit Distance", "sec_num": "3.2" }, { "text": "We normalize edit distance by dividing it by the length (number of words) of the longer path, so that it takes values in the range [0, 1]. We convert the distance measure into a similarity measure as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Edit Distance", "sec_num": "3.2" }, { "text": "edit\\_sim(p_i, p_j) = e^{-\\gamma \\, edit\\_distance(p_i, p_j)} (2) Bunescu and Mooney (2005a) propose a similar method for relation extraction in general. However, their similarity measure is based on the number of overlapping words between the two paths. 
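Both path similarity measures are small enough to state in code. The following is a minimal sketch of Equations 1 and 2 under the definitions above, not the authors' implementation; paths are lists of words and dependency labels, and the default gamma is the value tuned by cross-validation in Section 5.3.

```python
import math
from collections import Counter

def cos_sim(p_i, p_j):
    """Equation 1: cosine similarity of two paths in the vector-space model."""
    tf_i, tf_j = Counter(p_i), Counter(p_j)
    dot = sum(tf_i[w] * tf_j[w] for w in tf_i)
    norm_i = math.sqrt(sum(v * v for v in tf_i.values()))
    norm_j = math.sqrt(sum(v * v for v in tf_j.values()))
    return dot / (norm_i * norm_j)

def edit_sim(p_i, p_j, gamma=4.5):
    """Equation 2: word-based edit distance, normalized by the longer path,
    converted into a similarity. gamma=4.5 as tuned in Section 5.3."""
    m, n = len(p_i), len(p_j)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for a in range(m + 1):
        d[a][0] = a
    for b in range(n + 1):
        d[0][b] = b
    for a in range(1, m + 1):
        for b in range(1, n + 1):
            cost = 0 if p_i[a - 1] == p_j[b - 1] else 1
            d[a][b] = min(d[a - 1][b] + 1,         # deletion
                          d[a][b - 1] + 1,         # insertion
                          d[a - 1][b - 1] + cost)  # substitution
    return math.exp(-gamma * d[m][n] / max(m, n))
```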
When two paths have different lengths, Bunescu and Mooney (2005a) assume the similarity between them is zero. Our edit-distance-based measure, on the other hand, can also account for deletions and insertions of words.", "cite_spans": [ { "start": 58, "end": 84, "text": "Bunescu and Mooney (2005a)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Edit Distance", "sec_num": "3.2" }, { "text": "When a similarity measure is defined among the instances of a learning problem, a simple and natural choice is to use a nearest neighbor based approach that classifies each instance by looking at the labels of the instances that are most similar to it. Perhaps the simplest and most popular similarity-based learning algorithm is the k-nearest-neighbor classification method (kNN). Let U be the set of unlabeled instances, and L be the set of labeled instances in a learning problem. Given an instance x \u2208 U , let N_k^L(x) be the set of the top k instances in L that are most similar to x with respect to some similarity measure. The kNN equation for a binary classification problem can be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kNN and Harmonic Functions", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y(x) = \\frac{\\sum_{z \\in N_k^L(x)} sim(x, z) \\, y(z)}{\\sum_{z' \\in N_k^L(x)} sim(x, z')}", "eq_num": "(3)" } ], "section": "kNN and Harmonic Functions", "sec_num": "4.1" }, { "text": "where y(z) \u2208 {0, 1} is the label of the instance z. 2 Note that y(x) can take any real value in the [0, 1] interval. The final classification decision is made by setting a threshold in this interval (e.g. 0.5) and classifying the instances above the threshold as positive and the others as negative. For our problem, each instance is a dependency path between the proteins in the pair, and the similarity function can be one of the functions we have defined in Section 3. Equation 3 can be seen as averaging the labels (0 or 1) of the nearest neighbors of each unlabeled instance. This suggests a generalized semi-supervised version of the same algorithm that incorporates unlabeled instances as neighbors as well:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kNN and Harmonic Functions", "sec_num": "4.1" }, { "text": "y(x) = \\frac{\\sum_{z \\in N_k^{L \\cup U}(x)} sim(x, z) \\, y(z)}{\\sum_{z' \\in N_k^{L \\cup U}(x)} sim(x, z')} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "kNN and Harmonic Functions", "sec_num": "4.1" }, { "text": "Unlike Equation 3, the unlabeled instances are also considered in Equation 4 when finding the nearest neighbors. We can visualize this as an undirected graph, where each data instance (labeled or unlabeled) is a node that is connected to its k nearest neighbor nodes. The value of y(\u2022) is set to 0 or 1 for labeled nodes depending on their class. For each unlabeled node x, y(x) is equal to the weighted average of the y(\u2022) values of its neighbors, as in Equation 4. Such a function that satisfies the average property on all unlabeled nodes is called a harmonic function and is known to exist and have a unique solution (Doyle and Snell, 1984) . Harmonic functions were first introduced as a semi-supervised learning method by Zhu et al. (2003) . There are interesting alternative interpretations of a harmonic function on a graph. One of them, which we return to after the following sketch, can be explained in terms of random walks on the graph. 
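Here is a minimal sketch of Equations 3 and 4, purely as an illustration rather than the authors' implementation. It assumes a precomputed similarity matrix (e.g. built from cos_sim or edit_sim above, kNN-sparsified and symmetric with zero diagonal for the semi-supervised case, every unlabeled node having at least one neighbor). Zhu et al. (2003) compute the harmonic function in closed form; the simple iterative averaging shown here converges to the same solution.

```python
import numpy as np

def knn_predict(sim, y, labeled, k):
    """Equation 3: similarity-weighted kNN over labeled instances only.
    sim: precomputed similarity matrix; y[i] in {0, 1} for i in labeled."""
    scores = {}
    for x in range(sim.shape[0]):
        if x in labeled:
            continue
        nbrs = sorted(labeled, key=lambda z: sim[x, z], reverse=True)[:k]
        total = sum(sim[x, z] for z in nbrs)
        scores[x] = sum(sim[x, z] * y[z] for z in nbrs) / total
    return scores

def harmonic_function(w, y, labeled, iters=1000):
    """Equation 4 via iterative averaging: clamp the labeled nodes and
    repeatedly replace each unlabeled y(x) with the weighted average of
    its neighbors. w: symmetric kNN similarity matrix, zero diagonal."""
    f = np.full(len(y), 0.5)
    for i in labeled:
        f[i] = y[i]
    unlabeled = [i for i in range(len(y)) if i not in labeled]
    for _ in range(iters):
        for x in unlabeled:
            f[x] = w[x] @ f / w[x].sum()
    return f
```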
Consider a random walk on a graph where at each time point we move from the current node to one of its neighbors. The next node is chosen among the neighbors of the current node with probability proportional to the weight (similarity) of the edge that connects the two nodes. Assuming we start the random walk from the node x, y(x) in Equation 4 is then equal to the probability that this random walk will hit a node labeled 1 before it hits a node labeled 0.", "cite_spans": [ { "start": 594, "end": 617, "text": "(Doyle and Snell, 1984)", "ref_id": "BIBREF13" }, { "start": 700, "end": 717, "text": "Zhu et al. (2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "kNN and Harmonic Functions", "sec_num": "4.1" }, { "text": "The support vector machine (SVM) is a supervised machine learning approach designed for solving two-class pattern recognition problems. The aim is to find the decision surface that separates the positively and negatively labeled training examples of a class with maximum margin (Burges, 1998) .", "cite_spans": [ { "start": 270, "end": 284, "text": "(Burges, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Transductive SVM", "sec_num": "4.2" }, { "text": "Transductive support vector machines (TSVM) are an extension of SVM, where unlabeled data is used in addition to labeled data. The aim now is to assign labels to the unlabeled data and find a decision surface that separates the positive and negative instances of the original labeled data and the (now labeled) unlabeled data with maximum margin. Intuitively, the unlabeled data pushes the decision boundary away from the dense regions. However, unlike SVM, the optimization problem now is NP-hard (Zhu, 2005) . Pointers to studies of approximation algorithms can be found in (Zhu, 2005) .", "cite_spans": [ { "start": 498, "end": 509, "text": "(Zhu, 2005)", "ref_id": "BIBREF27" }, { "start": 577, "end": 588, "text": "(Zhu, 2005)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Transductive SVM", "sec_num": "4.2" }, { "text": "In Section 3 we defined the similarity between two instances based on the cosine similarity and the edit-distance-based similarity between the paths in the instances. Here, we use these path similarity measures as kernels for SVM and TSVM and modify the SVMlight package (Joachims, 1999) by plugging in our two kernel functions.", "cite_spans": [ { "start": 273, "end": 289, "text": "(Joachims, 1999)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Transductive SVM", "sec_num": "4.2" }, { "text": "A well-defined kernel function should be symmetric positive definite. While the cosine kernel is well-defined, Cortes et al. (2004) proved that the edit kernel is not always positive definite. However, it is possible to make the kernel matrix positive definite by adjusting the \u03b3 parameter, which is a positive real number. Li and Jiang (2005) applied the edit kernel to predict translation initiation sites in eukaryotic mRNAs and obtained improved results compared to a polynomial kernel.", "cite_spans": [ { "start": 106, "end": 126, "text": "Cortes et al. (2004)", "ref_id": "BIBREF8" }, { "start": 315, "end": 334, "text": "Li and Jiang (2005)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Transductive SVM", "sec_num": "4.2" },
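To make the kernel-plugging concrete: the paper modifies SVMlight, which also provides the transductive mode used for the TSVM runs. Purely as an illustration of the same idea with a different, widely available tool, here is a sketch using scikit-learn's precomputed-kernel interface together with the edit_sim function sketched in Section 3.2 above; this is an assumption-laden stand-in, not the authors' setup.

```python
import numpy as np
from sklearn.svm import SVC

def svm_with_path_kernel(train_paths, train_labels, test_paths):
    """Inductive SVM over dependency paths with a precomputed edit kernel."""
    # Gram matrix between all pairs of training paths.
    gram = np.array([[edit_sim(p, q) for q in train_paths]
                     for p in train_paths])
    clf = SVC(kernel="precomputed")
    clf.fit(gram, train_labels)
    # Kernel values between each test path and every training path.
    k_test = np.array([[edit_sim(p, q) for q in train_paths]
                       for p in test_paths])
    return clf.predict(k_test)
```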
{ "text": "One of the problems in the field of protein-protein interaction extraction is that different studies generally use different data sets and evaluation metrics. Thus, it is difficult to compare their results. Bunescu et al. (2005) manually developed the AIMED corpus 3 for protein-protein interaction and protein name recognition. They tagged 199 Medline abstracts, obtained from the Database of Interacting Proteins (DIP) (Xenarios et al., 2001 ) and known to contain protein interactions. This corpus is becoming a standard, as it has been used in recent studies by (Bunescu et al., 2005; Bunescu and Mooney, 2005b; Bunescu and Mooney, 2006; Mitsumori et al., 2006; Yakushiji et al., 2005) .", "cite_spans": [ { "start": 207, "end": 228, "text": "Bunescu et al. (2005)", "ref_id": "BIBREF6" }, { "start": 421, "end": 443, "text": "(Xenarios et al., 2001", "ref_id": "BIBREF23" }, { "start": 570, "end": 592, "text": "(Bunescu et al., 2005;", "ref_id": "BIBREF6" }, { "start": 593, "end": 619, "text": "Bunescu and Mooney, 2005b;", "ref_id": "BIBREF4" }, { "start": 620, "end": 645, "text": "Bunescu and Mooney, 2006;", "ref_id": "BIBREF5" }, { "start": 646, "end": 669, "text": "Mitsumori et al., 2006;", "ref_id": "BIBREF17" }, { "start": 670, "end": 693, "text": "Yakushiji et al., 2005)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "5.1" }, { "text": "In our study we used the AIMED corpus and the CB (Christine Brun) corpus, which is provided as a resource by the BioCreAtIvE II (Critical Assessment for Information Extraction in Biology) challenge evaluation 4 . We pre-processed the CB corpus by first annotating the protein names in the corpus automatically and then refining the annotation manually. As discussed in Section 3, we pre-processed both of the data sets as follows. We replicated each sentence for each different protein pair. For n different proteins in a sentence, n(n-1)/2 new sentences are created, as there are that many different pairs of proteins. In each newly created sentence we marked the protein pair considered for interaction as PROTX1 and PROTX2, and all the remaining proteins in the sentence as PROTX0. If a sentence describes an interaction between PROTX1 and PROTX2, it is labeled as positive; otherwise it is labeled as negative. The summary of the data sets after pre-processing is displayed in Table 1 5 .", "cite_spans": [], "ref_spans": [ { "start": 971, "end": 978, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data Sets", "sec_num": "5.1" }, { "text": "Since previous studies that use the AIMED corpus perform 10-fold cross-validation, we also performed 10-fold cross-validation on both data sets and report the average results over the runs. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "5.1" }, { "text": "We use precision, recall, and F-score as our metrics to evaluate the performances of the methods. Precision (\u03c0) and recall (\u03c1) are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "\\pi = \\frac{TP}{TP + FP} ; \\quad \\rho = \\frac{TP}{TP + FN} (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "Here, TP (True Positives) is the number of sentences classified correctly as positive; FP (False Positives) is the number of negative sentences that are classified as positive incorrectly by the classifier; and FN (False Negatives) is the number of positive sentences that are classified as negative incorrectly by the classifier. 
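In code, Equation 5 falls directly out of these counts (a trivial sketch; gold and pred are assumed 0/1 label sequences):

```python
def precision_recall(gold, pred):
    """Equation 5 computed from 0/1 gold and predicted labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    pi = tp / (tp + fp) if tp + fp else 0.0
    rho = tp / (tp + fn) if tp + fn else 0.0
    return pi, rho
```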
F-score is the harmonic mean of recall and precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "F\\text{-score} = \\frac{2 \\pi \\rho}{\\pi + \\rho} (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "We evaluate and compare the performances of the semi-supervised machine learning approaches (TSVM and harmonic functions) with their supervised counterparts (SVM and kNN) for the task of protein-protein interaction extraction. As discussed in Section 3, we use cosine similarity and edit-distance-based similarity as similarity functions in harmonic functions and kNN, and as kernel functions in TSVM and SVM. Our instances consist of the shortest paths between the protein pairs in the dependency parse trees of the sentences. In our experiments, we tuned the \u03b3 parameter of the edit-distance-based path similarity function to 4.5 with cross-validation. The results in Table 2 and Table 3 are obtained with 10-fold cross-validation. We report the average results over the runs. Table 2 shows the results obtained for the AIMED data set. The edit-distance-based path similarity function performs considerably better than the cosine similarity function with harmonic functions and kNN, and usually slightly better with SVM and TSVM. We achieve our best F-score performance of 59.96% with TSVM with the edit kernel. While SVM with the edit kernel achieves the highest precision of 77.52%, it performs slightly worse than SVM with the cosine kernel in terms of F-score. TSVM performs slightly better than SVM, both of which perform better than harmonic functions. kNN is the worst performing algorithm for this data set.", "cite_spans": [], "ref_spans": [ { "start": 670, "end": 689, "text": "Table 2 and Table 3", "ref_id": "TABREF3" }, { "start": 779, "end": 786, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5.3" }, { "text": "In Table 2 , we also show the results obtained previously in the literature by using the same data set. Yakushiji et al. (2005) use an HPSG parser to produce predicate-argument structures. They utilize these structures to automatically construct protein interaction extraction rules. Mitsumori et al. (2006) use SVM with the unparsed text around the protein names as features to extract protein interaction sentences. Here, we show their best result, obtained by using the three words to the left and to the right of the proteins. The most closely related study to ours is that by Bunescu and Mooney (2005a) . They define a kernel function based on the shortest path between two entities of a relationship in the dependency parse tree of a sentence (the SPK method). They apply this method to the domain of protein-protein interaction extraction in (Bunescu and Mooney, 2006) . Here, they also test the methods ELCS (Extraction Using Longest Common Subsequences) (Bunescu et al., 2005) and SSK (Subsequence Kernel) (Bunescu and Mooney, 2005b) . We cannot compare our results to theirs directly, because they report their results as a precision-recall graph. However, the best F-score in their graph seems to be around 0.50 and definitely lower than the best F-scores we have achieved (\u2248 0.59). Bunescu and Mooney (2006) also use SVM as their learning method in their SPK approach. 
They define their similarity based on the number of overlapping words between two paths and assign a similarity of zero if the two paths have different lengths. Our improved performance with SVM and the shortest path dependency features may be due to the edit-distance-based kernel, which takes into account not only the overlapping words but also word order, and accounts for deletions and insertions of words. Our results show that SVM, TSVM, and harmonic functions achieve better F-score and recall performances than the previous studies by Yakushiji et al. (2005) , Mitsumori et al. (2006) , and the SSK and ELCS approaches of Bunescu and Mooney (2006) . SVM and TSVM also achieve higher precision scores. Since Mitsumori et al. (2006) also use SVM in their study, our improved results with SVM confirm our motivation for using dependency paths as features. Table 3 shows the results we obtained with the CB data set. The F-score performance with the edit-distance-based similarity function is always better than that of the cosine similarity function for this data set. The difference in performance is considerable for harmonic functions and kNN. Our best F-score is achieved with TSVM with the edit kernel (85.22%). TSVM performs slightly better than SVM. When the cosine similarity function is used, kNN performs better than harmonic functions. However, when the edit-distance-based similarity is used, harmonic functions achieve better performance. SVM and TSVM perform better than harmonic functions, but the performance gap is small when the edit-distance-based similarity is used with harmonic functions. [Table 2 also lists the previously published results: (Yakushiji et al., 2005) 33.70 33.10 33.40; (Mitsumori et al., 2006) 54.20 42.60 47.70 (precision, recall, F-score).] Semi-supervised approaches are usually more effective when there is less labeled data than unlabeled data, which is usually the case in real applications. To see the effect of semi-supervised approaches, we perform experiments by varying the amount of labeled training sentences in the range [10, 3000] . [Figure 2: The F-score on the AIMED dataset with varying sizes of training data.] For each labeled training set size, sentences are selected randomly among all the sentences, and the remaining sentences are used as the unlabeled test set. The results that we report are the averages over 10 such random runs for each labeled training set size. We report the results for the algorithms when the edit-distance-based similarity is used, as it mostly performs better than cosine similarity. Figure 2 shows the results obtained over the AIMED data set. The semi-supervised approaches TSVM and harmonic functions perform considerably better than their supervised counterparts SVM and kNN when we have a small number of labeled training sentences. It is interesting to note that, although SVM is one of the best performing algorithms with more training data, it is the worst performing algorithm with a small amount of labeled training sentences. Its performance starts to increase when the number of training sentences is larger than 200. Eventually, its performance gets close to that of the other algorithms. Harmonic functions perform best when we have fewer than 200 labeled training sentences. TSVM achieves better performance when there are more than 500 labeled training sentences. [Figure 3: The F-score on the CB dataset with varying sizes of training data.] When we have fewer than 500 labeled training sentences, harmonic functions and TSVM perform significantly better than kNN, while SVM is the worst performing algorithm. 
When we have more than 500 labeled training sentences, kNN is the worst performing algorithm, while the performance of SVM increases and gets similar to that of TSVM and slightly better than that of harmonic functions.", "cite_spans": [ { "start": 104, "end": 127, "text": "Yakushiji et al. (2005)", "ref_id": "BIBREF24" }, { "start": 284, "end": 307, "text": "Mitsumori et al. (2006)", "ref_id": "BIBREF17" }, { "start": 580, "end": 606, "text": "Bunescu and Mooney (2005a)", "ref_id": "BIBREF3" }, { "start": 848, "end": 874, "text": "(Bunescu and Mooney, 2006)", "ref_id": "BIBREF5" }, { "start": 962, "end": 984, "text": "(Bunescu et al., 2005)", "ref_id": "BIBREF6" }, { "start": 1014, "end": 1041, "text": "(Bunescu and Mooney, 2005b)", "ref_id": "BIBREF4" }, { "start": 1292, "end": 1317, "text": "Bunescu and Mooney (2006)", "ref_id": "BIBREF5" }, { "start": 1923, "end": 1946, "text": "Yakushiji et al. (2005)", "ref_id": "BIBREF24" }, { "start": 1949, "end": 1972, "text": "Mitsumori et al. (2006)", "ref_id": "BIBREF17" }, { "start": 2010, "end": 2035, "text": "Bunescu and Mooney (2006)", "ref_id": "BIBREF5" }, { "start": 2096, "end": 2119, "text": "Mitsumori et al. (2006)", "ref_id": "BIBREF17" }, { "start": 2975, "end": 2999, "text": "(Yakushiji et al., 2005)", "ref_id": "BIBREF24" }, { "start": 3018, "end": 3042, "text": "(Mitsumori et al., 2006)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 2242, "end": 2249, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 3316, "end": 3324, "text": "Figure 2", "ref_id": null }, { "start": 3848, "end": 3856, "text": "Figure 2", "ref_id": null }, { "start": 4635, "end": 4643, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5.3" }, { "text": "We introduced a relation extraction approach based on dependency parsing and machine learning to identify protein interaction sentences in biomedical text. Unlike syntactic parsing, dependency parsing captures the semantic predicate-argument relationships between the entities in addition to the syntactic relationships. We extracted the shortest paths between protein pairs in the dependency parse trees of the sentences and defined similarity functions (kernels in SVM terminology) for these paths based on cosine similarity and edit distance. Supervised machine learning approaches have been applied to this domain before. However, they rely only on labeled training data, which is difficult to gather. To our knowledge, this is the first effort in this domain to apply semi-supervised algorithms, which make use of both labeled and unlabeled data. We evaluated and compared the performances of two semi-supervised machine learning approaches (harmonic functions and TSVM) with their supervised counterparts (kNN and SVM). We showed that the edit-distance-based similarity function performs better than the cosine similarity function, since it takes into account not only common words but also word order. Our 10-fold cross-validation results showed that TSVM performs slightly better than SVM, both of which perform better than harmonic functions. The worst performing algorithm is kNN. We compared our results with previous results published with the AIMED data set. 
We achieved the best F-score performance with TSVM with the edit distance kernel (59.96%), which is significantly higher than the previously reported results for the same data set. In most real-world applications there is far more unlabeled data than labeled data. Semi-supervised approaches are usually more effective in these cases, because they make use of both the labeled and unlabeled instances when making decisions. To test this hypothesis for the application of extracting protein interaction sentences from text, we performed experiments by varying the number of labeled training sentences. Our results show that semi-supervised algorithms perform considerably better than their supervised counterparts when there is a small number of labeled training sentences. An interesting result is that in such cases SVM performs significantly worse than the other algorithms. Harmonic functions achieve the best performance when there are only a few labeled training sentences. As the number of labeled training sentences increases, the performance gap between supervised and semi-supervised algorithms decreases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Equation 3 is the weighted (or soft) version of the kNN algorithm. In the classical voting scheme, x is classified in the category that the majority of its neighbors belong to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "ftp://ftp.cs.utexas.edu/pub/mooney/bio-data/ 4 http://biocreative.sourceforge.net/biocreative 2.html 5 The pre-processed data sets are available at http://belobog.si.umich.edu/clair/biocreative", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by grants R01-LM008106 and U54-DA021519 from the US National Institutes of Health.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "BIND - the biomolecular interaction network database", "authors": [ { "first": "G", "middle": [], "last": "Bader", "suffix": "" }, { "first": "D", "middle": [], "last": "Betel", "suffix": "" }, { "first": "C", "middle": [], "last": "Hogue", "suffix": "" } ], "year": 2003, "venue": "Nucleic Acids Research", "volume": "31", "issue": "1", "pages": "248--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Bader, D. Betel, and C. Hogue. 2003. BIND - the biomolecular interaction network database. Nucleic Acids Research, 31(1):248-250.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The SWISS-PROT protein sequence database and its supplement TrEMBL in 2000", "authors": [ { "first": "A", "middle": [], "last": "Bairoch", "suffix": "" }, { "first": "R", "middle": [], "last": "Apweiler", "suffix": "" } ], "year": 2000, "venue": "Nucleic Acids Research", "volume": "28", "issue": "1", "pages": "45--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Bairoch and R. Apweiler. 2000. The SWISS-PROT protein sequence database and its supplement TrEMBL in 2000. 
Nucleic Acids Research, 28(1):45-48.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic extraction of biological information from scientific text: Protein-protein interactions", "authors": [ { "first": "C", "middle": [], "last": "Blaschke", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Andrade", "suffix": "" }, { "first": "C", "middle": [ "A" ], "last": "Ouzounis", "suffix": "" }, { "first": "A", "middle": [], "last": "Valencia", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the AAAI Conference on Intelligent Systems for Molecular Biology", "volume": "", "issue": "", "pages": "60--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Blaschke, M. A. Andrade, C. A. Ouzounis, and A. Valencia. 1999. Automatic extraction of biological information from scientific text: Protein-protein interactions. In Proceedings of the AAAI Conference on Intelligent Systems for Molecular Biology (ISMB 1999), pages 60-67.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A shortest path dependency kernel for relation extraction", "authors": [ { "first": "R", "middle": [ "C" ], "last": "Bunescu", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "724--731", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. C. Bunescu and R. J. Mooney. 2005a. A shortest path dependency kernel for relation extraction. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 724-731, Vancouver, B.C., October.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Subsequence kernels for relation extraction", "authors": [ { "first": "R", "middle": [ "C" ], "last": "Bunescu", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 19th Conference on Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. C. Bunescu and R. J. Mooney. 2005b. Subsequence kernels for relation extraction. In Proceedings of the 19th Conference on Neural Information Processing Systems (NIPS), Vancouver, B.C., December.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Text Mining and Natural Language Processing, chapter Extracting Relations from Text: From Word Sequences to Dependency Paths", "authors": [ { "first": "R", "middle": [ "C" ], "last": "Bunescu", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. C. Bunescu and R. J. Mooney. 2006. Text Mining and Natural Language Processing, chapter Extracting Relations from Text: From Word Sequences to Dependency Paths. 
Forthcoming book.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Comparative experiments on learning information extractors for proteins and their interactions", "authors": [ { "first": "R", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "R", "middle": [], "last": "Ge", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Kate", "suffix": "" }, { "first": "M", "middle": [ "E" ], "last": "Marcotte", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" }, { "first": "K", "middle": [ "A" ], "last": "Ramani", "suffix": "" }, { "first": "W", "middle": [ "Y" ], "last": "Wong", "suffix": "" } ], "year": 2005, "venue": "Artificial Intelligence in Medicine", "volume": "33", "issue": "2", "pages": "139--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bunescu, R. Ge, J. R. Kate, M. E. Marcotte, R. J. Mooney, K. A. Ramani, and W. Y. Wong. 2005. Comparative experiments on learning information extractors for proteins and their interactions. Artificial Intelligence in Medicine, 33(2):139-155, February.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A tutorial on support vector machines for pattern recognition", "authors": [ { "first": "C", "middle": [ "J C" ], "last": "Burges", "suffix": "" } ], "year": 1998, "venue": "Data Mining and Knowledge Discovery", "volume": "2", "issue": "2", "pages": "121--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. J. C. Burges. 1998. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Rational kernels: Theory and algorithms", "authors": [ { "first": "C", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "P", "middle": [], "last": "Haffner", "suffix": "" }, { "first": "M", "middle": [], "last": "Mohri", "suffix": "" } ], "year": 2004, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "5", "pages": "1035--1062", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Cortes, P. Haffner, and M. Mohri. 2004. Rational kernels: Theory and algorithms. Journal of Machine Learning Research, (5):1035-1062, August.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Dependency tree kernels for relation extraction", "authors": [ { "first": "A", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "J", "middle": [], "last": "Sorensen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Culotta and J. Sorensen. 2004. Dependency tree kernels for relation extraction. 
In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), Barcelona, Spain, July.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Extracting human protein interactions from Medline using a full-sentence parser", "authors": [ { "first": "N", "middle": [], "last": "Daraselia", "suffix": "" }, { "first": "A", "middle": [], "last": "Yuryev", "suffix": "" }, { "first": "S", "middle": [], "last": "Egorov", "suffix": "" }, { "first": "S", "middle": [], "last": "Novichkova", "suffix": "" }, { "first": "A", "middle": [], "last": "Nikitin", "suffix": "" }, { "first": "I", "middle": [], "last": "Mazo", "suffix": "" } ], "year": 2004, "venue": "Bioinformatics", "volume": "20", "issue": "5", "pages": "604--611", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Daraselia, A. Yuryev, S. Egorov, S. Novichkova, A. Nikitin, and I. Mazo. 2004. Extracting human protein interactions from Medline using a full-sentence parser. Bioinformatics, 20(5):604-611.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generating Typed Dependency Parses from Phrase Structure Parses", "authors": [ { "first": "M-C", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "B", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the IEEE / ACL 2006 Workshop on Spoken Language Technology. The Stanford Natural Language Processing Group", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M-C. de Marneffe, B. MacCartney, and C. D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of the IEEE / ACL 2006 Workshop on Spoken Language Technology. The Stanford Natural Language Processing Group.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "PreBIND and Textomy - mining the biomedical literature for protein-protein interactions using a support vector machine", "authors": [ { "first": "I", "middle": [], "last": "Donaldson", "suffix": "" }, { "first": "J", "middle": [], "last": "Martin", "suffix": "" }, { "first": "B", "middle": [], "last": "De Bruijn", "suffix": "" }, { "first": "C", "middle": [], "last": "Wolting", "suffix": "" }, { "first": "V", "middle": [], "last": "Lay", "suffix": "" }, { "first": "B", "middle": [], "last": "Tuekam", "suffix": "" }, { "first": "S", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "B", "middle": [], "last": "Baskin", "suffix": "" }, { "first": "G", "middle": [ "D" ], "last": "Bader", "suffix": "" }, { "first": "K", "middle": [], "last": "Michalockova", "suffix": "" }, { "first": "T", "middle": [], "last": "Pawson", "suffix": "" }, { "first": "C", "middle": [ "W V" ], "last": "Hogue", "suffix": "" } ], "year": 2003, "venue": "BMC Bioinformatics", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Donaldson, J. Martin, B. de Bruijn, C. Wolting, V. Lay, B. Tuekam, S. Zhang, B. Baskin, G. D. Bader, K. Michalockova, T. Pawson, and C. W. V. Hogue. 2003. PreBIND and Textomy - mining the biomedical literature for protein-protein interactions using a support vector machine. 
BMC Bioinformatics, 4:11.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Random Walks and Electric Networks", "authors": [ { "first": "P", "middle": [ "G" ], "last": "Doyle", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Snell", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. G. Doyle and J. L. Snell. 1984. Random Walks and Electric Networks. Mathematical Association of America.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Discovering patterns to extract protein-protein interactions from full texts", "authors": [ { "first": "M", "middle": [], "last": "Huang", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Hao", "suffix": "" }, { "first": "D", "middle": [ "G" ], "last": "Payan", "suffix": "" }, { "first": "K", "middle": [], "last": "Qu", "suffix": "" }, { "first": "M", "middle": [], "last": "Li", "suffix": "" } ], "year": 2004, "venue": "Bioinformatics", "volume": "20", "issue": "18", "pages": "3604--3612", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Huang, X. Zhu, Y. Hao, D. G. Payan, K. Qu, and M. Li. 2004. Discovering patterns to extract protein-protein interactions from full texts. Bioinformatics, 20(18):3604-3612.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Transductive inference for text classification using support vector machines", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ICML-99, 16th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "200--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Joachims. 1999. Transductive inference for text classification using support vector machines. In Ivan Bratko and Saso Dzeroski, editors, Proceedings of ICML-99, 16th International Conference on Machine Learning, pages 200-209. Morgan Kaufmann Publishers, San Francisco, US.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A class of edit kernels for SVMs to predict translation initiation sites in eukaryotic mRNAs", "authors": [ { "first": "H", "middle": [], "last": "Li", "suffix": "" }, { "first": "T", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2005, "venue": "Journal of Computational Biology", "volume": "12", "issue": "6", "pages": "702--718", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Li and T. Jiang. 2005. A class of edit kernels for SVMs to predict translation initiation sites in eukaryotic mRNAs. Journal of Computational Biology, 12(6):702-718.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Extracting protein-protein interaction information from biomedical text with SVM", "authors": [ { "first": "T", "middle": [], "last": "Mitsumori", "suffix": "" }, { "first": "M", "middle": [], "last": "Murata", "suffix": "" }, { "first": "Y", "middle": [], "last": "Fukuda", "suffix": "" }, { "first": "K", "middle": [], "last": "Doi", "suffix": "" }, { "first": "H", "middle": [], "last": "Doi", "suffix": "" } ], "year": 2006, "venue": "IEICE Transactions on Information and Systems", "volume": "", "issue": "8", "pages": "2464--2466", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Mitsumori, M. Murata, Y. Fukuda, K. Doi, and H. Doi. 2006. Extracting protein-protein interaction information from biomedical text with SVM. 
IEICE Transactions on Information and Systems, E89-D(8):2464-2466.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automated extraction of information on protein-protein interactions from the biological literature", "authors": [ { "first": "T", "middle": [], "last": "Ono", "suffix": "" }, { "first": "H", "middle": [], "last": "Hishigaki", "suffix": "" }, { "first": "A", "middle": [], "last": "Tanigami", "suffix": "" }, { "first": "T", "middle": [], "last": "Takagi", "suffix": "" } ], "year": 2001, "venue": "Bioinformatics", "volume": "17", "issue": "2", "pages": "155--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Ono, H. Hishigaki, A. Tanigami, and T. Takagi. 2001. Automated extraction of information on protein-protein interactions from the biological literature. Bioinformatics, 17(2):155-161.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Protein-protein interactions: methods for detection and analysis", "authors": [ { "first": "E", "middle": [ "M" ], "last": "Phizicky", "suffix": "" }, { "first": "S", "middle": [], "last": "Fields", "suffix": "" } ], "year": 1995, "venue": "Microbiol. Rev.", "volume": "59", "issue": "1", "pages": "94--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. M. Phizicky and S. Fields. 1995. Protein-protein interactions: methods for detection and analysis. Microbiol. Rev., 59(1):94-123, March.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Robust relational parsing over biomedical literature: Extracting inhibit relations", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "J", "middle": [], "last": "Castano", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "M", "middle": [], "last": "Kotecki", "suffix": "" }, { "first": "B", "middle": [], "last": "Cochran", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the seventh Pacific Symposium on Biocomputing (PSB 2002)", "volume": "", "issue": "", "pages": "362--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pustejovsky, J. Castano, J. Zhang, M. Kotecki, and B. Cochran. 2002. Robust relational parsing over biomedical literature: Extracting inhibit relations. In Proceedings of the seventh Pacific Symposium on Biocomputing (PSB 2002), pages 362-373.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Extracting information on protein-protein interactions from biological literature based on machine learning approaches", "authors": [ { "first": "K", "middle": [], "last": "Sugiyama", "suffix": "" }, { "first": "K", "middle": [], "last": "Hatano", "suffix": "" }, { "first": "M", "middle": [], "last": "Yoshikawa", "suffix": "" }, { "first": "S", "middle": [], "last": "Uemura", "suffix": "" } ], "year": 2003, "venue": "Genome Informatics", "volume": "14", "issue": "", "pages": "699--700", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sugiyama, K. Hatano, M. Yoshikawa, and S. Uemura. 2003. Extracting information on protein-protein interactions from biological literature based on machine learning approaches.
Genome Informatics, 14:699-700.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Extraction of protein interaction information from unstructured text using a context-free grammar", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Temkin", "suffix": "" }, { "first": "M", "middle": [ "R" ], "last": "Gilder", "suffix": "" } ], "year": 2003, "venue": "Bioinformatics", "volume": "19", "issue": "", "pages": "2046--2053", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. M. Temkin and M. R. Gilder. 2003. Extraction of protein interaction information from unstructured text using a context-free grammar. Bioinformatics, 19:2046-2053.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "DIP: The database of interacting proteins: 2001 update", "authors": [ { "first": "I", "middle": [], "last": "Xenarios", "suffix": "" }, { "first": "E", "middle": [], "last": "Fernandez", "suffix": "" }, { "first": "L", "middle": [], "last": "Salwinski", "suffix": "" }, { "first": "X", "middle": [ "J" ], "last": "Duan", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Thompson", "suffix": "" }, { "first": "E", "middle": [ "M" ], "last": "Marcotte", "suffix": "" }, { "first": "D", "middle": [], "last": "Eisenberg", "suffix": "" } ], "year": 2001, "venue": "Nucleic Acids Res", "volume": "29", "issue": "", "pages": "239--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Xenarios, E. Fernandez, L. Salwinski, X. J. Duan, M. J. Thompson, E. M. Marcotte, and D. Eisenberg. 2001. DIP: The database of interacting proteins: 2001 update. Nucleic Acids Res., 29:239-241, January.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Biomedical information extraction with predicate-argument structure patterns", "authors": [ { "first": "A", "middle": [], "last": "Yakushiji", "suffix": "" }, { "first": "Y", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Tateisi", "suffix": "" }, { "first": "J", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2005, "venue": "Proceedings of The Eleventh Annual Meeting of The Association for Natural Language Processing", "volume": "", "issue": "", "pages": "93--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Yakushiji, Y. Miyao, Y. Tateisi, and J. Tsujii. 2005. Biomedical information extraction with predicate-argument structure patterns. In Proceedings of The Eleventh Annual Meeting of The Association for Natural Language Processing, pages 93-96.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "MINT: A molecular interaction database", "authors": [ { "first": "A", "middle": [], "last": "Zanzoni", "suffix": "" }, { "first": "L", "middle": [], "last": "Montecchi-Palazzi", "suffix": "" }, { "first": "M", "middle": [], "last": "Quondam", "suffix": "" }, { "first": "G", "middle": [], "last": "Ausiello", "suffix": "" }, { "first": "M", "middle": [], "last": "Helmer-Citterich", "suffix": "" }, { "first": "G", "middle": [], "last": "Cesareni", "suffix": "" } ], "year": 2002, "venue": "FEBS Letters", "volume": "513", "issue": "", "pages": "135--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Zanzoni, L. Montecchi-Palazzi, M. Quondam, G. Ausiello, M. Helmer-Citterich, and G. Cesareni. 2002. MINT: A molecular interaction database.
FEBS Letters, 513:135-140.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Semi-supervised learning using Gaussian fields and harmonic functions", "authors": [ { "first": "X", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ghahramani", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" } ], "year": 2003, "venue": "ICML", "volume": "", "issue": "", "pages": "912--919", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Zhu, Z. Ghahramani, and J. D. Lafferty. 2003. Semi-supervised learning using Gaussian fields and harmonic functions. In T. Fawcett and N. Mishra, editors, ICML, pages 912-919. AAAI Press.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semi-supervised learning literature survey", "authors": [ { "first": "X", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Zhu. 2005. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "The dependency tree of the sentence \"The results demonstrated that KaiC interacts rhythmically with KaiA, KaiB, and SasA.\" 1. KaiC - nsubj - interacts - prep with - SasA 2. KaiC - nsubj - interacts - prep with - SasA - conj and - KaiA 3. KaiC - nsubj - interacts - prep with - SasA - conj and - KaiB 4. SasA - conj and - KaiA 5. SasA - conj and - KaiB 6. KaiA - conj and - SasA - conj and - KaiB", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "shows the results obtained over the CB data set. When we have less than 500 labeled sen-", "type_str": "figure", "num": null }, "TABREF1": { "html": null, "text": "Data Sets", "num": null, "type_str": "table", "content": "" }, "TABREF3": { "html": null, "text": "", "num": null, "type_str": "table", "content": "
: Experimental Results - AIMED Data Set
Method          Precision   Recall   F-Score
SVM-edit        85.15       84.79    84.96
SVM-cos         87.83       81.45    84.49
TSVM-edit       85.62       84.89    85.22
TSVM-cos        85.67       84.31    84.96
Harmonic-edit   86.69       80.15    83.26
Harmonic-cos    72.28       70.91    71.56
kNN-edit        72.89       86.95    79.28
kNN-cos         65.42       89.49    75.54
" }, "TABREF4": { "html": null, "text": "Experimental Results -CB Data Set", "num": null, "type_str": "table", "content": "" } } } }