|
{ |
|
"paper_id": "D09-1013", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:39:58.090391Z" |
|
}, |
|
"title": "A Rich Feature Vector for Protein-Protein Interaction Extraction from Multiple Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "the University of Tokyo", |
|
"location": { |
|
"addrLine": "Japan Hongo 7-3-1, Bunkyo-ku", |
|
"settlement": "Tokyo", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "mmiwa@is.s.u-tokyo.ac.jp" |
|
}, |
|
{ |
|
"first": "Rune", |
|
"middle": [], |
|
"last": "Saetre", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "the University of Tokyo", |
|
"location": { |
|
"addrLine": "Japan Hongo 7-3-1, Bunkyo-ku", |
|
"settlement": "Tokyo", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "rune.saetre@is.s.u-tokyo.ac.jp" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "the University of Tokyo", |
|
"location": { |
|
"addrLine": "Japan Hongo 7-3-1, Bunkyo-ku", |
|
"settlement": "Tokyo", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "yusuke@is.s.u-tokyo.ac.jp" |
|
}, |
|
{

"first": "Jun'ichi",

"middle": [],

"last": "Tsujii",
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "the University of Tokyo", |
|
"location": { |
|
"addrLine": "Japan Hongo 7-3-1, Bunkyo-ku", |
|
"settlement": "Tokyo", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
],

"year": "",

"venue": null,

"identifiers": {},

"abstract": "Because of the importance of protein-protein interaction (PPI) extraction from text, many corpora have been proposed with slightly differing definitions of proteins and PPI. Since no single corpus is large enough to saturate a machine learning system, it is necessary to learn from multiple different corpora. In this paper, we propose a solution to this challenge. We designed a rich feature vector, and we applied a support vector machine modified for corpus weighting (SVM-CW) to the task of PPI extraction from multiple corpora. The rich feature vector, made from multiple useful kernels, expresses the important information for PPI extraction, and the system with our feature vector was shown to be both faster and more accurate than the original kernel-based system, even when using just a single corpus. SVM-CW learns from one corpus, while using the other corpora for support. SVM-CW is simple, but it is more effective than other methods that have previously been applied successfully to other NLP tasks. With the feature vector and SVM-CW, our system achieved the best performance among all state-of-the-art PPI extraction systems reported so far.",
|
"pdf_parse": { |
|
"paper_id": "D09-1013", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{

"text": "Because of the importance of protein-protein interaction (PPI) extraction from text, many corpora have been proposed with slightly differing definitions of proteins and PPI. Since no single corpus is large enough to saturate a machine learning system, it is necessary to learn from multiple different corpora. In this paper, we propose a solution to this challenge. We designed a rich feature vector, and we applied a support vector machine modified for corpus weighting (SVM-CW) to the task of PPI extraction from multiple corpora. The rich feature vector, made from multiple useful kernels, expresses the important information for PPI extraction, and the system with our feature vector was shown to be both faster and more accurate than the original kernel-based system, even when using just a single corpus. SVM-CW learns from one corpus, while using the other corpora for support. SVM-CW is simple, but it is more effective than other methods that have previously been applied successfully to other NLP tasks. With the feature vector and SVM-CW, our system achieved the best performance among all state-of-the-art PPI extraction systems reported so far.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{

"text": "The performance of an information extraction program is highly dependent on various factors, including the text types (abstracts, complete articles, reports, etc.), the exact definitions of the information to be extracted, and the shared sub-topics of the text collections from which the information is to be extracted.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "Even if two corpora are annotated with the same type of information by two groups, the performance of a program trained on one corpus is unlikely to be reproduced on the other. On the other hand, from a practical point of view, it is worthwhile to use multiple existing annotated corpora together effectively, because making new annotations is very costly.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "One task for which several different corpora exist is protein-protein interaction (PPI) extraction from text. While PPIs play a critical role in understanding the workings of cells in diverse biological contexts, the manual construction of PPI databases such as BIND, DIP, HPRD, IntAct, and MINT (Mathivanan et al., 2006) is known to be very time-consuming and labor-intensive. The automatic extraction of PPI from published papers has therefore been a major research topic in Natural Language Processing for Biology (BioNLP). Among several PPI extraction task settings, the most common is sentence-based, pair-wise PPI extraction. At least four annotated corpora have been provided for this setting: AIMed (Bunescu et al., 2005), HPRD50 (Fundel et al., 2006), IEPA (Ding et al., 2002), and LLL (N\u00e9dellec, 2005). Each of these corpora has been used as a standard corpus for training and testing PPI programs. Moreover, several corpora are annotated for more types of events than just PPI; examples include BioInfer (Pyysalo et al., 2007) and GENIA (Kim et al., 2008a), and they can be reorganized into PPI corpora. Even though all of these corpora were made for PPI extraction, they were constructed based on different definitions of proteins and PPI, which reflect different biological research interests.",
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 311, |
|
"text": "HPRD, IntAct, and MINT (Mathivanan et al., 2006", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 721, |
|
"text": "(Bunescu et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 731, |
|
"end": 752, |
|
"text": "(Fundel et al., 2006)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 760, |
|
"end": 779, |
|
"text": "(Ding et al., 2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 790, |
|
"end": 806, |
|
"text": "(N\u00e9dellec, 2005)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1023, |
|
"end": 1045, |
|
"text": "(Pyysalo et al., 2007)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1058, |
|
"end": 1077, |
|
"text": "(Kim et al., 2008a)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "Research on PPI extraction so far has revealed that the performance on each of the corpora could benefit from additional examples. Learning from multiple annotated corpora could therefore lead to better PPI extraction performance. Various research paradigms, such as inductive transfer learning (ITL) and domain adaptation (DA), have mainly focused on how to effectively use corpora annotated by other groups by reducing the incompatibilities between them (Pan and Yang, 2008).",
|
"cite_spans": [ |
|
{ |
|
"start": 442, |
|
"end": 453, |
|
"text": "Yang, 2008)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "In this paper, we propose the extraction of PPIs from multiple different corpora. We design a rich feature vector, and, as an ITL method, we apply a support vector machine modified for corpus weighting (SVM-CW) (Schweikert et al., 2008), in order to evaluate the use of multiple corpora for the PPI extraction task. Our rich feature vector is made from multiple useful kernels, each of which is based on multiple parser inputs, proposed by Miwa et al. (2008). The system with our feature vector was better than, or at least comparable to, the state-of-the-art PPI extraction systems on every corpus, and is therefore a good starting point for using the multiple corpora. Using one of the corpora as the target corpus, SVM-CW weights the remaining corpora (which we call the source corpora) by their \"goodness\" for training on the target corpus. While SVM-CW is simple, we show that it can improve the performance of the system more effectively and more efficiently than other methods previously proven successful on other NLP tasks. As a result, SVM-CW with our feature vector yields a PPI system with five different models, each of which is superior to the best model in the original single-corpus PPI extraction task.",
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 241, |
|
"text": "(Schweikert et al., 2008)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 446, |
|
"end": 464, |
|
"text": "Miwa et al. (2008)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "While sentence-based, pair-wise PPI extraction was initially tackled with simple methods based on co-occurrences, more sophisticated machine learning systems augmented by NLP techniques have lately been applied (Bunescu et al., 2005). The task has been treated as a classification problem. To pull useful information out of NLP tools, including taggers and parsers, several kernels have been applied to calculate the similarity between PPI pairs. Miwa et al. (2008) recently proposed the use of multiple kernels based on multiple parsers, which outperformed other systems by a wide margin on AIMed, the corpus most frequently used for the PPI extraction task.",
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 238, |
|
"text": "(Bunescu et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 472, |
|
"text": "Miwa et al. (2008)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "[Figure 1: Overview of our PPI extraction system.] To improve the performance using external training data, many ITL and DA methods have been proposed. Most ITL methods assume that the feature space is the same and that the labels may differ in only some examples, while most DA methods assume that the labels are the same and that the feature space is different. Among these methods, we compare SVM-CW with adaptive SVM (aSVM), singular value decomposition (SVD) based alternating structure optimization (SVD-ASO) (Ando et al., 2005), and transfer AdaBoost (TrAdaBoost) (Dai et al., 2007). We do not use semi-supervised learning (SSL) methods, because it would be considerably costly to generate enough clean unlabeled data for SSL (Erkan et al., 2007). aSVM is seen as a promising DA method among several modifications of SVM, including SVM-CW; it tries to find a model that is close to one built for other classification problems. SVD-ASO is one of the most successful SSL, DA, and multi-task learning methods in NLP. It tries to find an additional useful feature space by solving auxiliary problems that are close to the target problem. With well-designed auxiliary problems, the method has been applied to text classification, text chunking, and word sense disambiguation (Ando, 2006), and was reported to perform better than or comparably to the best state-of-the-art systems in all of these tasks. TrAdaBoost was proposed as an ITL method. In training, it reduces the effect of incompatible examples by decreasing their weights, and thereby tries to use the useful examples from the source corpora. It has been applied to text classification, where its reported performance was better than SVM and transductive SVM (Dai et al., 2007).",
|
"cite_spans": [ |
|
{ |
|
"start": 891, |
|
"end": 910, |
|
"text": "(Ando et al., 2005)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 948, |
|
"end": 966, |
|
"text": "(Dai et al., 2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1140, |
|
"end": 1160, |
|
"text": "(Erkan et al., 2007)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1697, |
|
"end": 1709, |
|
"text": "(Ando, 2006)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 2157, |
|
"end": 2175, |
|
"text": "(Dai et al., 2007)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
|
{ |
|
"start": 427, |
|
"end": 435, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "The target task of our system is sentence-based, pair-wise PPI extraction. It is formulated as a classification problem that judges whether a given protein pair in a sentence interacts, e.g., the pair (p1, p2) in the sentence \"XPG (p1) protein interacts with multiple subunits of TFIIH (prot) and with CSB (p2) protein.\" Figure 1 shows an overview of the proposed PPI extraction system. As the classifier using a single corpus, we use the 2-norm soft-margin linear SVM (L2-SVM) classifier with the dual coordinate descent (DCD) method by Hsieh et al. (2008). In this section, we explain the two main components: the feature vector, and the corpus weighting method for multiple corpora.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 250, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PPI Extraction System", |
|
"sec_num": "3" |
|
}, |
|
{

"text": "We propose a feature vector with three types of features, corresponding to three different kernels, each combined with two parsers: Enju 2.3.0 and KSDEP beta 1. We use this feature vector because the kernels with these parsers were shown to be effective for PPI extraction by Miwa et al. (2008), and because it is important to start from a single-corpus system with good performance. Both parsers were retrained using the GENIA Treebank corpus provided by Kim et al. (2003). By using our linear feature vector, we can calculate faster with fast linear classifiers like L2-SVM, and we also obtain more accurate extraction than with the original kernel method. The features from each parser output are grouped according to feature type and parser, and each group of features is separately normalized by the L2-norm 1. Finally, all values are put into a single feature vector, and the whole feature vector is then also normalized by the L2-norm. The features are constructed from the predicate argument structures (PAS) produced by Enju, and from the dependency trees produced by KSDEP.",
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 320, |
|
"text": "Miwa et al. (2008)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 495, |
|
"text": "Kim et al. (2003)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Vector", |
|
"sec_num": "3.1" |
|
}, |
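{

"text": "A minimal sketch (ours, not the authors' code) of the two-stage normalization described above, assuming each (feature type, parser) group arrives as a sparse feature dict:\n\nimport math\n\ndef l2_normalize(vec):\n    # Scale a sparse feature dict to unit L2 norm.\n    norm = math.sqrt(sum(v * v for v in vec.values()))\n    return {k: v / norm for k, v in vec.items()} if norm > 0 else vec\n\ndef build_feature_vector(groups):\n    # groups maps a (feature_type, parser) name to a sparse feature dict.\n    # Each group is L2-normalized separately, the groups are merged, and\n    # the merged vector is L2-normalized again.\n    merged = {}\n    for name, vec in groups.items():\n        for k, v in l2_normalize(vec).items():\n            merged[(name, k)] = v\n    return l2_normalize(merged)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Vector",

"sec_num": "3.1"

},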
|
{

"text": "[Figure: v-walks and e-walks extracted from the parse structures of the example pair in Figure 2.]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Vector", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The BOW feature includes the lemma form of a word, its relative position to the target pair of proteins (Before, Middle, After), and its frequency in the target sentence. BOW features form the BOW kernel in the original kernel method. BOW features for the pair in Figure 2 are shown in Figure 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 272, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 294, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Bag-of-Words (BOW) Features", |
|
"sec_num": "3.1.1" |
|
}, |
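{

"text": "A minimal sketch (an illustration of ours, not the original implementation) of BOW feature extraction, assuming the sentence is given as a list of lemmas with known entity indices:\n\nfrom collections import Counter\n\ndef bow_features(lemmas, e1_idx, e2_idx):\n    # Each feature is (lemma, relative position); its value is the\n    # frequency of that lemma in that region of the target sentence.\n    lo, hi = sorted((e1_idx, e2_idx))\n    feats = Counter()\n    for i, lemma in enumerate(lemmas):\n        if i < lo:\n            pos = 'B'  # before the pair\n        elif i > hi:\n            pos = 'A'  # after the pair\n        else:\n            pos = 'M'  # between the entities\n        feats[(lemma, pos)] += 1\n    return feats",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Bag-of-Words (BOW) Features",

"sec_num": "3.1.1"

},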
|
{

"text": "SP features include vertex walks (v-walks), edge walks (e-walks), and their subsets (Kim et al., 2008b) on the target pair in a parse structure, and they represent the connection between the pair. The features are subsets of the tree kernels on the shortest path (Saetre et al., 2007). A v-walk includes two lemmas and the link between them, while an e-walk includes a lemma and its two links. The links indicate the predicate-argument relations for PAS, and the dependencies for dependency trees.",
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 103, |
|
"text": "(Kim et al., 2008b)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 283, |
|
"text": "(Saetre et al., 2007)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shortest Path (SP) Features", |
|
"sec_num": "3.1.2" |
|
}, |
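{

"text": "A minimal sketch (our reading of the walk definitions, not the authors' code) of enumerating v-walks and e-walks, assuming the shortest path is given as an alternating list of vertex (lemma) and edge (link) labels:\n\ndef walks(path):\n    # path alternates vertex and edge labels along the shortest path,\n    # e.g. ['ENTITY1', 'SBJ-r', 'interact', 'COOD', 'ENTITY2'].\n    v_walks = [tuple(path[i:i + 3]) for i in range(0, len(path) - 2, 2)]  # lemma-link-lemma\n    e_walks = [tuple(path[i:i + 3]) for i in range(1, len(path) - 2, 2)]  # link-lemma-link\n    return v_walks, e_walks",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Shortest Path (SP) Features",

"sec_num": "3.1.2"

},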
|
{

"text": "Graph features are made from the all-paths graph kernel proposed by Airola et al. (2008). The kernel represents the target pair using graph matrices based on two subgraphs, and the graph features are all the non-zero elements of the graph matrices.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Features", |
|
"sec_num": "3.1.3" |
|
}, |
|
{

"text": "The two subgraphs are a parse structure subgraph (PSS) and a linear order subgraph (LOS). Figure 6 shows the subgraphs of the sentence parsed by KSDEP in Figure 2. PSS represents the parse structure of a sentence and has word vertices and link vertices. A word vertex contains its lemma and its part-of-speech (POS), while a link vertex contains its link. Additionally, both types of vertices contain their positions relative to the shortest path: the \"IP\" in each vertex on the shortest path marks this position and differentiates these vertices from the others, such as \"P\", \"CC\", and \"and:CC\" in Figure 6. LOS represents the word sequence in the sentence; it has word vertices, each of which contains its lemma, its position relative to the target pair, and its POS.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 98, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 166, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 625, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Graph Features", |
|
"sec_num": "3.1.3" |
|
}, |
|
{ |
|
"text": "Each subgraph is represented by a graph matrix G as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Features", |
|
"sec_num": "3.1.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{

"start": 0,

"end": 8,

"text": "EQUATION",

"ref_id": "EQREF",

"raw_str": "G = L^T \\sum_{n=1}^{\\infty} A^n L,",

"eq_num": "(1)"

}
|
], |
|
"section": "Graph Features", |
|
"sec_num": "3.1.3" |
|
}, |
|
{

"text": "where L is an N \u00d7 L label matrix, A is an N \u00d7 N edge matrix, N is the number of vertices, and L is the number of labels. The label of a vertex includes all of the information described above (e.g., \"ENTITY1:NN:IP\" in Figure 6); if two vertices carry exactly the same information, their labels are the same. G can be calculated efficiently by using the Neumann series. The label matrix represents the correspondence between labels and vertices: L_ij is 1 if the i-th vertex corresponds to the j-th label, and 0 otherwise. The edge matrix represents the connections between pairs of vertices: A_ij is a weight w_ij (0.9 or 0.3 in Figure 6) if the i-th vertex is connected to the j-th vertex, and 0 otherwise. By this calculation, G_ij represents the sum of the weights of all paths between the i-th label and the j-th label. [Figure 7: Learning curves on two large corpora. The x-axis is the percentage of the examples in a corpus used for training. The curves are obtained by a 10-fold CV with a random split.] Table 1 shows the sizes of the PPI corpora that we used. Their wide-ranging differences, including their sizes, have been manually analyzed in previous work. While AIMed, HPRD50, IEPA, and LLL were all annotated as PPI corpora, BioInfer in its original form contains much more fine-grained information than just PPI; it was transformed into a PPI corpus by a program, making it the largest of the five. Among them, AIMed alone was created by annotating whole abstracts, while the other corpora were made by annotating single sentences selected from abstracts. Figure 7 shows the learning curves on the two large corpora, AIMed and BioInfer, obtained by performing a 10-fold cross validation (CV) on each corpus with random splits using our system. The curves show that the performance can still benefit from additional examples. To get a better PPI extraction system for a chosen target, we need to draw useful shared information from external source corpora. We refer to examples in the source corpora as \"source examples\", and to examples in the target corpus as \"target examples\". Among the corpora, we assume that the labels of some examples are incompatible and that their distributions differ, but that the feature space is shared.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 230, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 631, |
|
"end": 639, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1004, |
|
"end": 1011, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1561, |
|
"end": 1569, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Graph Features", |
|
"sec_num": "3.1.3" |
|
}, |
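{

"text": "A minimal numpy sketch (our assumption, not the authors' code) of Equation (1), using the closed form of the Neumann series, which is valid when the spectral radius of A is below 1:\n\nimport numpy as np\n\ndef graph_matrix(L, A):\n    # G = L^T (sum over n >= 1 of A^n) L.  When spectral_radius(A) < 1,\n    # the Neumann series sums to (I - A)^{-1} - I.\n    n = A.shape[0]\n    S = np.linalg.inv(np.eye(n) - A) - np.eye(n)\n    return L.T @ S @ L",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Graph Features",

"sec_num": "3.1.3"

},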
|
{

"text": "[Figure 6: Parse structure subgraph and linear order subgraph used to extract the graph features of the pair in Figure 2. The parse structure subgraph is from the parse tree produced by KSDEP.]",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 105, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 208, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus Weighting for Mixing Corpora", |
|
"sec_num": "3.2" |
|
}, |
|
|
{

"text": "In order to draw useful information from the source corpora and obtain a better model for the target corpus, we use SVM-CW, which has been used as a DA method. Given a set of instance-label pairs (x_i, y_i), i = 1, ..., l_s + l_t, with x_i \u2208 R^n and y_i \u2208 {\u22121, +1}, we solve the following problem:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Weighting for Mixing Corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{

"start": 0,

"end": 8,

"text": "EQUATION",

"ref_id": "EQREF",

"raw_str": "\\min_w \\frac{1}{2} w^T w + C_s \\sum_{i=1}^{l_s} \\ell_i + C_t \\sum_{i=l_s+1}^{l_s+l_t} \\ell_i,",

"eq_num": "(2)"

}
|
], |
|
"section": "Corpus Weighting for Mixing Corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "where w is a weight vector, \u2113_i is the loss on example i, and l_s and l_t are the numbers of source and target examples, respectively. C_s \u2265 0 and C_t \u2265 0 are penalty parameters. We use the squared hinge loss",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Weighting for Mixing Corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "\u2113_i = max(0, 1 \u2212 y_i w^T x_i)^2.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Weighting for Mixing Corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Here, the source corpora are treated as one corpus. The problem, excluding the second term, is equal to L2-SVM. The problem can be solved using the DCD method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Weighting for Mixing Corpora", |
|
"sec_num": "3.2" |
|
}, |
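{

"text": "A hedged numpy sketch of Equation (2): the paper solves it with the DCD method, while this illustrative version uses plain full-batch gradient descent on the squared hinge loss with per-corpus penalties:\n\nimport numpy as np\n\ndef svm_cw(X, y, ls, Cs, Ct, lr=0.01, epochs=200):\n    # The first ls rows of X are source examples; the rest are target.\n    # y holds labels in {-1.0, +1.0}.\n    n, d = X.shape\n    C = np.where(np.arange(n) < ls, Cs, Ct)\n    w = np.zeros(d)\n    for _ in range(epochs):\n        margin = 1.0 - y * (X @ w)            # 1 - y_i w^T x_i\n        active = margin > 0                   # examples with nonzero loss\n        # objective: 0.5 ||w||^2 + sum_i C_i max(0, margin_i)^2\n        grad = w - 2.0 * X[active].T @ (C[active] * margin[active] * y[active])\n        w -= lr * grad\n    return w",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Corpus Weighting for Mixing Corpora",

"sec_num": "3.2"

},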
|
{

"text": "As an ITL method, SVM-CW weights each corpus and tries to benefit from the source corpora by adjusting the effect of their compatibility and incompatibility. For this adjustment, the penalty parameters must be set properly. Since the differences among the corpora are wide-ranging and not known in advance, we estimated the parameters empirically by performing a 10-fold CV on the training data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Weighting for Mixing Corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "We used five corpora for evaluation: AIMed, BioInfer, HPRD50, IEPA, and LLL. For the comparison with other methods, we report the F-score (%) and the area under the receiver operating characteristic (ROC) curve (AUC) (%), using an (abstract-wise) 10-fold CV and a one-answer-per-occurrence criterion. These measures are commonly used for PPI extraction tasks. The F-score is the harmonic mean of precision and recall. The ROC curve is a plot of the true positive rate (TPR) against the false positive rate (FPR) at different thresholds. We tuned the regularization parameters of all classifiers by performing a 10-fold CV on the training data using a random split. The other parameters were fixed, and we report the highest of the macro-averaged F-scores as our final F-score. For the 10-fold CV, we split the corpora as recommended by Airola et al. (2008).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Settings", |
|
"sec_num": "4.1" |
|
}, |
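{

"text": "As a concrete illustration (ours, not from the paper), the F-score can be computed from raw counts:\n\ndef f_score(tp, fp, fn):\n    # Harmonic mean of precision and recall.\n    p = tp / (tp + fp)\n    r = tp / (tp + fn)\n    return 2 * p * r / (p + r)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Settings",

"sec_num": "4.1"

},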
|
{ |
|
"text": "In this section, we evaluate our system on a single corpus, in order to evaluate our feature vector and to justify the use of the following modules: normalization methods and classification methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PPI Extraction on a Single Corpus", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "First, we compare our preprocessing method with other preprocessing methods to confirm how much it improves the performance. Our method produced a 64.2% F-score using L2-SVM on AIMed. Scaling all features individually to a maximal absolute value of 1 produced only a 44.2% F-score, while normalizing the whole feature vector by the L2-norm produced a 61.5% F-score. Both methods were inferior to ours, because the values of features in the same group should be treated together. [Table 2: Classification performance on AIMed using five different linear classifiers. The F-score (F) and area under the ROC curve (AUC) are shown. L2 is L2-SVM, L1 is L1-SVM, LR is logistic regression, AP is averaged perceptron, and CW is confidence-weighted linear classification.]",

"cite_spans": [],
|
"ref_spans": [ |
|
{ |
|
"start": 513, |
|
"end": 520, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PPI Extraction on a Single Corpus", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "Weighting each group with different values may produce even better results, as will be explored in our future work.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PPI Extraction on a Single Corpus", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "Next, using our feature vector, we applied five different linear classifiers to extract PPI from AIMed: L2-SVM, 1-norm soft-margin SVM (L1-SVM), logistic regression (LR) (Fan et al., 2008), averaged perceptron (AP) (Collins, 2002), and confidence-weighted linear classification (CW) (Dredze et al., 2008). Table 2 shows the performance of these classifiers on AIMed. For AP and CW, we employed settings better suited to the task than the original methods: we used a Widrow-Hoff learning rule (Bishop, 1995) for AP, and we performed one iteration for CW. L2-SVM is as good as, if not better than, the other classifiers in both F-score and AUC, and it is at least as fast as they are. AP and CW are worse than the other three methods, because they require a large number of examples and are unsuitable for the current task. This result indicates that, with our feature vector, all the linear classifiers except AP and CW perform almost equally.",
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 188, |
|
"text": "(Fan et al., 2008)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 231, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 306, |
|
"text": "(Dredze et al., 2008)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 510, |
|
"text": "(Bishop, 1995)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 316, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PPI Extraction on a Single Corpus", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "Finally, we implemented the kernel method by Miwa et al. (2008). For a 10-fold CV on AIMed, its running time was 9,507 seconds, and its performance was a 61.5% F-score and 87.1% AUC. Our system used 4,702 seconds, with a 64.2% F-score and 89.1% AUC. This result shows that our system, with L2-SVM and our new feature vector, is both more accurate and faster than the kernel-based system.",
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 63, |
|
"text": "Miwa et al. (2008)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PPI Extraction on a Single Corpus", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "In this section, we first apply each model from a source corpus to a target corpus, to show how different the corpora are. We then evaluate SVM-CW by comparing it with three other methods (see Section 2) using limited features, and apply it to every corpus. [Figure 8: F-score on a target corpus using a model trained on a source corpus. For comparison, we show the 10-fold CV result on each target corpus and the co-occurrence baseline. The regularization parameter was fixed to 1.]",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
|
{ |
|
"start": 430, |
|
"end": 438, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Corpus Weighting", |
|
"sec_num": "4.3" |
|
}, |
|
|
{

"text": "First, we apply the model from each source corpus to each target corpus. Figure 8 shows how the model from a source corpus performs on the target corpus. Interestingly, the model from IEPA performs better on LLL than the model from LLL itself. All the other results show that using a different corpus (except IEPA) is worse than using the target corpus itself. However, the cross-corpora scores are still better than the co-occurrence baseline, which indicates that the corpora share some information, even though they are not fully compatible.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 75, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Corpus Weighting", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "Next, we compare SVM-CW with three other methods: aSVM, SVD-ASO, and TrAdaBoost. For this comparison, we used our feature vector without the graph features, because SVD-ASO and TrAdaBoost require large computational resources. We applied SVD-ASO and TrAdaBoost in the following way. For SVD-ASO, we made 400 auxiliary problems from the labels of each corpus by splitting features randomly, and extracted 50 additional features for each of the 4 feature groups; in total, we made 200 new additional features from 2,000 auxiliary problems. As recommended by Ando et al. (2005), we removed negative weights, performed SVD on each feature group, and iterated ASO once. Since AdaBoost easily overfitted with our rich feature vector, we applied soft margins (Ratsch et al., 2001) to TrAdaBoost. The update parameter for source examples was calculated from the update parameter on the training data in AdaBoost and the original parameter in TrAdaBoost; this ensures that the parameter equals the original one when the C value of the soft margin approaches infinity. [Table 4: F-score and AUC by SVM-CW. Rows correspond to a target corpus, and columns to a source corpus. A: AIMed, B: BioInfer, H: HPRD50, I: IEPA, and L: LLL. \"all\" signifies that all source corpora are used as one source corpus, ignoring the differences among the corpora. For comparison, we show the 10-fold CV result on each target corpus.]",
|
"cite_spans": [ |
|
{ |
|
"start": 757, |
|
"end": 778, |
|
"text": "(Ratsch et al., 2001)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1086, |
|
"end": 1093, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Corpus Weighting", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "Table 3 shows the results of the comparison. SVM-CW improved the classification performance at least as much as all the other methods. The improvement is mainly attributed to the aggressive use of source examples while learning the model: some source examples can serve as training data, as indicated in Figure 8. SVM-CW sets no restriction between C_s and C_t in Equation (2), so it can use source examples aggressively while learning the model. Since aSVM transfers a model and SVD-ASO transfers an additional feature space, neither uses the source examples while learning the model. In addition to this difference in data usage, the settings of aSVM and SVD-ASO do not match the current task: for aSVM, the DA assumption (that the labels are the same) does not hold, and for SVD-ASO, the numbers of both source examples and auxiliary problems are much smaller than those reported by Ando et al. (2005). TrAdaBoost uses the source examples while learning the model, but it never increases their weights and instead tries to reduce their effects.",
|
"cite_spans": [ |
|
{ |
|
"start": 937, |
|
"end": 955, |
|
"text": "Ando et al. (2005)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 328, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Corpus Weighting", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "Finally, we apply SVM-CW to all corpora using all features. Table 4 summarizes the F-score and AUC obtained by SVM-CW with all features. SVM-CW is especially effective for small corpora, showing that it can adapt source corpora to a small annotated target corpus. The improvement on AIMed is small compared to that on BioInfer, even though these corpora are similar in size; one reason is that whole abstracts are annotated in AIMed, which makes the examples biased. The difference between L2-SVM and SVM-CW + IEPA on AIMed is small, but it is statistically significant (McNemar test (McNemar, 1947), P = 0.0081). In the cases of HPRD50 + IEPA, LLL + IEPA, and two folds of BioInfer + IEPA, C_s is larger than C_t in Equation (2). This is worth noting, because the source corpus is weighted more heavily than the target corpus, and yet the prediction performance on the target corpus improves. Most methods put more trust in the target corpus than in the source corpus, and our results show that this setting is not always effective for mixing corpora. The results also indicate that IEPA contains more information useful for extracting PPI than the other corpora, and that using source examples aggressively is important for these combinations. We compared the results of L2-SVM and SVM-CW + IEPA on AIMed, and found that, among the 61 newly found pairs, 38 were described as \"interaction\" or \"binding\" in their sentences. This analysis is evidence that IEPA contains instances that help find such interactions, and that SVM-CW helps to collect gold pairs that lack enough supporting instances in a single corpus by adding instances from other corpora. SVM-CW missed coreferential relations that were also missed by L2-SVM. This can be attributed to the fact that coreferential information is not stored in our current feature vector, so we need an even more expressive feature space; this is left as future work. SVM-CW is effective on most corpus combinations, and all the models from single corpora can be improved by adding other source corpora. This result is impressive, because the single-corpus baselines by L2-SVM are already better than or at least comparable to other state-of-the-art PPI extraction systems, and because the differences among the corpora vary widely depending on various factors, including their annotation policies. These results suggest that SVM-CW is useful as an ITL method. [Table caption fragment: A: AIMed, B: BioInfer, H: HPRD50, I: IEPA, and L: LLL corpora. The results with the highest F-score from Table 4 are reported as the results for SVM-CW.]",
|
"cite_spans": [ |
|
{ |
|
"start": 599, |
|
"end": 628, |
|
"text": "(McNemar test (McNemar, 1947)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 67, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1522, |
|
"end": 1529, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Corpus Weighting", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "We compare our system with previously published PPI extraction systems; Tables 5 and 6 summarize the comparison. Table 5 compares several PPI extraction systems evaluated on the AIMed corpus. As indicated, the performance of the heavy kernel method is lower than that of our fast rich-feature-vector method. Our system is, to the best of our knowledge, the best performing PPI extraction system evaluated on the AIMed corpus, in terms of both AUC and F-score. Airola et al. (2008) first reported results using all five corpora. We cannot directly compare our result with their F-score results, because they tuned the threshold, but our system still outperforms theirs on every corpus in AUC. The results also indicate that our system outperforms other systems on all PPI corpora, and that both the rich feature vector and the corpus weighting are effective for the PPI extraction task.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 126, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with Other PPI Systems", |
|
"sec_num": "4.4" |
|
}, |
|
{

"text": "In this paper, we proposed a PPI extraction system with a rich feature vector, using a corpus weighting method (SVM-CW) for combining multiple PPI corpora. The feature vector extracts as much information as possible from the main training corpus, and SVM-CW incorporates other external source corpora in order to improve the performance of the classifier on the main target corpus. To the best of our knowledge, this is the first application of ITL and DA methods to PPI extraction. As a result, the system, with SVM-CW and the feature vector, outperformed all other PPI extraction systems on all of the corpora. The PPI corpora share some information, and it is shown to be effective to add other source corpora when working with a specific target corpus.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{

"text": "The main contributions of this paper are: 1) conducting experiments in extracting PPI using multiple corpora, 2) suggesting a rich feature vector using several previously proposed features and normalization methods, and 3) showing that the combination of SVM with corpus weighting and the new feature vector improves results on this task compared with prior work.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{

"text": "There are many differences among the corpora that we used, and some of these differences remain unresolved. For further improvement, it will be necessary to investigate what is shared and what differs among the corpora. The SVM-CW method and the PPI extraction system can be applied, without modification, to other classification tasks and to other binary relation extraction tasks. There are several other tasks for which many different corpora exist that at first glance seem compatible. By applying SVM-CW to such corpora, we will analyze which differences can be resolved by SVM-CW and which require manual resolution.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For the PPI extraction system, we found many false negatives that need to be resolved. For further improvement, we need to analyze the cause", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The vector normalized by the L2-norm is also called a unit vector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was partially supported by Grant-in-Aid for Specially Promoted Research (MEXT, Japan), Genome Network Project (MEXT, Japan), and Scientific Research (C) (General) (MEXT, Japan).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The numbers of positive and all examples, precision (P), recall (R), F-score (F), and AUC are shown. The result with the highest F-score from Table 4 is reported as the result for SVM-CW. The scores in the parentheses of Miwa et al. (2008) indicate the result using the same 10-fold splits as our result", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Comparison with previous PPI extraction results on the AIMed corpus", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Table 5: Comparison with previous PPI extraction results on the AIMed corpus. The numbers of positive and all examples, precision (P), recall (R), F-score (F), and AUC are shown. The result with the highest F-score from Table 4 is reported as the result for SVM-CW. The scores in the parentheses of Miwa et al. (2008) indicate the result using the same 10-fold splits as our result, as indicated in Section 4.2.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "All-paths graph kernel for protein-protein interaction extraction with evaluation of cross corpus learning", |
|
"authors": [ |
|
{ |
|
"first": "Antti", |
|
"middle": [], |
|
"last": "Airola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jari", |
|
"middle": [], |
|
"last": "Bj\u00f6rne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Pahikkala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antti Airola, Sampo Pyysalo, Jari Bj\u00f6rne, Tapio Pahikkala, Filip Ginter, and Tapio Salakoski. 2008. All-paths graph kernel for protein-protein interac- tion extraction with evaluation of cross corpus learn- ing. BMC Bioinformatics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A framework for learning predictive structures from multiple tasks and unlabeled data", |
|
"authors": [ |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Rie Kubota Ando", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bartlett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "1817--1853", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rie Kubota Ando, Tong Zhang, and Peter Bartlett. 2005. A framework for learning predictive struc- tures from multiple tasks and unlabeled data. Jour- nal of Machine Learning Research, 6:1817-1853.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Applying alternating structure optimization to word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Ando", |
|
"middle": [], |
|
"last": "Rie Kubota", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rie Kubota Ando. 2006. Applying alternating struc- ture optimization to word sense disambiguation. In Proceedings of the Tenth Conference on Compu- tational Natural Language Learning (CoNLL-X), pages 77-84, June.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Neural Networks for Pattern Recognition", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Bishop", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Bishop. 1995. Neural Networks for Pattern Recognition. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Subsequence kernels for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Razvan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bunescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "NIPS 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Razvan C. Bunescu and Raymond J. Mooney. 2005. Subsequence kernels for relation extraction. In NIPS 2005.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Comparative experiments on learning information extractors for proteins and their interactions", |
|
"authors": [ |
|
{ |
|
"first": "Yuk", |
|
"middle": [ |
|
"Wah" |
|
], |
|
"last": "Ramani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Artificial Intelligence in Medicine", |
|
"volume": "33", |
|
"issue": "2", |
|
"pages": "139--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramani, and Yuk Wah Wong. 2005. Comparative experiments on learning information extractors for proteins and their interactions. Artificial Intelligence in Medicine, 33(2):139-155.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "EMNLP 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. In EMNLP 2002, pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Boosting for transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Wenyuan", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gui-Rong", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ICML 2007", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "193--200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. 2007. Boosting for transfer learning. In ICML 2007, pages 193-200.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Mining medline: abstracts, sentences, or phrases?", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Berleant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Nettleton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Wurtele", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Pacific Symposium on Biocomputing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "326--337", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Ding, D. Berleant, D. Nettleton, and E. Wurtele. 2002. Mining medline: abstracts, sentences, or phrases? Pacific Symposium on Biocomputing, pages 326-337.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Confidence-weighted linear classification", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "264--271", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Dredze, Koby Crammer, and Fernando Pereira. 2008. Confidence-weighted linear classification. In ICML 2008, pages 264-271.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Semi-supervised classification for extracting protein interaction sentences using dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Gunes", |
|
"middle": [], |
|
"last": "Erkan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arzucan", |
|
"middle": [], |
|
"last": "Ozgur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gunes Erkan, Arzucan Ozgur, and Dragomir R. Radev. 2007. Semi-supervised classification for extract- ing protein interaction sentences using dependency parsing. In EMNLP 2007.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "LIBLINEAR: A library for large linear classification", |
|
"authors": [ |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Rong-En Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cho-Jui", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang-Rui", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Jen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1871--1874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Relex-relation extraction using dependency parse trees", |
|
"authors": [ |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Fundel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "K\u00fcffner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralf", |
|
"middle": [], |
|
"last": "Zimmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Bioinformatics", |
|
"volume": "23", |
|
"issue": "3", |
|
"pages": "365--371", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katrin Fundel, Robert K\u00fcffner, and Ralf Zimmer. 2006. Relex-relation extraction using dependency parse trees. Bioinformatics, 23(3):365-371.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A dual coordinate descent method for large-scale linear SVM", |
|
"authors": [ |
|
{ |
|
"first": "Cho-Jui", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Jen", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sathiya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Keerthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sundararajan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "408--415", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. 2008. A dual coordinate descent method for large-scale lin- ear SVM. In ICML 2008, pages 408-415.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "GENIA corpus -a semantically annotated corpus for bio-textmining", |
|
"authors": [ |
|
{ |
|
"first": "Jin-Dong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoko", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuka", |
|
"middle": [], |
|
"last": "Tateisi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Bioinformatics", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "180--182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jin-Dong Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. GENIA corpus -a semanti- cally annotated corpus for bio-textmining. Bioinfor- matics, 19:i180-i182.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Corpus annotation for mining biomedical events from literature", |
|
"authors": [ |
|
{ |
|
"first": "Jin-Dong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoko", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jin-Dong Kim, Tomoko Ohta, and Jun'ichi Tsujii. 2008a. Corpus annotation for mining biomedical events from literature. BMC Bioinformatics, 9:10.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Kernel approaches for genic interaction extraction", |
|
"authors": [ |
|
{ |
|
"first": "Seonho", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juntae", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jihoon", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Bioinformatics", |
|
"volume": "24", |
|
"issue": "1", |
|
"pages": "118--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seonho Kim, Juntae Yoon, and Jihoon Yang. 2008b. Kernel approaches for genic interaction extraction. Bioinformatics, 24(1):118-126.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "An evaluation of human protein-protein interaction data in the public domain", |
|
"authors": [ |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Mathivanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Balamurugan", |
|
"middle": [], |
|
"last": "Periaswamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "TKB", |
|
"middle": [], |
|
"last": "Gandhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kumaran", |
|
"middle": [], |
|
"last": "Kandasamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shubha", |
|
"middle": [], |
|
"last": "Suresh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Riaz", |
|
"middle": [], |
|
"last": "Mohmood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "YL", |
|
"middle": [], |
|
"last": "Ramachandra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akhilesh", |
|
"middle": [], |
|
"last": "Pandey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "7", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suresh Mathivanan, Balamurugan Periaswamy, TKB Gandhi, Kumaran Kandasamy, Shubha Suresh, Riaz Mohmood, YL Ramachandra, and Akhilesh Pandey. 2006. An evaluation of human protein-protein inter- action data in the public domain. BMC Bioinformat- ics, 7 Suppl 5:S19.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Note on the sampling error of the difference between correlated proportions or percentages", |
|
"authors": [ |
|
{ |
|
"first": "Quinn", |
|
"middle": [], |
|
"last": "Mcnemar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1947, |
|
"venue": "Psychometrika", |
|
"volume": "12", |
|
"issue": "2", |
|
"pages": "153--157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157, June.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Combining multiple layers of syntactic information for proteinprotein interaction extraction", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rune", |
|
"middle": [], |
|
"last": "Saetre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoko", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Third International Symposium on Semantic Mining in Biomedicine (SMBM 2008)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Makoto Miwa, Rune Saetre, Yusuke Miyao, Tomoko Ohta, and Jun'ichi Tsujii. 2008. Combining mul- tiple layers of syntactic information for protein- protein interaction extraction. In Proceedings of the Third International Symposium on Semantic Mining in Biomedicine (SMBM 2008), pages 101-108.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Taskoriented evaluation of syntactic parsers and their representations", |
|
"authors": [ |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rune", |
|
"middle": [], |
|
"last": "Saetre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takuya", |
|
"middle": [], |
|
"last": "Matsuzaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 45th Meeting of the Association for Computational Linguistics (ACL'08:HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yusuke Miyao, Rune Saetre, Kenji Sagae, Takuya Matsuzaki, and Jun'ichi Tsujii. 2008. Task- oriented evaluation of syntactic parsers and their representations. In Proceedings of the 45th Meet- ing of the Association for Computational Linguistics (ACL'08:HLT).", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learning language in logicgenic interaction extraction challenge", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "N\u00e9dellec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the LLL'05 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire N\u00e9dellec. 2005. Learning language in logic - genic interaction extraction challenge. In Proceed- ings of the LLL'05 Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A survey on transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Sinno Jialin Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sinno Jialin Pan and Qiang Yang. 2008. A survey on transfer learning. Technical Report HKUST-CS08- 08, Department of Computer Science and Engineer- ing, Hong Kong University of Science and Technol- ogy, Hong Kong, China, November.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "BioInfer: A corpus for information extraction in the biomedical domain", |
|
"authors": [ |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juho", |
|
"middle": [], |
|
"last": "Heimonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jari", |
|
"middle": [], |
|
"last": "Bj\u00f6rne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorma", |
|
"middle": [], |
|
"last": "Boberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jouni", |
|
"middle": [], |
|
"last": "J\u00e4rvinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sampo Pyysalo, Filip Ginter, Juho Heimonen, Jari Bj\u00f6rne, Jorma Boberg, Jouni J\u00e4rvinen, and Tapio Salakoski. 2007. BioInfer: A corpus for infor- mation extraction in the biomedical domain. BMC Bioinformatics, 8:50.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Comparative analysis of five protein-protein interaction corpora", |
|
"authors": [ |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antti", |
|
"middle": [], |
|
"last": "Airola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juho", |
|
"middle": [], |
|
"last": "Heimonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jari", |
|
"middle": [], |
|
"last": "Bj\u00f6rne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sampo Pyysalo, Antti Airola, Juho Heimonen, Jari Bj\u00f6rne, Filip Ginter, and Tapio Salakoski. 2008. Comparative analysis of five protein-protein inter- action corpora. In BMC Bioinformatics, volume 9(Suppl 3), page S6.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Soft margins for adaboost", |
|
"authors": [ |
|
{ |
|
"first": "Gunnar", |
|
"middle": [], |
|
"last": "Ratsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takashi", |
|
"middle": [], |
|
"last": "Onoda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus-Robert", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Machine Learning", |
|
"volume": "42", |
|
"issue": "", |
|
"pages": "287--320", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gunnar Ratsch, Takashi Onoda, and Klaus-Robert Muller. 2001. Soft margins for adaboost. Machine Learning, 42(3):287-320.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Syntactic features for protein-protein interaction extraction", |
|
"authors": [ |
|
{ |
|
"first": "Rune", |
|
"middle": [], |
|
"last": "Saetre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "LBM 2007 short papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rune Saetre, Kenji Sagae, and Jun'ichi Tsujii. 2007. Syntactic features for protein-protein interaction ex- traction. In LBM 2007 short papers.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "An empirical analysis of domain adaptation algorithms for genomic sequence analysis", |
|
"authors": [ |
|
{ |
|
"first": "Gabriele", |
|
"middle": [], |
|
"last": "Schweikert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Widmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Sch\u00f6lkopf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gunnar", |
|
"middle": [], |
|
"last": "R\u00e4tsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1433--1440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriele Schweikert, Christian Widmer, Bernhard Sch\u00f6lkopf, and Gunnar R\u00e4tsch. 2008. An empir- ical analysis of domain adaptation algorithms for genomic sequence analysis. In NIPS, pages 1433- 1440.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Cross-domain video concept detection using adaptive SVMs", |
|
"authors": [ |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rong", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Hauptmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "MULTIMEDIA '07: Proceedings of the 15th international conference on Multimedia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun Yang, Rong Yan, and Alexander G. Hauptmann. 2007. Cross-domain video concept detection using adaptive SVMs. In MULTIMEDIA '07: Proceed- ings of the 15th international conference on Multi- media, pages 188-197.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "A sentence including an interacting protein pair (p1, p2", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Extraction of a feature vector from the target sentence of proteins in a sentence is interacting or not.Figure 2shows an example of a sentence in which the given pair (p1 and p2) actually interacts.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "summarizes the way in which the feature vector is constructed. The system extracts Bag-of-Words (BOW), shortest path (SP), and graph features from the output of two parsers. The PROT M:1, and M:1, interact M:1, multiple M:1, of M:1, protein M:1, subunit M:1, with M:2, protein A:1 Bag-of-Words features of the pair in Figure 2 with their positions (B:Before, M:in the Middle of, A:After) and frequencies.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
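The position-tagged BOW features above lend themselves to a compact implementation. Below is a minimal sketch, assuming pre-lemmatized tokens with the candidate pair already replaced by PROT placeholders; the function name and feature encoding are illustrative assumptions, not the authors' code.

from collections import Counter

def bow_features(lemmas, p1_idx, p2_idx):
    # Count each lemma in one of three zones relative to the pair:
    # B (before the first protein), M (between the two), A (after the second).
    lo, hi = sorted((p1_idx, p2_idx))
    feats = Counter()
    for i, lemma in enumerate(lemmas):
        if i in (lo, hi):
            continue  # the candidate pair itself is not a BOW feature
        zone = "B" if i < lo else ("M" if i < hi else "A")
        feats[(lemma, zone)] += 1
    return feats

# Toy usage (sentence and indices are made up for illustration):
lemmas = ["it", "PROT", "interact", "with", "PROT", "subunit"]
feats = bow_features(lemmas, 1, 4)
# -> ('it', 'B'): 1, ('interact', 'M'): 1, ('with', 'M'): 1, ('subunit', 'A'): 1

Note that a third protein mention between the pair would still contribute a feature such as ('PROT', 'M'), matching the PROT M:1 entry listed in the caption above.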
|
"FIGREF3": { |
|
"text": "Vertex walks, edge walks in the upper shortest path between the proteins in the parse tree by KSDEP. The walks and their subsets are used as the shortest path features of the pair inFigure 2.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"text": "Figure 5illustrates the shortest path between the pair in Figure 2, and its v-walks and e-walks extracted from the shortest path in the parse tree by KSDEP. A v-walk includes two lemmas and their link, while", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
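To make the v-walk/e-walk distinction concrete, here is a minimal sketch that enumerates both walk types over a shortest dependency path, encoded (as an assumption for illustration, not the paper's data structure) as an alternating lemma/link sequence.

def walks(path):
    # path alternates lemma, link, lemma, link, ..., lemma.
    # A v-walk is (lemma, link, lemma); an e-walk is (link, lemma, link).
    v_walks = [tuple(path[i:i + 3]) for i in range(0, len(path) - 2, 2)]
    e_walks = [tuple(path[i:i + 3]) for i in range(1, len(path) - 2, 2)]
    return v_walks, e_walks

# Toy path PROT --SBJ--> interact <--OBJ-- PROT:
v, e = walks(["PROT", "SBJ", "interact", "OBJ", "PROT"])
# v == [('PROT', 'SBJ', 'interact'), ('interact', 'OBJ', 'PROT')]
# e == [('SBJ', 'interact', 'OBJ')]

The subsets mentioned in the captions (e.g. a walk with its link or middle lemma dropped) could then be generated from each triple in the same pass.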
|
"TABREF1": { |
|
"text": "The sizes of used PPI corpora. A:AIMed, B:BioInfer, H:HPRD50, I:IEPA, and L:LLL.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">A</td><td/><td>B</td><td/><td>H</td><td/><td>I</td><td>L</td></tr><tr><td/><td colspan=\"10\">positive 1,000 2,534 163 335 164</td></tr><tr><td/><td>all</td><td/><td colspan=\"8\">5,834 9,653 433 817 330</td></tr><tr><td>1 0</td><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>9</td><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>8</td><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/><td>A B i I m e o I n f d e ( F r ( ) F )</td></tr><tr><td>7</td><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/><td>A B i I m e o I n f d e ( A U C ) r ( A U C )</td></tr><tr><td>6</td><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>2 0</td><td>4 % 0</td><td>e x a m</td><td>p l</td><td>6 e s 0</td><td>8</td><td>0</td><td>1</td><td>0 0</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Comparison of methods on multiple corpora. Our feature vector without graph features is used. The source corpora with the best F-scores are reported for aSVM, TrAdaBoost, and SVM-CW.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">F-score</td><td/><td/><td/><td/><td>AUC</td><td/><td/></tr><tr><td/><td>A</td><td>B</td><td>H</td><td>I</td><td>L</td><td>all</td><td>A</td><td>B</td><td>H</td><td>I</td><td>L</td><td>all</td></tr><tr><td colspan=\"3\">A (64.2) 64.0</td><td>64.7</td><td>65.2</td><td colspan=\"4\">63.7 64.2 (89.1) 89.5</td><td>89.2</td><td>89.3</td><td colspan=\"2\">89.0 89.4</td></tr><tr><td colspan=\"4\">B 67.9 (67.6) 67.9</td><td>67.9</td><td colspan=\"5\">67.7 68.3 86.2 (86.1) 86.2</td><td>86.3</td><td colspan=\"2\">86.2 86.4</td></tr><tr><td colspan=\"2\">H 71.3</td><td colspan=\"3\">71.2 (69.7) 74.1</td><td colspan=\"3\">70.8 74.9 84.7</td><td colspan=\"3\">85.0 (82.8) 85.0</td><td colspan=\"2\">83.4 87.9</td></tr><tr><td>I</td><td>74.4</td><td>75.6</td><td colspan=\"5\">73.7 (74.4) 74.4 76.6 86.7</td><td>87.1</td><td colspan=\"4\">85.4 (85.6) 86.9 87.8</td></tr><tr><td>L</td><td>83.2</td><td>85.9</td><td>82.0</td><td colspan=\"4\">86.7 (80.5) 84.1 86.3</td><td>87.1</td><td>87.4</td><td colspan=\"3\">90.8 (86.0) 86.2</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |