{
"paper_id": "I11-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:32:16.570696Z"
},
"title": "Analyzing the Dynamics of Research by Extracting Key Aspects of Scientific Papers",
"authors": [
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "manning@cs.stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article's abstract. We combine this information with pre-calculated article-to-community assignments to study the influence of a community on others in terms of techniques borrowed and the 'maturing' of some communities to solve other problems. As a case study, we show how the computational linguistics community and its sub-fields have changed over the years with respect to their foci, methods used, and domain problems. For instance, we show that part-of-speech tagging and parsing have increasingly been adopted as tools for solving problems in other domains. We also observe that speech recognition and probability theory have had the most seminal influence.",
"pdf_parse": {
"paper_id": "I11-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article's abstract. We combine this information with pre-calculated article-to-community assignments to study the influence of a community on others in terms of techniques borrowed and the 'maturing' of some communities to solve other problems. As a case study, we show how the computational linguistics community and its sub-fields have changed over the years with respect to their foci, methods used, and domain problems. For instance, we show that part-of-speech tagging and parsing have increasingly been adopted as tools for solving problems in other domains. We also observe that speech recognition and probability theory have had the most seminal influence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The evolution of ideas and the dynamics of a research community can be studied using the scientific articles published by the community. For instance, we may be interested in how methods spread from one community to another, or the evolution of a topic from a focus of research to a problem-solving tool. We might want to find the balance between technique-driven and domaindriven research within a field. Establishing such a rich insight of the development and progress of scientific research requires an understanding of more than just the \"topics\" of discussion or citation links between articles, which have been used in the previous work to study trend and impact of articles. As an example, to determine whether technique-driven researchers have greater or lesser impact, we need to be able to identify styles of work. To achieve this level of detail and to be able to connect together how methods and ideas are being pursued, it is essential to move beyond bagof-words topical models. This requires an understanding of sentence and argument structure, and is therefore a form of information extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To study the application domains, the techniques used to approach the domain problems, and the focus of scientific articles in a community, we propose to extract the following concepts from the articles FOCUS: an article's main contribution TECHNIQUE: a method or a tool used in an article, for example, expectation maximization and conditional random fields DOMAIN: an article's application domain, such as speech recognition and classification of documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For example, if an article concentrates on regularization in support vector machines and shows improvement in parsing accuracy, then its FOCUS and TECHNIQUE are regularization and support vector machines, and its DOMAIN is parsing. In contrast, an article that focuses on lexical features to improve parsing accuracy and uses support vector machines to train the model has FOCUS as lexical features and parsing, the TECHNIQUE being lexical features and support vector machines, and its DOMAIN still is parsing. 1 In this case, even though TECHNIQUEs and DOMAIN of both papers are very similar, the FOCUS phrases distinguish them from each other. Note that a DOMAIN of one article can be a TECHNIQUE of another, and viceversa. For example, an article that shows improvements in named entity recognition (NER) has DO-MAIN as NER, however, an article that uses named entities as an intermediary tool to extract relations has NER as one of its TECHNIQUEs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work uses information extraction patterns to extract the above three category phrases from articles. The phrases are extracted by matching semantic patterns in dependency trees of sentences. The input to the extraction system are some seed patterns (see Table 1 for examples) and it learns more patterns using a bootstrapping approach. Using a bag-of-words based approach, such as topic models, for this problem is not straightforward; true to their name, topic models generally only identify the topic or area of a paper (such as 'parsing' or 'speech recognition'), and neither provide nor label different cross-cutting aspects like techniques used or the application domain of the paper.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a case study, we examine the articles published in the computational linguistics community. We study the influence of the community's sub-fields, such as parsing and machine translation, using the FOCUS, TECHNIQUE, and DO-MAIN phrases extracted from the articles. We use the document collection from the ACL Anthology dataset 2 (Bird et al., 2008; Radev et al., 2009) , since it has full text of papers available. To get the the sub-fields of the community, we use latent Dirichlet allocation (Blei et al., 2003) to find topics and label them by hand. 3 However, our general approach can be used to study any case of the influence of academic communities, including looking more broadly at the influence of statistics or economics across the social sciences.",
"cite_spans": [
{
"start": 331,
"end": 350,
"text": "(Bird et al., 2008;",
"ref_id": "BIBREF1"
},
{
"start": 351,
"end": 370,
"text": "Radev et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 496,
"end": 515,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We study how communities influence each other in terms of techniques that are reused, and show how some communities 'mature' so that the results they produce get adopted as tools for solving other problems. For example, the products of the part-of-speech tagging (POS) community have been adopted by many other communities that use POS tagging as an intermediary step, which is also confirmed in our results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also show the timeline of influence of communities. For example, our results show that formal computational semantics and unificationbased grammars had a lot of influence in the late 1980s. The speech recognition and probability theory fields showed an upward trend of influence in the mid-1990s, and even though it has decreased in recent years, they still have a lot of influence on recent papers mainly due to techniques like expectation maximization and hidden Markov models. Therefore, our results show that overall they have been the most influential fields in the last two decades. Probability theory, unlike speech recognition, is traditionally not a separate sub-field of computational linguistics, but it is an important topic since many papers use and work on probabilistic approaches. We also show that the study of influence is different from studying popularity or hotness of communities, such as in (Griffiths and Steyvers, 2004; Hall et al., 2008) , which is based on the expected number of papers published in the community in a given year.",
"cite_spans": [
{
"start": 917,
"end": 947,
"text": "(Griffiths and Steyvers, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 948,
"end": 966,
"text": "Hall et al., 2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions We introduce a new categorization of key aspects of scientific articles, which is (1) FOCUS: main contribution, (2) TECHNIQUE: method or tool used, and (3) DOMAIN: application domain. We extract the aspects by matching semantic patterns to dependency trees and learn the patterns using bootstrapping. We propose a new definition of influence of a research community in terms of its key aspects adopted as techniques by the other communities. We present a case study on the computational linguistics community using the the three aspects extracted from its articles, both for verifying the results of our system, and for showing novel results for the dynamics and the overall influence of computational linguistics subfields. We introduce a dataset of abstracts labeled with the three categories. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While there is some connection to keyphrase selection in text summarization (Radev et al., 2002) , extracting FOCUS, TECHNIQUE and DO-MAIN phrases is fundamentally a form of information extraction, and there has been a wide variety of prior work in this area. Some work, including the seminal (Hearst, 1992) , identified patterns (IS-A relations) using hand-written rules, while other work has learned patterns over dependency graphs (Bunescu and Mooney, 2005) . This work builds on previous successful use of bootstrapping learning techniques in NLP (Yarowsky, 1995; Collins and Singer, 1999; Riloff and Jones, 1999) ; in its use of dependency patterns it is perhaps especially close to (Yangarber et al., 2000) .",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "(Radev et al., 2002)",
"ref_id": "BIBREF12"
},
{
"start": 293,
"end": 307,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF9"
},
{
"start": 434,
"end": 460,
"text": "(Bunescu and Mooney, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 551,
"end": 567,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF19"
},
{
"start": 568,
"end": 593,
"text": "Collins and Singer, 1999;",
"ref_id": "BIBREF4"
},
{
"start": 594,
"end": 617,
"text": "Riloff and Jones, 1999)",
"ref_id": "BIBREF14"
},
{
"start": 688,
"end": 712,
"text": "(Yangarber et al., 2000)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Topic models have been used to study popularity of communities (Griffiths and Steyvers, 2004) , the history of ideas (Hall et al., 2008) , and scholarly impact of papers (Gerrish and Blei, 2010). However, topic models do not extract detailed information from text as we do. Still, we use topicto-word distributions from topic models as a way of describing sub-fields.",
"cite_spans": [
{
"start": 63,
"end": 93,
"text": "(Griffiths and Steyvers, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 117,
"end": 136,
"text": "(Hall et al., 2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Demner-Fushman and Lin (2007) used hand written knowledge extractors to extract information, such as population and intervention, in their clinical question-answering system to improve ranking of relevant abstracts. Our categorization of key aspects is applicable for broader range of communities, and we learn the patterns by bootstrapping. Li et al. (2010) used semantic metadata to create a semantic digital library for chemistry and identified experimental paragraphs using keywords features. Xu et al. (2006) and Ruch et al. (2007) proposed systems, in clinical-trials and biomedical domain, respectively, to classify sentences of abstracts corresponding to categories such as introduction, purpose, method, results and conclusion to improve article retrieval by using either structured abstracts, 5 or hand-labeled sentences. Some summarization systems also use machine learning approaches to find 'key sentences'. The systems built in these papers are complimentary to ours since one can find relevant paragraphs or sentences and then extract the key aspects from them. Note that a sentence can have multiple phrases corresponding to our three categories, and thus classification of sentences will not be enough.",
"cite_spans": [
{
"start": 342,
"end": 358,
"text": "Li et al. (2010)",
"ref_id": "BIBREF10"
},
{
"start": 497,
"end": 513,
"text": "Xu et al. (2006)",
"ref_id": "BIBREF17"
},
{
"start": 518,
"end": 536,
"text": "Ruch et al. (2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we explain how to extract phrases for each of the three categories (FOCUS, TECH-NIQUE and DOMAIN) and how to compute the influence of communities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "From an article's abstract and title, we use the dependency trees of sentences and a set of semantic extraction patterns to extract phrases in each of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Matching and Learning",
"sec_num": "3.1"
},
{
"text": "FOCUS present \u2192 (direct object) work \u2192 (preposition on) propose \u2192 (direct object) TECHNIQUE using \u2192 (direct object) apply \u2192 (direct object) extend \u2192 (direct object) DOMAIN system \u2192 (preposition for)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Matching and Learning",
"sec_num": "3.1"
},
{
"text": "task \u2192 (preposition of) framework \u2192 (preposition for) Table 1 : Some examples of semantic extraction patterns that extract information from dependency trees of sentences. A pattern is of the form T \u2192 (d), where T is the trigger word and d is the dependency that the trigger word's node has with its successor. Figure 1 : The dependency graph for 'We work on extracting information using dependency graphs'. Our semantic patterns (shown in Table 1 ) will extract 'extracting information using dependency graphs' as FOCUS, and 'dependency graphs' as TECHNIQUE.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 1",
"ref_id": null
},
{
"start": 310,
"end": 318,
"text": "Figure 1",
"ref_id": null
},
{
"start": 439,
"end": 446,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern Matching and Learning",
"sec_num": "3.1"
},
{
"text": "FOCUS, TECHNIQUE and DOMAIN categories. A dependency tree of a sentence is a parse tree that gives dependencies (such as direct-object, subject) between words in the sentence. Figure 1 shows the dependency graph for the sentence 'We work on extracting information using dependency graphs.' Each semantic pattern is of the form T \u2192 d, where T is a trigger word (such as 'use', 'present') and d is a dependency (such as 'direct-object'). We start with a few handwritten patterns (some shown in Table 1 ) and learn more patterns automatically using a bootstrapping approach. We run an iterative algorithm that extracts phrases using semantic patterns and then learns new patterns from the extracted phrases. The details of each step are described below.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 1",
"ref_id": null
},
{
"start": 492,
"end": 499,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern Matching and Learning",
"sec_num": "3.1"
},
{
"text": "Extracting Phrases from Patterns A dependency tree matches a pattern T \u2192 (d), if (1) it contains T , and (2) the trigger word's node has a successor (dependent or granddependent upto 4 levels) whose dependency with its parent is d. In the rest of the paper, we call the subtree headed by the successor as the matched phrasetree. We extract the phrase corresponding to the matched phrase-tree and label it with the pattern's category. For example, the dependency tree in Figure 1 matches the FOCUS pattern [work \u2192 (preposition on)] and the TECHNIQUE pattern [using \u2192 (direct-object)]. Thus, the system labels the phrase corresponding to the phrase-tree headed by 'extracting', which is 'extracting information using dependency graphs', with the category FO-CUS, and similarly labels the phrase 'dependency graphs' as TECHNIQUE.",
"cite_spans": [],
"ref_spans": [
{
"start": 470,
"end": 478,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern Matching and Learning",
"sec_num": "3.1"
},
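A minimal sketch of this matching step, under illustrative assumptions: the `Node` class below is a stand-in for a parsed dependency tree (the paper uses Stanford-parser trees), and all names are ours, not the authors' code.

```python
# Sketch of matching a semantic pattern T -> (d) against a dependency tree.
class Node:
    def __init__(self, idx, word, dep=None, children=None):
        self.idx = idx                  # token position, used to linearize phrases
        self.word = word
        self.dep = dep                  # dependency label linking this node to its parent
        self.children = children or []

    def descendants(self):
        out, stack = [], list(self.children)
        while stack:
            n = stack.pop()
            out.append(n)
            stack.extend(n.children)
        return out

    def phrase(self):
        nodes = [self] + self.descendants()
        return " ".join(n.word for n in sorted(nodes, key=lambda n: n.idx))

def match_pattern(root, trigger, dep, max_depth=4):
    """Return phrase-trees for pattern trigger -> (dep): subtrees headed by a
    successor of the trigger node (up to max_depth levels down) whose
    dependency with its parent is `dep`."""
    matches = []

    def collect(node, depth):
        if depth > max_depth:
            return
        for child in node.children:
            if child.dep == dep:
                matches.append(child)
            collect(child, depth + 1)

    for node in [root] + root.descendants():
        if node.word == trigger:
            collect(node, 1)
    return matches

# 'We work on extracting information using dependency graphs.'
graphs = Node(7, "graphs", "direct-object", [Node(6, "dependency", "nn")])
extracting = Node(3, "extracting", "preposition on",
                  [Node(4, "information", "direct-object"),
                   Node(5, "using", "partmod", [graphs])])
root = Node(1, "work", None, [Node(0, "We", "subject"), extracting])

for m in match_pattern(root, "work", "preposition on"):    # FOCUS pattern
    print("FOCUS:", m.phrase())       # extracting information using dependency graphs
for m in match_pattern(root, "using", "direct-object"):    # TECHNIQUE pattern
    print("TECHNIQUE:", m.phrase())   # dependency graphs
```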
{
"text": "We have special rules for paper titles since authors usually include the main contribution of the paper in the title. We label the whole title as FO-CUS if we are not able to extract a FOCUS phrase using the patterns. For titles from which we can extract a TECHNIQUE phrase, we label rest of the words (except for the trigger words) with DO-MAIN. For example, for title 'Studying the history of ideas using topic models', our system extracts 'topic models' as TECHNIQUE using the pattern [using \u2192 (direct-object)], and then labels 'Studying the history of ideas' as DOMAIN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Matching and Learning",
"sec_num": "3.1"
},
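A small illustrative sketch of these title heuristics (the function and variable names are hypothetical, not the paper's):

```python
# Sketch of the title rules: fall back to labeling the whole title as FOCUS,
# and label the non-trigger remainder as DOMAIN when a TECHNIQUE phrase is found.
def label_title(title, focus_phrases, technique_phrases, trigger_words):
    labels = {"FOCUS": focus_phrases or [title]}   # whole title if no FOCUS phrase
    if technique_phrases:
        drop = {w for p in technique_phrases for w in p.split()} | set(trigger_words)
        remainder = " ".join(w for w in title.split() if w not in drop)
        labels["DOMAIN"] = [remainder]
    return labels

print(label_title("Studying the history of ideas using topic models",
                  [], ["topic models"], ["using"]))
# {'FOCUS': ['Studying the history of ideas using topic models'],
#  'DOMAIN': ['Studying the history of ideas']}
```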
{
"text": "Learning Patterns from Phrases After extracting phrases with patterns, we want to be able to construct and learn new patterns. For each sentence whose dependency tree has a subtree corresponding to one of the extracted phrases, we construct a pattern T \u2192 (d) by considering the ancestor (parent or grandparent) of the subtree as the trigger word T , and the dependency between the head of the subtree and its parent as the dependency d. The weighting of newly constructed patterns is done as follows. For a set of phrases (P ) that extract a pattern (q), the weight of the pattern q for the category FOCUS is p\u2208P 1 zp count(p \u2208 FOCUS), where z p is the total frequency of the phrase p. Similarly, we get weights of the pattern for the other two categories. Note that we do not need smoothing since the phrase-category ratios are aggregated over all the phrases from which the pattern is constructed. After weighting all the patterns that have not been selected in the previous iterations, we select the top k patterns in each category (k=2 in our experiments). Table 3 shows some patterns learned through the iterative method.",
"cite_spans": [],
"ref_spans": [
{
"start": 1061,
"end": 1068,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern Matching and Learning",
"sec_num": "3.1"
},
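A sketch of this weighting step under the stated definitions; the dictionary-based data structures are illustrative assumptions, not the authors' interfaces:

```python
# Sketch of pattern weighting: for a candidate pattern q constructed from
# phrases P, weight(q, cat) = sum over p in P of count(p in cat) / z_p,
# then keep the top k patterns per category.
from collections import defaultdict

CATEGORIES = ("FOCUS", "TECHNIQUE", "DOMAIN")

def select_patterns(candidates, category_counts, totals, k=2):
    """candidates: {pattern: phrases the pattern was constructed from}
    category_counts: {(phrase, category): frequency of phrase in that category}
    totals: {phrase: z_p, the total frequency of the phrase}"""
    weights = {cat: defaultdict(float) for cat in CATEGORIES}
    for pattern, phrases in candidates.items():
        for p in phrases:
            for cat in CATEGORIES:
                # aggregated phrase-category ratios; no smoothing needed
                weights[cat][pattern] += category_counts.get((p, cat), 0) / totals[p]
    return {cat: sorted(w, key=w.get, reverse=True)[:k]
            for cat, w in weights.items()}
```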
{
"text": "We define communities as fields or sub-fields that one wishes to study. To study communities using the articles published, we need to know which communities each article belongs to. The articleto-community assignment can be computed in several ways, such as by manual assignment, using metadata, or by text categorization of papers. In our case study, we use the topics formed by applying latent Dirichlet allocation (Blei et al., 2003) to the text of the papers by considering each topic as one community. In recent years, topic modeling has been widely used to get 'concepts' from text; it has the advantage of producing soft, probabilistic article-to-community assignment scores in an unsupervised manner. We combine these soft assignment scores with the phrases extracted in the previous section to score a phrase for each community and each category as follows. The score of a phrase p, which is extracted from an article a, for a community c and the category TECHNIQUE is calculated as",
"cite_spans": [
{
"start": 417,
"end": 436,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
{
"text": "tScore(c, p, a) = (1) 1 zp count(p \u2208 TECHNIQUE | a)P (c | a, \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
{
"text": "where the function P (c | a, \u03b8) gives the probability of a community (i.e., a topic) for the article a given the topic modeling parameters \u03b8. The normalization constant for the phrase, z p , is the frequency of the phrase in all the abstracts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
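Eq. 1 translates directly into code. A sketch, with illustrative data structures (`counts`, `totals`, and `topic_prob` are assumptions, not the authors' interfaces):

```python
# Sketch of Eq. 1: tScore(c, p, a) = (1/z_p) * count(p in TECHNIQUE | a) * P(c | a, theta)
def t_score(community, phrase, article, counts, totals, topic_prob):
    """counts[(phrase, 'TECHNIQUE', article)]: occurrences of the phrase
    extracted as TECHNIQUE from the article; totals[phrase]: z_p, the
    frequency of the phrase over all abstracts; topic_prob(c, a): the
    LDA posterior P(c | a, theta), zeroed below 0.1 in the paper's setup."""
    z_p = totals[phrase]
    return counts.get((phrase, "TECHNIQUE", article), 0) / z_p * topic_prob(community, article)
```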
{
"text": "We define influence such that communities receive higher scores if they use techniques earlier than other communities do or produce tools that are used to solve other problems. For example, since hidden Markov model introduced by the speech recognition community and part-of-speech tagging tools built by the part-of-speech community have been widely used as techniques in other communities, these communities should receive higher scores than the nascent or not-so-widelyused ones. Thus, we define influence of a community based on the number of times its FOCUS, TECHNIQUE or DOMAIN phrases have been used as a TECHNIQUE in other communities. To calculate the overall influence of one community on another, we first need to calculate influence because of individual articles in the community, which is calculated as follows. The influence of community c 1 on another community c 2 because of a phrase p extracted from an article a 1 is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
{
"text": "tInf l(c1, c2, p, a1) = (2) allScore(c1, p, a1) a 2 \u2208D ya 2 >ya 1 tScore(c2, p, a2)C(a2, a1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
{
"text": "where the function allScore(c, p, a) is computed the same way as in Eq. 1, but by using count(p \u2208 ALL | a), where ALL means the union of phrases extracted in all three categories. The variable D is the set of all articles, and y a 2 means year of publication of the article a 2 . The summation term computes the influence of the phrase p extracted from the article a 1 on all the articles from the community c 2 published at a later date. The function C(a 2 , a 1 ) is a weighting function based on citations, whose value is 1 if a 2 cites a 1 , and \u03bb otherwise. If \u03bb is 0, the system calculates influence based on just citations, which can be noisy and incomplete. In our experiments, we used \u03bb as 0.5 since we want to study the influence even when an article does not explicitly cite another article. The technique-influence score of community c 1 on community c 2 in year y is computed by summing up the previous equation for all phrases (P ) and for all articles in D. It is computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
{
"text": "tInf l(c1, c2, y) = p\u2208P a\u2208D ya 1 =y tInf l(c1, c2, p, a) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
{
"text": "Straightforwardly, the overall influence of community c 1 on the community c 2 and on all other communities is calculated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
{
"text": "tInf l(c1, c2) = y tInf l(c1, c2, y) (4) tInf l(c1) = c 2 =c 1 tInf l(c1, c2) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
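A sketch of Eqs. 2-5 with the same illustrative interfaces as the tScore sketch above; `all_score` mirrors `t_score` but counts a phrase in any of the three categories:

```python
# Sketch of Eqs. 2-5. `D` is the article set, `year[a]` the publication year,
# `cites` the set of (citing, cited) pairs, and lam the weight for non-citing pairs.
def t_infl_phrase(c1, c2, p, a1, D, year, cites, t_score, all_score, lam=0.5):
    """Eq. 2: influence of c1 on c2 via phrase p from article a1."""
    later = (a2 for a2 in D if year[a2] > year[a1])
    return all_score(c1, p, a1) * sum(
        t_score(c2, p, a2) * (1.0 if (a2, a1) in cites else lam) for a2 in later)

def t_infl_year(c1, c2, y, P, D, year, **kw):
    """Eq. 3: sum Eq. 2 over all phrases and all articles published in year y."""
    return sum(t_infl_phrase(c1, c2, p, a, D, year, **kw)
               for p in P for a in D if year[a] == y)

def t_infl_pair(c1, c2, years, **kw):
    """Eq. 4: total influence of c1 on c2 over all years."""
    return sum(t_infl_year(c1, c2, y, **kw) for y in years)

def t_infl_overall(c1, communities, years, **kw):
    """Eq. 5: overall influence of c1 on all other communities."""
    return sum(t_infl_pair(c1, c2, years, **kw) for c2 in communities if c2 != c1)
```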
{
"text": "Next, we present a case study over the sub-fields of computational linguistics using the influence scores described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communities and their Influence",
"sec_num": "3.2"
},
{
"text": "Dataset We studied the computational linguistics community from 1965 to 2009 using titles and abstracts of 15,016 articles from the ACL Anthology Network and the ACL Anthology Reference corpus (Bird et al., 2008; Radev et al., 2009) . We found 52 pairs of abstracts that had more than 80% of words in common with each other, and thus while calculating the influence scores, we ignored the influence of earlier-published paper on the later-published paper in the pairs. We used the Stanford Parser (Marneffe et al., 2006) to generate dependency trees of sentences. For testing, we hand labeled 474 abstracts with the three categories to measure the precision and recall scores. For each abstract and each category, we compared the unique non-stop-words extracted from our algorithm to the hand labeled dataset. We calculated precision, recall measures for each abstract and averaged them to get the results for the dataset.",
"cite_spans": [
{
"start": 193,
"end": 212,
"text": "(Bird et al., 2008;",
"ref_id": "BIBREF1"
},
{
"start": 213,
"end": 232,
"text": "Radev et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 497,
"end": 520,
"text": "(Marneffe et al., 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "When extracting phrases from the matched phrase trees, we ignored tokens with part-ofspeech tags as pronoun, number, determiner, punctuation or symbol, and removed all subtrees in the matched phrase trees that had either relativeclause-modifier or clausal-complement dependency with their parents since, even though we want full phrases, including these sub-trees introduced extraneous phrases and clauses. We also added phrases from the subtrees of the matched phrase trees to the set of extracted phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We used 13 seed patterns for FOCUS, 7 for TECHNIQUE and 15 for DOMAIN. When constructing a new pattern, we ignored the ancestors that were not a noun or a verb since most trigger words are a noun or a verb (such as use, constraints). We also ignored conjunction, relativeclause-modifier, dependent (most generic dependency), quantifier-modifier, and abbreviation dependencies 6 since they either are too generic or introduced extraneous phrases and clauses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Learning new patterns did not help in improving the FOCUS category phrases when tested over a hand labeled test set. It got relatively high scores when using just the seed patterns and the titles, and hence learning new patterns reduced the precision without any significant improvement in recall. Thus, we learned new patterns only for the TECHNIQUE and DOMAIN categories. We ran 50 iterations for both categories, which was chosen as a reasonable trade-off between pattern precision and recall based on some earlier pilot experiments. After extracting all the phrases, we removed common phrases that are frequently used in scientific articles, such as 'this technique' and 'the presence of', using a stop words list of 3,000 phrases. The list was created by taking the top most occurring 1 to 3 grams from 100,000 random articles with an abstract in the ISI web of knowledge database 7 . We ignored phrases that were either one character or more than 15 words long. In a step towards finding canonical names, we automatically detected abbreviations and their expanded forms from the full text of papers by searching for text between two parentheses, and considered the phrase before the parentheses as the expanded form (similar to (Schwartz and Hearst, 2003) ). We got a high precision list by picking the top most occurring pairs of abbreviations and their expanded forms and created groups of phrases by merging all the phrases that use same abbreviation. We then changed all the phrases in the extracted phrases dataset to their canonical names. for contextsensitive spelling correction; spelling Table 2 : Extracted phrases for some papers. The word 'model' is missing from the end of some phrases as it was removed during post-processing.",
"cite_spans": [
{
"start": 1234,
"end": 1261,
"text": "(Schwartz and Hearst, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1603,
"end": 1610,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
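An illustrative sketch of the abbreviation step in the spirit of Schwartz and Hearst (2003); the regular expression and the 'one preceding word per abbreviation letter' heuristic are simplifications, not the authors' exact rules:

```python
# Sketch of abbreviation canonicalization: pair parenthesized short forms with
# the phrase just before them, then keep only frequent, high-precision pairs.
import re
from collections import Counter

def abbreviation_candidates(text):
    pairs = []
    for m in re.finditer(r"\(([A-Za-z][A-Za-z-]{1,9})\)", text):
        abbr = m.group(1)
        before = text[:m.start()].split()
        expansion = " ".join(before[-len(abbr):])   # one word per abbreviation letter
        if expansion:
            pairs.append((abbr.lower(), expansion.lower()))
    return pairs

counts = Counter()
for doc in ("improvements in named entity recognition (NER) are shown",
            "we use named entity recognition (NER) as a tool"):
    counts.update(abbreviation_candidates(doc))

# keep the most frequent pairs; every phrase using the same abbreviation
# (or its expansion) can then be mapped to one canonical name
canonical = {abbr: exp for (abbr, exp), n in counts.most_common() if n >= 2}
print(canonical)   # {'ner': 'named entity recognition'}
```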
{
"text": "We also removed 'model ', 'approach', 'method', 'algorithm', 'based', ' style' words and their variants when they occurred at the end of a phrase.",
"cite_spans": [
{
"start": 23,
"end": 71,
"text": "', 'approach', 'method', 'algorithm', 'based', '",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Baseline To compare against a noninformation-extraction based baseline, we extracted all noun phrases, along with phrases from the sub-trees of the noun phrase trees, from the abstracts and labeled them with all the three categories. In addition, we labeled the titles (and their sub-trees) with the category FOCUS. We then scored the phrases with a tf-idf inspired measure, which was the ratio of the frequency of the phrase in the abstract and the sum of the total frequency of the individual words, and removed phrases that had the tf-idf measure less than 0.001 (best out of many experiments). We call this approach as 'Baseline tf-idf NPs'. 8 To get communities in the computational linguistics literature, we considered the topics generated using the same ACL Anthology dataset by Bethard and Jurafsky (2010) as communities. They ran latent Dirichlet allocation on the full text of the papers to get 100 topics. We hand labeled the topics and used 72 of them in our study; the rest of them were about common words. When calculating the scores in Eq. 1, we considered the value of P (c | a, \u03b8) to be 0 if it was less than 0.1. Table 4 : The precision, recall, and F1 scores of each category for the different approaches. Note that the inter-annotator agreement is calculated on a smaller set.",
"cite_spans": [
{
"start": 646,
"end": 647,
"text": "8",
"ref_id": null
},
{
"start": 787,
"end": 814,
"text": "Bethard and Jurafsky (2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1132,
"end": 1139,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
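A sketch of the baseline's scoring rule as described; the names and data structures are illustrative:

```python
# Sketch of the 'Baseline tf-idf NPs' score: frequency of the phrase in the
# abstract divided by the summed corpus frequency of its individual words.
def baseline_score(phrase, abstract_counts, corpus_word_counts):
    tf = abstract_counts.get(phrase, 0)
    denom = sum(corpus_word_counts.get(w, 1) for w in phrase.split())
    return tf / denom

def keep_noun_phrases(phrases, abstract_counts, corpus_word_counts, threshold=0.001):
    # 0.001 was the best-performing cutoff in the paper's experiments
    return [p for p in phrases
            if baseline_score(p, abstract_counts, corpus_word_counts) >= threshold]
```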
{
"text": "The F1 scores for TECHNIQUE and DOMAIN categories after every five iterations. For reasons explained in the text, we do not learn new patterns for FOCUS. Table 4 compares precision, recall, and microaveraged F 1 scores for the three categories when we use: (1) only the seed patterns, (2) the combined set of learned and seed patterns, (3) the baseline, and (4) the inter-annotator agreement. We calculated inter-annotator agreement for 30 abstracts, where each abstract was labeled by 2 annotators, 9 and the precision-recall scores were calculated by randomly choosing one annotation as gold and another as predicted for each article. We The second figure shows the popularity of each community in each year (see (Hall et al., 2008) ), which is measured by summing up the article-to-topic scores for the articles published in that year. The scores are smoothed with weighted scores of 2 previous and 2 next years, and L1-normalized for each year. The scores are lower for all communities in late 2000s since the probability mass is more evenly distributed among many communities. can see in the table that both precision and recall scores increase for TECHNIQUE because of the learned patterns, though for DOMAIN, precision decreases but recall increases. The recall scores for the baseline are higher as expected but the precision is very low. Three possible reasons explain the mistakes made by our system: (1) authors sometimes use generic phrases to describe their system, which were not annotated with any of the three categories in the test set but were extracted by the system (such as 'simple method', 'faster model', 'new approach'); (2) the dependency trees of some sentences were wrong; and (3) some of the patterns learned for TECHNIQUE and DOMAIN were low-precision but high-recall. Figure 2 shows the F 1 scores for TECHNIQUE and DOMAIN after every 5 iterations. Table 5 shows the most influential communities overall (computed using Eq. 5) and their respective influential phrases that have been widely adopted as techniques by other communities. We can see that speech recognition is the most influential community because of the techniques like hidden Markov models and other stochastic methods it introduced in the computational linguistics literature, which shows that its long-term seeding influence is still present despite the limited recent Figure 3 . The statistical machine translation community, which is a topic from the topic model, is more phrase-based.",
"cite_spans": [
{
"start": 715,
"end": 734,
"text": "(Hall et al., 2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 4",
"ref_id": null
},
{
"start": 1798,
"end": 1806,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1879,
"end": 1886,
"text": "Table 5",
"ref_id": null
},
{
"start": 2366,
"end": 2374,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Figure 2:",
"sec_num": null
},
{
"text": "popularity. Probability theory also gets a high score since many papers in the last decade have used stochastic methods. The communities partof-speech tagging and parsing get high scores because they adopted some techniques that are used in other communities, and because other communities use part-of-speech tagging and parsing in the intermediary steps for solving other problems. Figure 3(a) shows the change in a community's influence over time, and Figure 3(b) shows the change in its popularity. The popularity of a community is the sum of article-to-topic scores for the community topic and for all articles published in a given year. 10 The scores in both figures are normalized such that the total score for all communities in a year sum to one. Compare the relative scores of communities in Figure 3(a) with the relative scores in Figure 3(b) . We can see influence of a community is different from the popularity of a community in a given year. As mentioned before, we observe that although influence score for speech recognition has declined in recent years, it still has a lot of influence, though the popularity of the community in recent years is very low. Machine learning classification has been both popular and influential in recent years. Named entity recognition's popularity has decreased since 2003, though its influence has either increased or remained same. Figure 4 compares the machine translation communities in the same way as we compare other communities in Figure 3 . We can see that statistical machine translation (more phrase-based) community's popularity has steeply increased in the last 5 years, however, its influ-Community Most Influential Phrases Score Speech Recognition (recognition, acoustic, error, speaker, rate, adaptation, recognizer, vocabulary, phone) expectation maximization; hidden markov; language; contextually; segment; context independent phone; snn hidden markov; n gram back off language; multiple reference speakers; cepstral; phoneme; least squares; speech recognition; intra; hi gram; bu; word dependent; tree structured; statistical decision trees 1.35",
"cite_spans": [
{
"start": 642,
"end": 644,
"text": "10",
"ref_id": null
},
{
"start": 1712,
"end": 1800,
"text": "(recognition, acoustic, error, speaker, rate, adaptation, recognizer, vocabulary, phone)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 383,
"end": 394,
"text": "Figure 3(a)",
"ref_id": "FIGREF1"
},
{
"start": 454,
"end": 465,
"text": "Figure 3(b)",
"ref_id": "FIGREF1"
},
{
"start": 801,
"end": 812,
"text": "Figure 3(a)",
"ref_id": "FIGREF1"
},
{
"start": 841,
"end": 852,
"text": "Figure 3(b)",
"ref_id": "FIGREF1"
},
{
"start": 1383,
"end": 1391,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 1488,
"end": 1496,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Influence",
"sec_num": null
},
{
"text": "Probability Theory (probability, probabilities, distribution, probabilistic, estimation, estimate, entropy, statistical, likelihood, parameters) hidden markov; maximum entropy; language; expectation maximization; merging; expectation maximization hidden markov; natural language; variable memory markov; standard hidden markov; part of speech; inside outside; segmentation only; minimum description length principle; continuous density hidden markov; part of speech information; forward backward 1.31",
"cite_spans": [
{
"start": 19,
"end": 144,
"text": "(probability, probabilities, distribution, probabilistic, estimation, estimate, entropy, statistical, likelihood, parameters)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Influence",
"sec_num": null
},
{
"text": "Bilingual Word Alignment (alignment, alignments, aligned, pairs, align, pair, statistical, parallel, source, target, links, brown, ibm, null) hidden markov; expectation maximization; maximum entropy; spectral clustering; statistical alignment; conditional random fields , a discriminative; statistical word alignment; string to tree; state of the art statistical machine translation system; single word; synchronous context free grammar; inversion transduction grammar; ensemble; novel reordering 1.2 POS Tagging (tag, tagging, pos, tags, tagger, part-ofspeech, tagged, unknown, accuracy, part, taggers, brill, corpora, tagset) maximum entropy; machine learning; expectation maximization hidden markov; part of speech information; decision tree; hidden markov; transformation based error driven learning; entropy; part of speech tagging; part of speech; variable memory markov; viterbi; second stage classifiers; document; wide coverage lexicon; using inductive logic programming",
"cite_spans": [
{
"start": 25,
"end": 141,
"text": "(alignment, alignments, aligned, pairs, align, pair, statistical, parallel, source, target, links, brown, ibm, null)",
"ref_id": null
},
{
"start": 513,
"end": 627,
"text": "(tag, tagging, pos, tags, tagger, part-ofspeech, tagged, unknown, accuracy, part, taggers, brill, corpora, tagset)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Influence",
"sec_num": null
},
{
"text": "Machine Learning Classification (classification, classifier, examples, classifiers, kernel, class, svm, accuracy, decision, methods, labeled, vector, instances) support vector machines; ensemble; machine learning; gaussian mixture; expectation maximization; flat; weak classifiers; statistical machine learning; lexicalized tree adjoining grammar based features; natural language processing; standard text categorization collection; pca; semisupervised learning; standard hidden markov; supervised learning 1.12 Statistical Parsing (parse, treebank, trees, parses, penn, collins, parsers, charniak, accuracy, wsj, head, statistical, constituent, constituents) propbank; expectation maximization; supervised machine learning; maximumentropy classifier; ensemble; lexicalized tree adjoining grammar based features; neural network; generative probability; incomplete constituents; part of speech tagging; treebank; penn; 50 best parses; lexical functional grammar; maximum entropy; full comlex resource 0.92 Statistical Machine Translation (More-Phrase-Based) (bleu, statistical, source, target, phrases, smt, reordering, translations, phrase-based) maximum entropy; hidden markov; expectation maximization; language; linguistically structured; ihmm; cross language information retrieval; ter; factored language; billion word; hierarchical phrases; string to tree; state of the art statistical machine translation system; statistical alignment; ist inversion transduction grammar; bleu as a metric; statistical machine translation 0.82 Table 5 : The top most influential communities, along with the top most words that describe the communities obtained by the topic model, and the corresponding most influential phrases that have been widely used as techniques. The third column is the score of the community computed by Eq. 5. Table 6 : The community in the first column has been influenced the most by the communities in the second column. The scores are calculated using Eq. 4 ence has increased at a slower rate. On the other hand, the influence of bilingual word alignment (the most influential community in 2009) has increased during the same period, mainly because of its influence on statistical machine translation. The influence of non-statistical machine translation has been decreasing recently, though slower than its popularity. Table 6 shows the communities that have the most influence on a given community (the list is in descending order of scores by Eq. 4).",
"cite_spans": [
{
"start": 32,
"end": 160,
"text": "(classification, classifier, examples, classifiers, kernel, class, svm, accuracy, decision, methods, labeled, vector, instances)",
"ref_id": null
},
{
"start": 532,
"end": 659,
"text": "(parse, treebank, trees, parses, penn, collins, parsers, charniak, accuracy, wsj, head, statistical, constituent, constituents)",
"ref_id": null
},
{
"start": 1057,
"end": 1146,
"text": "(bleu, statistical, source, target, phrases, smt, reordering, translations, phrase-based)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1533,
"end": 1540,
"text": "Table 5",
"ref_id": null
},
{
"start": 1825,
"end": 1832,
"text": "Table 6",
"ref_id": null
},
{
"start": 2340,
"end": 2347,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "1.13",
"sec_num": null
},
{
"text": "We are working towards incorporating the date of publication of the articles to learn better patterns to increase precision and recall of the system. We are also exploring ways to use our system for studying citation and co-authorship networks. We plan to study the dynamics and impact of broader communities like biology, statistics and the social sciences. The approach can also be used to study innovation in interdisciplinary research, since we can track if interdisciplinary research results in applying old techniques from one community to solve problems in other community, or if it results in the evolution of better suited techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "6"
},
{
"text": "This paper presents a framework for extracting detailed information from scientific articles, such as main contributions, tools and techniques used, and domain problems addressed, by matching semantic extraction patterns in dependency trees. We start with a few hand written seed patterns and learn new patterns using a bootstrapping approach. We use this rich information extracted from articles to study the dynamics of research communities and to define a new way of measuring influence of one research community on another. We present a case study on the computational linguistics community, where we find the influence of its sub-fields and observed that speech recognition and probability theory have had the most seminal influence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "A community vs. a DOMAIN: a community can be as broad as computer science or statistics, whereas a DOMAIN is a specific application such as Chinese word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.aclweb.org/anthology 3 In this paper, we use the terms communities, subcommunities and sub-fields interchangeably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset is available at http://cs.stanford. edu/people/sonal/fta for the research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Structure abstracts, which are used by some journals, have multiple sections such as PURPOSE and METHOD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "see(Marneffe et al., 2006) for details of dependencies 7 www.isiknowledge.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As discussed in Section 1, using an unsupervised or weakly-supervised bag-of-words based approach is not straightforward for identifying FOCUS, TECHNIQUE and DO-MAIN of an article, and hence we do not compare against one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The first author annotated 30 abstracts and two doctoral candidates in computational linguistics annotated 15 each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See(Hall et al., 2008) for more analysis. Note that this analysis uses just bag-of-words based topic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Who should I cite: learning literature search models from citation behavior",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard and Dan Jurafsky. 2010. Who should I cite: learning literature search models from cita- tion behavior. In Proceedings of the Conference on Information and Knowledge Management.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The ACL anthology reference corpus: A reference dataset for bibliographic research in computational linguistics",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"T"
],
"last": "Joseph",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Min Yen Kan",
"suffix": ""
},
{
"first": "Brett",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Powley",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dragomir",
"suffix": ""
},
{
"first": "Yee",
"middle": [
"Fan"
],
"last": "Radev",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Robert Dale, Bonnie J. Dorr, Bryan Gib- son, Mark T. Joseph, Min yen Kan, Dongwon Lee, Brett Powley, Dragomir R. Radev, and Yee Fan Tan. 2008. The ACL anthology reference corpus: A ref- erence dataset for bibliographic research in compu- tational linguistics. In Proceedings of the Confer- ence on Language Resources and Evaluation.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Ma- chine Learning Research.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A shortest path dependency kernel for relation extraction",
"authors": [
{
"first": "Razvan",
"middle": [
"C"
],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation ex- traction. In Proceedings of the Human Language Technology Conference and Conference on Empiri- cal Methods in Natural Language Processing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised models for named entity classification",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Yoram Singer. 1999. Unsuper- vised models for named entity classification. In Pro- ceedings of the Joint SIGDAT Conference on Empir- ical Methods in Natural Language Processing and Very Large Corpora.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Answering clinical questions with knowledge-based and statistical techniques",
"authors": [
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dina Demner-Fushman and Jimmy Lin. 2007. An- swering clinical questions with knowledge-based and statistical techniques. Computational Linguis- tics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A language-based approach to measuring scholarly impact",
"authors": [
{
"first": "Sean",
"middle": [
"M"
],
"last": "Gerrish",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean M. Gerrish and David M. Blei. 2010. A language-based approach to measuring scholarly impact. In Proceedings of the International Con- ference on Machine Learning.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Finding scientific topics",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. L. Griffiths and M. Steyvers. 2004. Finding scien- tific topics. Proceedings of the National Academy of Sciences.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Studying the history of ideas using topic models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hall, Daniel Jurafsky, and Christopher D. Man- ning. 2008. Studying the history of ideas using topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Conference on Computational linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the Conference on Computational linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "OreChem ChemXSeer: a semantic digital library for chemistry",
"authors": [
{
"first": "Na",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Leilei",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Poweleit",
"suffix": ""
},
{
"first": "C. Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Joint Conference on Digital libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Na Li, Leilei Zhu, Prasenjit Mitra, Karl Mueller, Eric Poweleit, and C. Lee Giles. 2010. OreChem ChemXSeer: a semantic digital library for chem- istry. In Proceedings of the Joint Conference on Digital libraries.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine De",
"middle": [],
"last": "Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Conference on Language Re- sources and Evaluation.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Introduction to the special issue on summarization",
"authors": [
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "McKeown",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragomir R. Radev, Eduard Hovy, and Kathleen McK- eown. 2002. Introduction to the special issue on summarization. Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The acl anthology network corpus",
"authors": [
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Muthukrishnan",
"suffix": ""
},
{
"first": "Vahed",
"middle": [],
"last": "Qazvinian",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragomir R. Radev, Pradeep Muthukrishnan, and Va- hed Qazvinian. 2009. The acl anthology network corpus. In Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Li- braries.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning dictionaries for information extraction by multi-level bootstrapping",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Rosie",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the National Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dic- tionaries for information extraction by multi-level bootstrapping. In Proceedings of the National Con- ference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using argumentation to extract key sentences from biomedical abstracts",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Ruch",
"suffix": ""
},
{
"first": "Clia",
"middle": [],
"last": "Boyer",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Chichester",
"suffix": ""
},
{
"first": "Imad",
"middle": [],
"last": "Tbahriti",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Geissbhler",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Fabry",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Gobeill",
"suffix": ""
},
{
"first": "Violaine",
"middle": [],
"last": "Pillet",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Rebholz-Schuhmann",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Lovis",
"suffix": ""
},
{
"first": "Anne-Lise",
"middle": [],
"last": "Veuthey",
"suffix": ""
}
],
"year": 2007,
"venue": "International Journal of Medical Informatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Ruch, Clia Boyer, Christine Chichester, Imad Tbahriti, Antoine Geissbhler, Paul Fabry, Julien Gobeill, Violaine Pillet, Dietrich Rebholz- Schuhmann, Christian Lovis, and Anne-Lise Veuthey. 2007. Using argumentation to extract key sentences from biomedical abstracts. International Journal of Medical Informatics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A simple algorithm for identifying abbreviation definitions in biomedical text",
"authors": [
{
"first": "Ariel",
"middle": [
"S"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ariel S. Schwartz and Marti A. Hearst. 2003. A simple algorithm for identifying abbreviation definitions in biomedical text.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Combining text classification and hidden markov modeling techniques for categorizing sentences in randomized clinical trial abstracts",
"authors": [
{
"first": "Rong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kaustubh",
"middle": [],
"last": "Supekar",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Amar",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Garber",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Association of Moving Image Archivists Annual Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong Xu, Kaustubh Supekar, Yang Huang, Amar Das, and Alan Garber. 2006. Combining text classifi- cation and hidden markov modeling techniques for categorizing sentences in randomized clinical trial abstracts. Proceedings of the Association of Moving Image Archivists Annual Symposium.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Unsupervised discovery of scenario-level patterns for information extraction",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Pasi",
"middle": [],
"last": "Tapanainen",
"suffix": ""
},
{
"first": "Silja",
"middle": [],
"last": "Huttunen",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Unsupervised discovery of scenario-level patterns for information extraction. In Proceedings of the Conference on Applied Natu- ral Language Processing.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In Pro- ceedings of the Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "(a) The influence of communities in each year. (b) Popularity of communities in each year."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The first figure shows influence scores of communities in each year."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "(a) The influence of communities in each year.(b) Popularity of communities in a each year."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Comparing machine translation related communities in the same way as in"
},
"TABREF1": {
"content": "<table><tr><td>TECHNIQUE</td><td colspan=\"2\">DOMAIN</td></tr><tr><td colspan=\"4\">model \u2192 (nn) rules \u2192 (nn) extracting \u2192 (direct-object) evaluation \u2192 (nn) improve \u2192 (direct-object) used \u2192 (preposition for) identify \u2192 (direct-object) parsing \u2192 (nn) constraints \u2192 (amod) domain \u2192 (nn) based \u2192 (preposition on) applied \u2192 (preposition to) Table 3: Examples of patterns learned using the iterative ex-traction algorithm. The dependency 'nn' is the noun com-pound modifier dependency.</td></tr><tr><td>Approach</td><td>F1</td><td colspan=\"2\">Precision Recall</td></tr><tr><td/><td>FOCUS</td><td/></tr><tr><td colspan=\"2\">Baseline tf-idf NPs Seed Patterns Inter-Annotator Agreement 53.33 35.60 55.29</td><td>24.36 44.67 50.80</td><td>66.07 72.54 56.14</td></tr><tr><td colspan=\"2\">TECHNIQUE 26.65 20.09 36.86 Inter-Annotator Agreement 72.02 Baseline tf-idf NPs Seed Patterns Iteration 50</td><td>17.87 23.46 30.46 66.81</td><td>52.41 21.72 46.68 78.11</td></tr><tr><td colspan=\"2\">DOMAIN 30.13 25.27 37.29 Inter-Annotator Agreement 72.31 Baseline tf-idf NPs Seed Patterns Iteration 50</td><td>19.90 30.55 27.60 75.58</td><td>62.03 26.29 57.50 69.32</td></tr></table>",
"type_str": "table",
"html": null,
"text": "The total numbers of phrases extracted were 25,525 for FOCUS, 24,430 for TECHNIQUE, and 33,203 for DOMAIN. The total numbers of phrases after including the phrases extracted from subtrees of the matched phrase trees were 64,041, 38,220 and 46,771, respectively. Examples of phrases extracted from some papers are shown inTable 2.",
"num": null
}
}
}
}