{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:44:27.116970Z"
},
"title": "The Normalized Impact Index for Keywords in Scholarly Papers to Detect Subtle Research Topics",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Ikeda",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yuta",
"middle": [],
"last": "Taniguchi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kazunori",
"middle": [],
"last": "Koga",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Mainly due to the open access movement, the number of scholarly papers we can freely access is drastically increasing. A huge amount of papers is a promising resource for text mining and machine learning. Given a set of papers, for example, we can grasp past or current trends in a research community. Compared to the trend detection, it is more difficult to forecast trends in the near future, since the number of occurrences of some features, which are major cues for automatic detection, such as the word frequency, is quite small before such a trend will emerge. As a first step toward trend forecasting, this paper is devoted to finding subtle trends. To do this, the authors propose an index for keywords, called normalized impact index, and visualize keywords and their indices as a heat map. The authors have conducted case studies using some keywords already known as popular, and we found some keywords whose frequencies are not so large but whose indices are large.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Mainly due to the open access movement, the number of scholarly papers we can freely access is drastically increasing. A huge amount of papers is a promising resource for text mining and machine learning. Given a set of papers, for example, we can grasp past or current trends in a research community. Compared to the trend detection, it is more difficult to forecast trends in the near future, since the number of occurrences of some features, which are major cues for automatic detection, such as the word frequency, is quite small before such a trend will emerge. As a first step toward trend forecasting, this paper is devoted to finding subtle trends. To do this, the authors propose an index for keywords, called normalized impact index, and visualize keywords and their indices as a heat map. The authors have conducted case studies using some keywords already known as popular, and we found some keywords whose frequencies are not so large but whose indices are large.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Thanks to the recent open access movement, we can freely access to a huge amount of papers on scholarly repositories, such as institutional repositories maintained by academic institutions. According to IRUS-UK, 1 there exits about 2M items on more than 200 repositories in the UK, as of May 2020. According to NII, 2 there exist more than 2.4M full-text papers on 734 institutional repositories in Japan, as of March 2020. In addition to institutional repositories, we also have disciplinary repositories, such as arXiv. 3 We can also use a global aggregation servie, which collects papers on repositories. For exam-ple, CORE 4 collects papers from more than one thousand data providers in about 150 countries, and provides search APIs, dump files, and search facility for collected papers (Knoth and Zdrahal, 2012) . The latest dump file provided by CORE contains 123M metadata items, 85.6M abstracts, and 9.8M full text papers. Some commercial publishers also began to provide APIs for automatic processing. 5 Basically, items on scholarly repositories are readable PDF files. When research results were published on paper, research papers were final outcomes of the researches. In case of digital media, however, contents of the papers can be an input for automatic processing. We can find many researches which use scholarly papers as input for computer algorithms. For example, some entities, like dataset names, used in papers are automatically extracted (Ikeda and Seguchi, 2017; Ikeda and Taniguchi, 2019) , and papers are used to predict research impacts of a new given paper (Baba et al., 2019) and to predict new materials (Tshitoyan et al., 2019) .",
"cite_spans": [
{
"start": 522,
"end": 523,
"text": "3",
"ref_id": null
},
{
"start": 791,
"end": 816,
"text": "(Knoth and Zdrahal, 2012)",
"ref_id": "BIBREF7"
},
{
"start": 1011,
"end": 1012,
"text": "5",
"ref_id": null
},
{
"start": 1462,
"end": 1487,
"text": "(Ikeda and Seguchi, 2017;",
"ref_id": "BIBREF4"
},
{
"start": 1488,
"end": 1514,
"text": "Ikeda and Taniguchi, 2019)",
"ref_id": "BIBREF5"
},
{
"start": 1586,
"end": 1605,
"text": "(Baba et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1635,
"end": 1659,
"text": "(Tshitoyan et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The final goal of our research is to forecast popular trends in the near future. A typical method for this is to use a clustering algorithm, which is unsupervised learning, and divides target items into groups based on a predefined distance metric. Some approaches use clustering algorithms to divide words in papers into groups, such as the topic model (Griffiths and Steyvers, 2004; Bolelli et al., 2009) . Once we introduce a distance metric to data, a target data item is defined as a point in the space defined by the metric, and thus we can compare similarities between any two points. In this sense, this approach uses an absolute distance. There also exit relative approaches, like network structures, in which we know that two items are adjancent. In par-ticular, we can naturally construct multiple network structures from papers, like networks of authors, citations, words, and their combinations (Duvvuru et al., 2012; Salatino et al., 2017) . However, these researches assume that there are already a number of publications (Salatino et al., 2018) . In this sense, these approaches are for topic detection, not for topic forecast.",
"cite_spans": [
{
"start": 354,
"end": 384,
"text": "(Griffiths and Steyvers, 2004;",
"ref_id": "BIBREF3"
},
{
"start": 385,
"end": 406,
"text": "Bolelli et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 908,
"end": 930,
"text": "(Duvvuru et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 931,
"end": 953,
"text": "Salatino et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 1037,
"end": 1060,
"text": "(Salatino et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we try to find small topics as a first step toward forecasting future topics. To this end, we propose an index for keywords to measure their impact, assuming a keyword denotes a research topic. We use a relative frequency in the definition of the index to find small topics. As far as the authors know, the frequency of keywords is not directly used to detect topics in research papers, unlike topic or trend detection in general text data. The authors think that this is because a frequency based method requires a list of stop words to remove unnecessary keywords, but it is too costly to construct it for each discipline in case of research papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate the proposed index, we use some popular keywords in one discipline, and we check if the proposed indices for them can grasp their popularity. Using this approach, we do not have to consider the issue of stop words. In other words, we try to find some properties among popular topics with the proposed index. For comparison, we also show topic detection by absolute frequency and a standard clustering algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We assume the range of publication years, y 1 , y 2 , . . . , y N , and let Y = {y 1 , y 2 , . . . , y N }. For y \u2208 Y , D(y) denotes the set of papers published in y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": "For a word w and a year y \u2208 Y , the normalized impact index, denoted by h(w, y), is defined as as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": "h(w, y) = f (w, y) |D(y)| y N t=y 1 f (w, t) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": "where f (w, y) is the number of occurrences (frequencies) of w in D(y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": "The proposed index for w and y is a relative frequency, normalized by both the number of publications in y and the total frequency of w among all years. Therefore, we can compare h(w 1 , y 1 ) and h(w 2 , y 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": "To understand the meaning of the index, let us assume that |D(y)| = 1 tentatively. Then we can treat h(w, y) as a probability since we have y h(w, y) = 1. So, when we depict this index as a bar chart for some w whose height is h(w, y i ), the total area of the bars for w is normalized to 1. Therefore, we can compare any two words w 1 and w 2 , in the view point of their trends.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
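{
"text": "As an illustration only (a minimal Python sketch added by the editor, not part of the original paper), h(w, y) can be computed from two hypothetical inputs: freq[(w, y)], the number of occurrences of w in the papers of D(y), and n_docs[y] = |D(y)|.\n\ndef normalized_impact(freq, n_docs, w, years):\n    # total frequency of w over all years: sum_t f(w, t)\n    total = sum(freq.get((w, y), 0) for y in years)\n    if total == 0:\n        return {y: 0.0 for y in years}\n    # h(w, y) = f(w, y) / (|D(y)| * total)\n    return {y: freq.get((w, y), 0) / (n_docs[y] * total) for y in years}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},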
{
"text": "When we consider trends of keywords, it is natural to see temporal changes of the index from some reference year y 1 , that is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h(w, y) \u2212 h(w, y 1 ),",
"eq_num": "(1)"
}
],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": "y > y 1 for y \u2208 Y \u2212 {y 1 }. For some y( = y 1 ), if h(w, y) \u2212 h(w, y 1 ) > 0 (resp. < 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
{
"text": ", then the relative usage of w in y becomes larger (resp. smaller) than that in y 1 . This leads to a heat map of the proposed index for keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},
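{
"text": "Continuing the sketch above (again an editor's illustration, not part of the original paper), the values of (1) can be arranged as a matrix with one column per keyword and one row per year after the reference year y_1, which is the layout used for the heat map.\n\nimport numpy as np\n\ndef heatmap_matrix(freq, n_docs, keywords, years):\n    # rows: years[1:], columns: keywords; entry (i, j) = h(w_j, y_i) - h(w_j, y_1), i.e. equation (1)\n    mat = np.zeros((len(years) - 1, len(keywords)))\n    for j, w in enumerate(keywords):\n        h = normalized_impact(freq, n_docs, w, years)\n        for i, y in enumerate(years[1:]):\n            mat[i, j] = h[y] - h[years[0]]\n    return mat  # positive cells: w is used relatively more in y than in y_1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Impact Index",
"sec_num": "2"
},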
{
"text": "In this section, we apply the proposed index to a real dataset to confirm its efficacy. As described in Section 1, a frequency based method suffers from the issue of stop words. To avoid the issue, we check the values of the proposed index for some keywords the authors selected from some specific field. These keywords are already known as popular topics. Therefore, it means that we only check positive examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "3"
},
{
"text": "Since the proposed index is defined with relative frequencies, we show the result of topic detection with absolute frequencies for comparison (see Section 3.2). Then, we apply a clustering algorithm to our dataset in Section 3.3, to confirm that a clustering algorithm for keywords can find large topics, not small ones as described in Section 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "3"
},
{
"text": "We use a set of abstracts, not the whole papers, from 2000 to 2018, obtained by searching \"plasma chemical vapor deposition\" at Web of Science. The number of abstracts we obtained is 69,384.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "In addition to stop words of English, we also removed tokens starting or ending with special symbols, such as \"[\" and \"+\". Then we converted capital letters to lower-case ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
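{
"text": "A minimal sketch of this kind of cleanup (an editor's illustration, not part of the original paper; the stop-word list shown is a placeholder, since the paper does not name the one it used):\n\nimport re\n\nSTOPWORDS = {'the', 'a', 'an', 'and', 'of', 'in', 'to', 'for', 'on', 'with', 'is', 'are'}\n\ndef clean_tokens(text):\n    tokens = text.lower().split()\n    # drop stop words and tokens that start or end with a special symbol such as '[' or '+'\n    return [t for t in tokens\n            if t not in STOPWORDS and not re.search(r'^[^0-9a-z]|[^0-9a-z]$', t)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},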
{
"text": "As the first case study, we check if a method based on frequency can find a potentially popular topic. Figure 1 contains four graphs, showing the numbers of papers found by queries at Web of Science. One common line is contained in all graphs in Figure 1 , which is the number of papers found by \"plasma chemical vapor deposition\". In other Figure 1 : Each graph shows the change of the number of papers found by the corresponding query with \"plasma chemical vapor deposition\", such as \"nitride plasma chemical vapor deposition\", as the publication year advances (some data originally from Fig. 6 and 7 in (Iwase et al., 2019) ).",
"cite_spans": [
{
"start": 606,
"end": 626,
"text": "(Iwase et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 103,
"end": 111,
"text": "Figure 1",
"ref_id": null
},
{
"start": 246,
"end": 254,
"text": "Figure 1",
"ref_id": null
},
{
"start": 341,
"end": 349,
"text": "Figure 1",
"ref_id": null
},
{
"start": 590,
"end": 596,
"text": "Fig. 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Detection by Frequency",
"sec_num": "3.2"
},
{
"text": "words, this line shows the year-by-year changes of the number of papers containing this query. We call the line for this query the base line of this field.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Frequency",
"sec_num": "3.2"
},
{
"text": "Each of the other lines shows the number of papers found by \"plasma chemical vapor deposition\" plus the corresponding keyword. For example, the red line in the top graph is obtained by \"oxide plasma chemical vapor deposition\". These searches are search within the original query, and thus these lines are below the base line. One of the authors chose these additional keywords, based on the heat map in Figure 2 in addition to his expertise. Basically, they are known to be popular topics.",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 411,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Detection by Frequency",
"sec_num": "3.2"
},
{
"text": "In the four graphs, an upper graph contains keywords whose frequencies are larger. In the top graph of \"nitride\", \"carbon\", \"oxide\", and \"amorphous silicon\", we see that these keywords are large topics in this field and the shapes of graphs are similar to the base line. Compared to the top graph, the second one contains smaller topics, but they have emerged in early 90s, and increased its publications steadily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Frequency",
"sec_num": "3.2"
},
{
"text": "Compared to the two top graphs, keywords for the other two graphs are relatively new topics, and thus the numbers of papers containing these topics are much smaller. In particular, the number of the papers about \"2D material\", meaning 2 dimensional materials, is quite small. In spite of its small frequency, this topic has potential to be big in this field because \"2D material\" is a more conceptual word than \"graphene\", which is a 2D material, and the Nobel Prize was awarded to researchers studied graphene in 2010.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Frequency",
"sec_num": "3.2"
},
{
"text": "Therefore, methods based on the frequency of a keyword can not find such a trend at very early stages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Frequency",
"sec_num": "3.2"
},
{
"text": "Next, we consider a clustering algorithm as a method to find research topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Clustering",
"sec_num": "3.3"
},
{
"text": "For a clustering algorithm, we used Nonnegative Matrix Factorization (NMF), which decomposes a given matrix V into two matrices W H, where all emelements in those matrices are required to be non-negative (Lee and Seung, 1999) .",
"cite_spans": [
{
"start": 204,
"end": 225,
"text": "(Lee and Seung, 1999)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Clustering",
"sec_num": "3.3"
},
{
"text": "Using the set of abstracts, we can construct a term-document matrix V , where w ij is the frequency for the ith term in the jth document, that is the jth document d j has w 1j , w 2j , . . . as its elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Clustering",
"sec_num": "3.3"
},
{
"text": "Let D and V be the number of documents and one of vocabularies, respectively. Then, the size of V is D \u00d7 V . When we apply NMF to V , we have to specify a parameter K, which defines the sizes of two matrices: D \u00d7 K and K \u00d7 V for W and H.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Clustering",
"sec_num": "3.3"
},
{
"text": "We can see W as a weight matrix and H as a base matrix, and an original document is expressed as a weighted linear combination of base elements. In this expression, we can see that a base matrix consits of K base vectors. Table 1 shows the top 10 keywords with largest weights for each base vector, where we set K = 10. There exist K topics, each of which has 10 keywords with the top 10 largest weights in the topic.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Topic Detection by Clustering",
"sec_num": "3.3"
},
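{
"text": "One possible realization of this step (an editor's sketch, not the authors' code), using scikit-learn's CountVectorizer and NMF, where abstracts is a hypothetical list of preprocessed abstract strings:\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.decomposition import NMF\n\nvectorizer = CountVectorizer(stop_words='english')\nX = vectorizer.fit_transform(abstracts)                    # D x V term-document matrix\nnmf = NMF(n_components=10, init='nndsvd', random_state=0)  # K = 10\nW = nmf.fit_transform(X)                                    # D x K weight matrix\nH = nmf.components_                                        # K x V base matrix\nterms = vectorizer.get_feature_names_out()\nfor k, row in enumerate(H, start=1):\n    top10 = [terms[i] for i in row.argsort()[::-1][:10]]    # top 10 keywords of topic k\n    print(k, ', '.join(top10))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Clustering",
"sec_num": "3.3"
},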
{
"text": "From this table, we can find many major topics in this field. For example, the first cluster contains \"chemical vapor deposition\", and the second and 10th ones \"carbon nanotubes\" and \"thin film\", respectively, both of which are major materials used in this field. However, we can not find minor topics from this decomposition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by Clustering",
"sec_num": "3.3"
},
{
"text": "In this section, we detect topics using the normalized impact index and its visualization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by the Proposed Index and Heat Map",
"sec_num": "3.4"
},
{
"text": "No. The top 10 keywords with largest weights in a topic 1 deposition, chemical, vapor, rate, process, high, gas, using, PECVD, pressure 2 carbon, growth, nanotubes, CNTs, field, emission, electron, catalyst, grown, chemical 3 silicon, layer, solar, amorphous, cells, layers, chemical, cell, nitride, high 4 films, deposited, thin, properties, spectroscopy, optical, amorphous, content, using, surfaces, roughness, layer, chemical, contact, treatment, energy, morphology, atomic 6 plasma, power, gas, density, treatment, enhanced, using, pressure, hydrogen, discharge 7 C, degrees, temperature, annealing, growth, substrate, temperatures, si, low, rights 8 diamond, growth, microwave, substrate, CVD, high, nucleation, quality, substrates, grown 9 coatings, coating, properties, DLC, chemical, using, deposited, wear, elsevier, reserved 10 film, thin, thickness, substrate, deposited, stress, structure, dielectric, nm, ratio Figure 2 shows a heat map, defined by (1), for keywords in our dataset. One column corresponds Figure 2 : The heat map shows values of (1) for each keyword extracted from our dataset, where one column corresponds to one keyword, and a cell in the column indicates the value of (1).",
"cite_spans": [
{
"start": 56,
"end": 69,
"text": "1 deposition,",
"ref_id": null
},
{
"start": 70,
"end": 79,
"text": "chemical,",
"ref_id": null
},
{
"start": 80,
"end": 86,
"text": "vapor,",
"ref_id": null
},
{
"start": 87,
"end": 92,
"text": "rate,",
"ref_id": null
},
{
"start": 93,
"end": 101,
"text": "process,",
"ref_id": null
},
{
"start": 102,
"end": 107,
"text": "high,",
"ref_id": null
},
{
"start": 108,
"end": 112,
"text": "gas,",
"ref_id": null
},
{
"start": 113,
"end": 119,
"text": "using,",
"ref_id": null
},
{
"start": 120,
"end": 126,
"text": "PECVD,",
"ref_id": null
},
{
"start": 127,
"end": 145,
"text": "pressure 2 carbon,",
"ref_id": null
},
{
"start": 146,
"end": 153,
"text": "growth,",
"ref_id": null
},
{
"start": 154,
"end": 164,
"text": "nanotubes,",
"ref_id": null
},
{
"start": 165,
"end": 170,
"text": "CNTs,",
"ref_id": null
},
{
"start": 171,
"end": 177,
"text": "field,",
"ref_id": null
},
{
"start": 178,
"end": 187,
"text": "emission,",
"ref_id": null
},
{
"start": 188,
"end": 197,
"text": "electron,",
"ref_id": null
},
{
"start": 198,
"end": 207,
"text": "catalyst,",
"ref_id": null
},
{
"start": 208,
"end": 214,
"text": "grown,",
"ref_id": null
},
{
"start": 215,
"end": 234,
"text": "chemical 3 silicon,",
"ref_id": null
},
{
"start": 235,
"end": 241,
"text": "layer,",
"ref_id": null
},
{
"start": 242,
"end": 248,
"text": "solar,",
"ref_id": null
},
{
"start": 249,
"end": 259,
"text": "amorphous,",
"ref_id": null
},
{
"start": 260,
"end": 266,
"text": "cells,",
"ref_id": null
},
{
"start": 267,
"end": 274,
"text": "layers,",
"ref_id": null
},
{
"start": 275,
"end": 284,
"text": "chemical,",
"ref_id": null
},
{
"start": 285,
"end": 290,
"text": "cell,",
"ref_id": null
},
{
"start": 291,
"end": 299,
"text": "nitride,",
"ref_id": null
},
{
"start": 300,
"end": 313,
"text": "high 4 films,",
"ref_id": null
},
{
"start": 314,
"end": 324,
"text": "deposited,",
"ref_id": null
},
{
"start": 325,
"end": 330,
"text": "thin,",
"ref_id": null
},
{
"start": 331,
"end": 342,
"text": "properties,",
"ref_id": null
},
{
"start": 343,
"end": 356,
"text": "spectroscopy,",
"ref_id": null
},
{
"start": 357,
"end": 365,
"text": "optical,",
"ref_id": null
},
{
"start": 366,
"end": 376,
"text": "amorphous,",
"ref_id": null
},
{
"start": 377,
"end": 385,
"text": "content,",
"ref_id": null
},
{
"start": 386,
"end": 392,
"text": "using,",
"ref_id": null
},
{
"start": 393,
"end": 402,
"text": "surfaces,",
"ref_id": null
},
{
"start": 403,
"end": 413,
"text": "roughness,",
"ref_id": null
},
{
"start": 414,
"end": 420,
"text": "layer,",
"ref_id": null
},
{
"start": 421,
"end": 430,
"text": "chemical,",
"ref_id": null
},
{
"start": 431,
"end": 439,
"text": "contact,",
"ref_id": null
},
{
"start": 440,
"end": 450,
"text": "treatment,",
"ref_id": null
},
{
"start": 451,
"end": 458,
"text": "energy,",
"ref_id": null
},
{
"start": 459,
"end": 470,
"text": "morphology,",
"ref_id": null
},
{
"start": 471,
"end": 487,
"text": "atomic 6 plasma,",
"ref_id": null
},
{
"start": 488,
"end": 494,
"text": "power,",
"ref_id": null
},
{
"start": 495,
"end": 499,
"text": "gas,",
"ref_id": null
},
{
"start": 500,
"end": 508,
"text": "density,",
"ref_id": null
},
{
"start": 509,
"end": 519,
"text": "treatment,",
"ref_id": null
},
{
"start": 520,
"end": 529,
"text": "enhanced,",
"ref_id": null
},
{
"start": 530,
"end": 536,
"text": "using,",
"ref_id": null
},
{
"start": 537,
"end": 546,
"text": "pressure,",
"ref_id": null
},
{
"start": 547,
"end": 556,
"text": "hydrogen,",
"ref_id": null
},
{
"start": 557,
"end": 571,
"text": "discharge 7 C,",
"ref_id": null
},
{
"start": 572,
"end": 580,
"text": "degrees,",
"ref_id": null
},
{
"start": 581,
"end": 593,
"text": "temperature,",
"ref_id": null
},
{
"start": 594,
"end": 604,
"text": "annealing,",
"ref_id": null
},
{
"start": 605,
"end": 612,
"text": "growth,",
"ref_id": null
},
{
"start": 613,
"end": 623,
"text": "substrate,",
"ref_id": null
},
{
"start": 624,
"end": 637,
"text": "temperatures,",
"ref_id": null
},
{
"start": 638,
"end": 641,
"text": "si,",
"ref_id": null
},
{
"start": 642,
"end": 646,
"text": "low,",
"ref_id": null
},
{
"start": 647,
"end": 664,
"text": "rights 8 diamond,",
"ref_id": null
},
{
"start": 665,
"end": 672,
"text": "growth,",
"ref_id": null
},
{
"start": 673,
"end": 683,
"text": "microwave,",
"ref_id": null
},
{
"start": 684,
"end": 694,
"text": "substrate,",
"ref_id": null
},
{
"start": 695,
"end": 699,
"text": "CVD,",
"ref_id": null
},
{
"start": 700,
"end": 705,
"text": "high,",
"ref_id": null
},
{
"start": 706,
"end": 717,
"text": "nucleation,",
"ref_id": null
},
{
"start": 718,
"end": 726,
"text": "quality,",
"ref_id": null
},
{
"start": 727,
"end": 738,
"text": "substrates,",
"ref_id": null
},
{
"start": 739,
"end": 756,
"text": "grown 9 coatings,",
"ref_id": null
},
{
"start": 757,
"end": 765,
"text": "coating,",
"ref_id": null
},
{
"start": 766,
"end": 777,
"text": "properties,",
"ref_id": null
},
{
"start": 778,
"end": 782,
"text": "DLC,",
"ref_id": null
},
{
"start": 783,
"end": 792,
"text": "chemical,",
"ref_id": null
},
{
"start": 793,
"end": 799,
"text": "using,",
"ref_id": null
},
{
"start": 800,
"end": 810,
"text": "deposited,",
"ref_id": null
},
{
"start": 811,
"end": 816,
"text": "wear,",
"ref_id": null
},
{
"start": 817,
"end": 826,
"text": "elsevier,",
"ref_id": null
},
{
"start": 827,
"end": 844,
"text": "reserved 10 film,",
"ref_id": null
},
{
"start": 845,
"end": 850,
"text": "thin,",
"ref_id": null
},
{
"start": 851,
"end": 861,
"text": "thickness,",
"ref_id": null
},
{
"start": 862,
"end": 872,
"text": "substrate,",
"ref_id": null
},
{
"start": 873,
"end": 883,
"text": "deposited,",
"ref_id": null
},
{
"start": 884,
"end": 891,
"text": "stress,",
"ref_id": null
},
{
"start": 892,
"end": 902,
"text": "structure,",
"ref_id": null
},
{
"start": 903,
"end": 914,
"text": "dielectric,",
"ref_id": null
},
{
"start": 915,
"end": 918,
"text": "nm,",
"ref_id": null
},
{
"start": 919,
"end": 924,
"text": "ratio",
"ref_id": null
}
],
"ref_spans": [
{
"start": 925,
"end": 933,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1020,
"end": 1028,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Detection by the Proposed Index and Heat Map",
"sec_num": "3.4"
},
{
"text": "to one keyword, and each row to one year. We only show the left and right parts of the heat map because the original figure is too wide since there are many keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by the Proposed Index and Heat Map",
"sec_num": "3.4"
},
{
"text": "Each cell shows the difference between the normalized impact index of that year and the reference year, 2000, for some word. That is, it shows the value of (1), where blue (resp. red) cells are positive (resp. negative) values, meaning the relative frequency of the corresponding year for the word is larger (resp. smaller) than that of the reference year. Figure 3 shows temporal changes of the proposed indices for some selected keywords, some of which appear in Figure 1 and the other ones are chosen from the heat map.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 3",
"ref_id": "FIGREF0"
},
{
"start": 465,
"end": 473,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Detection by the Proposed Index and Heat Map",
"sec_num": "3.4"
},
{
"text": "\"graphene\", \"2D\", \"nanotube\", \"low-k\" (low dielectric constant), \"h-BN\" (hexagonal boron nitride), and \"GaAs\" are names of materials, and \"interconnect\" and \"fuel\" are the keywords of the plasma chemical vapor deposition (CVD for short) applications, where \"interconnect\" refers as inter- connect in semiconductor devices and \"fuel\" as fuel cells. For interconnect, the proposed index was negative and decreased from 2000. Plasma CVD as interconnect process technology has been losing interest. The proposed index for fuel increases continuously and there was temporary booming in 2000 and 2015.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by the Proposed Index and Heat Map",
"sec_num": "3.4"
},
{
"text": "Both \"nonotube\" and \"low-k\" appeared in the third graph of Figure 1 . From this graph, we can see sharp rises of their frequencies. However, from the proposed index for these keywords, we can not say these topics are actively examined in papers.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Detection by the Proposed Index and Heat Map",
"sec_num": "3.4"
},
{
"text": "As shown in Figure 1 , \"2D\" has its small frequency although it has potential to be a big trend because unique characteristics of 2D materials have been found then the research of 2D materials seems to become active as the trigger of the graphene Nobel Prize. On the other hand, the proposed index for \"2D\" rises sharply in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": null
},
{
"start": 324,
"end": 332,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Topic Detection by the Proposed Index and Heat Map",
"sec_num": "3.4"
},
{
"text": "The index for \"h-BN\" has negative values until 2012, which seems to have lost the interest of researchers, but after that it increases rapidly. In fact, \"h-BN\" has been studied as a 2D semiconductor material recently. In this sense, \"h-BN\" can be seen as a 2D material family, and so it is convincing the sharp rise for \"h-BN\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Detection by the Proposed Index and Heat Map",
"sec_num": "3.4"
},
{
"text": "In this paper, we have introduced an index to find keywords, which express small topics, using relative frequencies. As visualization, the difference of the proposed index from the reference year, 2000 in this paper, is depicted as a heat map. Therefore, we can easily find subtle topics even if their absolute frequencies are not so large. We have conducted case studies using the proposed index, and confirmed that some keywords, which are already known as popular, show sharp rises of the proposed index.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "As described in Section 3, we have only checked popular keywords. So it is an important future work to check all keywords whose values of the proposed index.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Even if we find some keywords with high values of the proposed index, you might want to check their absolute frequencies. Therefore, it is also important to develop a visualization tool which enables to check both the absolute frequency and the proposed index. Similarly, it is an important future work for the tool to introduce a grouping facility, which groups a different keywords in a hierarchical way, and then we can grasp transitions of topics with flexible granularity with the tool. To do so, we can use some vocabulary system, like one in (Salatino et al., 2019) , or word embeddings to measure the distances between two keywords.",
"cite_spans": [
{
"start": 549,
"end": 572,
"text": "(Salatino et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://irus.jisc.ac.uk/ 2 https://www.nii.ac.jp/irp/en/archive/ statistic/ 3 https://arxiv.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://core.ac.uk/ 5 https://www.elsevier.com/about/ policies/text-and-data-mining",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "In this paper, the authors used data from Web of Science, a product of Clarivate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Citation Count Prediction using Abstracts",
"authors": [
{
"first": "Takahiro",
"middle": [],
"last": "Baba",
"suffix": ""
},
{
"first": "Kensuke",
"middle": [],
"last": "Baba",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Ikeda",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Web Engineering",
"volume": "18",
"issue": "1-3",
"pages": "207--228",
"other_ids": {
"DOI": [
"10.13052/jwe1540-9589.18136"
]
},
"num": null,
"urls": [],
"raw_text": "Takahiro Baba, Kensuke Baba, and Daisuke Ikeda. 2019. Citation Count Prediction using Abstracts. Journal of Web Engineering, 18(1-3):207-228.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Topic and Trend Detection in Text Collections Using Latent Dirichlet Allocation",
"authors": [
{
"first": "Levent",
"middle": [],
"last": "Bolelli",
"suffix": ""
},
{
"first": "C. Lee",
"middle": [],
"last": "Eyda Ertekin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Information Retrieval (ECIR 2009)",
"volume": "5478",
"issue": "",
"pages": "776--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levent Bolelli, \u015e eyda Ertekin, and C. Lee Giles. 2009. Topic and Trend Detection in Text Collections Us- ing Latent Dirichlet Allocation. In Advances in In- formation Retrieval (ECIR 2009), Lecture Notes in Artificial Intelligence 5478, pages 776-780.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Undercovering research trends: Network analysis of keywords in scholarly articles",
"authors": [
{
"first": "Arjun",
"middle": [],
"last": "Duvvuru",
"suffix": ""
},
{
"first": "Sagar",
"middle": [],
"last": "Kamarthi",
"suffix": ""
},
{
"first": "Sivarit",
"middle": [],
"last": "Sultornsanee",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Ninth International Conference on Computer Science and Software Engineering",
"volume": "",
"issue": "",
"pages": "265--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arjun Duvvuru, Sagar Kamarthi, and Sivarit Sultornsa- nee. 2012. Undercovering research trends: Network analysis of keywords in scholarly articles. In Pro- ceedings of Ninth International Conference on Com- puter Science and Software Engineering, pages 265- 270.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Finding Scientific Topics",
"authors": [
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "101",
"issue": "",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Find- ing Scientific Topics. Proceedings of the National Academy of Sciences, 101:5228-5235.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatically Extracting Keywords from Documents for Rich Indexes of Searchable Data Repositories",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Ikeda",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Seguchi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 12th International Conference of Open Repositories",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Ikeda and Daisuke Seguchi. 2017. Auto- matically Extracting Keywords from Documents for Rich Indexes of Searchable Data Repositories. In Proceedings of the 12th International Conference of Open Repositories.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Toward Automatic Identification of Dataset Names in Scholarly Articles",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Ikeda",
"suffix": ""
},
{
"first": "Yuta",
"middle": [],
"last": "Taniguchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Developments in Open Science and Research Data Management: 8th International Conference on Data Science and Institutional Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Ikeda and Yuta Taniguchi. 2019. Toward Au- tomatic Identification of Dataset Names in Scholarly Articles. In Developments in Open Science and Re- search Data Management: 8th International Confer- ence on Data Science and Institutional Research.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Tatsuo Ishijima, and Kenji Ishikawa. 2019. Progress and perspectives in dry processes for emerging multidisciplinary applications: how can we improve our use of dry processes?",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Iwase",
"suffix": ""
},
{
"first": "Yoshito",
"middle": [],
"last": "Kamaji",
"suffix": ""
},
{
"first": "Song",
"middle": [
"Yun"
],
"last": "Kang",
"suffix": ""
},
{
"first": "Kazunori",
"middle": [],
"last": "Koga",
"suffix": ""
},
{
"first": "Nobuyuki",
"middle": [],
"last": "Kuboi",
"suffix": ""
},
{
"first": "Moritaka",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Nobuyuki",
"middle": [],
"last": "Negishi",
"suffix": ""
},
{
"first": "Tomohiro",
"middle": [],
"last": "Nozaki",
"suffix": ""
},
{
"first": "Shota",
"middle": [],
"last": "Nunomura",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Ogawa",
"suffix": ""
},
{
"first": "Mitsuhiro",
"middle": [],
"last": "Omura",
"suffix": ""
},
{
"first": "Tetsuji",
"middle": [],
"last": "Shimizu",
"suffix": ""
},
{
"first": "Kazunori",
"middle": [],
"last": "Shinoda",
"suffix": ""
},
{
"first": "Yasushi",
"middle": [],
"last": "Sonoda",
"suffix": ""
},
{
"first": "Haruka",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Kazuo",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Takayoshi",
"middle": [],
"last": "Tsutsumi",
"suffix": ""
},
{
"first": "Kenichi",
"middle": [],
"last": "Yoshikawa",
"suffix": ""
},
{
"first": "Tatsuo",
"middle": [],
"last": "Ishijima",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": null,
"venue": "Japanese Journal of Applied Physics",
"volume": "",
"issue": "SE",
"pages": "",
"other_ids": {
"DOI": [
"10.7567/1347-4065/ab163a"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Iwase, Yoshito Kamaji, Song Yun Kang, Kazunori Koga, Nobuyuki Kuboi, Moritaka Naka- mura, Nobuyuki Negishi, Tomohiro Nozaki, Shota Nunomura, Daisuke Ogawa, Mitsuhiro Omura, Tet- suji Shimizu, Kazunori Shinoda, Yasushi Sonoda, Haruka Suzuki, Kazuo Takahashi, Takayoshi Tsut- sumi, Kenichi Yoshikawa, Tatsuo Ishijima, and Kenji Ishikawa. 2019. Progress and perspectives in dry processes for emerging multidisciplinary ap- plications: how can we improve our use of dry processes? Japanese Journal of Applied Physics, 58(SE).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CORE: Three Access Levels to Underpin Open Access. D-Lib Magazine",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Knoth",
"suffix": ""
},
{
"first": "Zdenek",
"middle": [],
"last": "Zdrahal",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1045/november2012-knoth"
]
},
"num": null,
"urls": [],
"raw_text": "Petr Knoth and Zdenek Zdrahal. 2012. CORE: Three Access Levels to Underpin Open Access. D-Lib Magazine, 18(11/12).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning the parts of objects by non-negative matrix factorization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "H",
"middle": [
"Sebastian"
],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Seung",
"suffix": ""
}
],
"year": 1999,
"venue": "Nature",
"volume": "401",
"issue": "",
"pages": "788--791",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel D. Lee and H. Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factoriza- tion. Nature, 401:788-791.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The CSO Classifier: Ontology-Driven Detection of Research Topics in Scholarly Articles",
"authors": [
{
"first": "Angelo",
"middle": [],
"last": "Salatino",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Thiviyan",
"middle": [],
"last": "Thanapalasingam",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Motta",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd International Conference on Theory and Practice of Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angelo Salatino, Francesco Osborne, Thiviyan Thana- palasingam, and Enrico Motta. 2019. The CSO Clas- sifier: Ontology-Driven Detection of Research Top- ics in Scholarly Articles. In Proceedings of the 23rd International Conference on Theory and Practice of Digital Libraries, Lecture Notes in Computer Sci- ence 11799.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "How are topics born? understanding the research dynamics preceding the emergence of new areas",
"authors": [
{
"first": "Angelo",
"middle": [
"A"
],
"last": "Salatino",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Motta",
"suffix": ""
}
],
"year": 2017,
"venue": "PeerJ Computer Science",
"volume": "",
"issue": "e119",
"pages": "",
"other_ids": {
"DOI": [
"10.7717/peerj-cs.119"
]
},
"num": null,
"urls": [],
"raw_text": "Angelo A. Salatino, Francesco Osborne, and Enrico Motta. 2017. How are topics born? understanding the research dynamics preceding the emergence of new areas. PeerJ Computer Science, 3(e119).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "AUGUR: Forecasting the Emergence of New Research Topics",
"authors": [
{
"first": "Angelo",
"middle": [
"A"
],
"last": "Salatino",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Motta",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries",
"volume": "",
"issue": "",
"pages": "303--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angelo A. Salatino, Francesco Osborne, and Enrico Motta. 2018. AUGUR: Forecasting the Emergence of New Research Topics. In Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Li- braries, pages 303-312.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unsupervised Word Embeddings Capture Latent Knowledge from Materials Science Literature",
"authors": [
{
"first": "Vahe",
"middle": [],
"last": "Tshitoyan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Dagdelen",
"suffix": ""
},
{
"first": "Leigh",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Dunn",
"suffix": ""
},
{
"first": "Ziqin",
"middle": [],
"last": "Rong",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kononova",
"suffix": ""
},
{
"first": "Kristin",
"middle": [
"A"
],
"last": "Persson",
"suffix": ""
},
{
"first": "Gerbrand",
"middle": [],
"last": "Ceder",
"suffix": ""
},
{
"first": "Anubhav",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2019,
"venue": "Nature",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vahe Tshitoyan, John Dagdelen, Leigh Weston, Alexander Dunn, Ziqin Rong, Olga Kononova, Kristin A. Persson, Gerbrand Ceder, and Anubhav Jain. 2019. Unsupervised Word Embeddings Cap- ture Latent Knowledge from Materials Science Lit- erature. Nature.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "The graph shows the temporal change of the proposed index for some keywords, such as \"low-k\".",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"text": "The top 10 keywords with largest weights in a topic found by NMF.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}