{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:44:26.080579Z" }, "title": "Scubed at 3C task A -A simple baseline for citation context purpose classification", "authors": [ { "first": "Shubhanshu", "middle": [], "last": "Mishra", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Kanpur Kanpur", "location": { "country": "India" } }, "email": "mishra@shubhanshu.com" }, { "first": "Sudhanshu", "middle": [], "last": "Mishra", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Kanpur Kanpur", "location": { "country": "India" } }, "email": "sdhanshu@iitk.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present our team Scubed's approach in the 3C Citation Context Classification Task, Subtask A, citation context purpose classification. Our approach relies on text based features transformed via tf-idf features followed by training a variety of models which are capable of capturing non-linear features. Our best model on the leaderboard is a multi-layer perceptron which also performs best during our rerun. Our submission code for replicating experiments is at: https://github.com/nap sternxg/Citation Context Classific ation.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present our team Scubed's approach in the 3C Citation Context Classification Task, Subtask A, citation context purpose classification. Our approach relies on text based features transformed via tf-idf features followed by training a variety of models which are capable of capturing non-linear features. Our best model on the leaderboard is a multi-layer perceptron which also performs best during our rerun. Our submission code for replicating experiments is at: https://github.com/nap sternxg/Citation Context Classific ation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The number of research papers has increased exponentially in recent years. In order to efficiently access this scientific resource, we need automated solutions for extracting information from these records. Citations in research papers are important for multiple reasons e.g. comparing novelty (Mishra and Torvik, 2016) , expertise (Mishra et al., 2018a) , and self-citation patterns (Mishra et al., 2018b) . For people new to the field, they are an important resource to increase their knowledge whereas for experts in the field they act as useful pointers to summarize the paper. Citations are also used to measure various indexes which showcase the influence and reach of the researchers in their field. However, these indexes give equal weight to each citation. It has been established that all citations are not equal (N. Kunnath et al., 2020; Mishra et al., 2018b) . In many cases, cited papers are used as examples. Often, they are not influential to the paper itself.", "cite_spans": [ { "start": 294, "end": 319, "text": "(Mishra and Torvik, 2016)", "ref_id": "BIBREF7" }, { "start": 332, "end": 354, "text": "(Mishra et al., 2018a)", "ref_id": "BIBREF5" }, { "start": 384, "end": 406, "text": "(Mishra et al., 2018b)", "ref_id": "BIBREF6" }, { "start": 823, "end": 848, "text": "(N. 
Kunnath et al., 2020;", "ref_id": "BIBREF9" }, { "start": 849, "end": 870, "text": "Mishra et al., 2018b)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe our team Scubed's entry for the citation context purpose classification shared task (N. Kunnath et al., 2020). This work aims to develop models that can identify the purpose of citations in research papers, which can then be used to produce better indexes and make research more easily accessible to everyone.", "cite_spans": [ { "start": 111, "end": 136, "text": "(N. Kunnath et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been a significant amount of work in this area aimed at better understanding the significance of citations in a paper (N. Kunnath et al., 2020). As the number of research papers increases with time, the algorithms for suggesting research papers become more and more important. These algorithms are a deciding factor for many measures of a researcher's influence in a field. The number of citations of a paper is important for computing measures such as the h-index (Hirsch, 2005) and the g-index (Egghe, 2006). These are influential measures for describing the significance of a researcher in a field. Scholars have argued that not all of the citations in a paper should have the same weight when determining the impact and reach of a paper. The study in (Moravcsik and Murugesan, 1975) showed that many references in research papers are redundant and quite often share little context with the citing paper. There have been many techniques for classifying citations as influential. However, one of the strongest baselines for this task is the prior citation count of the cited paper. The work of (Chubin and Moitra, 1975) shows the effectiveness of citation count in determining influence. The work of (Zhu et al., 2015) points out suitable features for this task. They evaluated the performance of five classes of features: count, position, similarity, context, and miscellaneous. They determined that counting the number of times a citation is referenced in a paper is the best estimator of the influence of a citation. (Hou et al., 2011) also showed that the count of a citation within the text of a research paper is a simple and effective technique to assess its scientific contribution and influence. (Nazir et al., 2020) applied SVM, Random Forests, and Kernel Linear Regression classifiers to identify important and non-important citations. They used citation counts and tf-idf based similarity scores to train their models. Their results show that these techniques achieve a precision of 0.84 on this task.", "cite_spans": [ { "start": 124, "end": 149, "text": "(N. 
Kunnath et al., 2020)", "ref_id": "BIBREF9" }, { "start": 467, "end": 481, "text": "(Hirsch, 2005)", "ref_id": "BIBREF2" }, { "start": 494, "end": 507, "text": "(Egghe, 2006)", "ref_id": "BIBREF1" }, { "start": 755, "end": 786, "text": "(Moravcsik and Murugesan, 1975)", "ref_id": "BIBREF8" }, { "start": 1093, "end": 1118, "text": "(Chubin and Moitra, 1975)", "ref_id": "BIBREF0" }, { "start": 1198, "end": 1216, "text": "(Zhu et al., 2015)", "ref_id": "BIBREF14" }, { "start": 1524, "end": 1542, "text": "(Hou et al., 2011)", "ref_id": "BIBREF3" }, { "start": 1693, "end": 1713, "text": "(Nazir et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.1" }, { "text": "This paper focuses on the WOSP 3C shared Subtask A. In this sub-task, we were required to classify the citation context in research papers on the basis of its purpose in the paper. For this shared task we used the ACL-ARC dataset (Jurgens et al., 2018). The dataset consisted of 3000 labeled data-points annotated using the ACT platform (Pride et al., 2019). The data provided contains the following fields:", "cite_spans": [ { "start": 246, "end": 268, "text": "(Jurgens et al., 2018)", "ref_id": "BIBREF4" }, { "start": 354, "end": 374, "text": "(Pride et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Task and Data Description", "sec_num": "2" }, { "text": "\u2022 Unique Identifier \u2022 COREID of Citing Paper \u2022 Citing Paper Title \u2022 Citing Paper Author \u2022 Cited Paper Title \u2022 Cited Paper Author \u2022 Citation Context \u2022 Citation Class Label \u2022 Citation Influence Label", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Data Description", "sec_num": "2" }, { "text": "To identify the citation being considered, a #AUTHORTAG marker is placed in the citation context. For this task the Citation Influence Label field was ignored. This was a multi-class classification task, where the following target labels were used:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Data Description", "sec_num": "2" }, { "text": "\u2022 BACKGROUND \u2022 COMPARES CONTRASTS \u2022 EXTENSION \u2022 FUTURE \u2022 MOTIVATION \u2022 USES", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Data Description", "sec_num": "2" }, { "text": "To evaluate the models, the macro-F1 score was used on the test data. The final ranking was based not on the public score but on a different subset of the data that was not visible to the participating teams. The teams were advised to make submissions that would perform the best overall and not just on the public subset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Data Description", "sec_num": "2" }, { "text": "We utilize a simple approach based on text classification baseline methods. For the original submission we utilized a limited set of models. However, we trained additional models to conduct a more exhaustive evaluation for this paper. Below, we describe our workflow for pre-processing, feature extraction, and model training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "The data provided was in raw text format, which is not suitable for making predictions directly. In order to make useful predictions, it first has to be converted into a numerical vector form that our models can process. 
The raw data consisted of columns with different attributes, for which different feature extraction techniques had to be applied. For example, the citing and cited title columns contained the titles of the research papers, whereas the citation context column contained the text surrounding the citation. In order to process each column separately and efficiently, we used the ColumnTransformer module from the scikit-learn library (Pedregosa et al., 2011). Each of these columns contained text data. To extract useful features from this text data we used the TfidfVectorizer from the scikit-learn (Pedregosa et al., 2011) library on each column. This generates the term frequency-inverse document frequency (tf-idf) score for each of the texts in each column. The tf-idf score is a normalized count of the words occurring in the corpus. This type of feature, however, does not account for the position and inter-dependence of words. The tf-idf score is calculated as follows:", "cite_spans": [ { "start": 637, "end": 661, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF11" }, { "start": 801, "end": 824, "text": "(Pedregosa et al., 2011", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-Processing and Feature Extraction", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\text{tf-idf}(t, d) = \\text{tf}(t, d) \\cdot \\text{idf}(t) \\quad (1) \\qquad \\text{idf}(t) = \\log \\frac{1 + n}{1 + \\text{df}(t)} + 1", "eq_num": "(2)" } ], "section": "Pre-Processing and Feature Extraction", "sec_num": "3.1" }, { "text": "In the above equations, tf stands for term frequency, which refers to the number of times a term t occurs in a document d. The n in (2) refers to the total number of documents present in the document set. df(t) refers to the document frequency, which counts the number of documents in the document set that contain the term t. The tf-idf score is a better feature than the raw count of words in a sentence: it down-weights uninformative words such as pronouns relative to rarer but more informative words present in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-Processing and Feature Extraction", "sec_num": "3.1" }, { "text": "In the end we used two versions of text features for our models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-Processing and Feature Extraction", "sec_num": "3.1" }, { "text": "1. Citation context only (v1): uses only features extracted from the citation context column. Our hypothesis here is that the citation context should have the highest signal for identifying how the citation is used. 2. All features (v2): uses features extracted from the citation context as well as the citing and cited title columns. Our hypothesis here is that combining features from both the citing and cited paper should improve the signal for identifying how the citation is used. However, we are also aware that this may increase the proportion of noisy features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-Processing and Feature Extraction", "sec_num": "3.1" }, { "text": "For this shared task we were allowed to submit a maximum of 5 models for evaluation on the test data (https://www.kaggle.com/c/3c-shared-taskinfluence/rules). Our goal was to investigate the use of simple, proven linear and non-linear models, which are faster and easier to train and deploy compared to the recent, more powerful but resource-hungry deep learning models. 
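As a rough illustration of the workflow described above, here is a minimal sketch (not our exact submission code) of a per-column tf-idf pipeline feeding one of the classifiers listed next, written with scikit-learn; the column and file names are illustrative assumptions.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.metrics import f1_score

# v2-style features: one TfidfVectorizer per text column, combined via ColumnTransformer.
text_columns = ["citation_context", "citing_title", "cited_title"]  # hypothetical column names
features = ColumnTransformer(
    [(col, TfidfVectorizer(), col) for col in text_columns]
)

# One-hidden-layer MLP as described below; LogisticRegression, RandomForestClassifier,
# or GradientBoostingClassifier can be swapped in for the other baselines.
model = Pipeline([
    ("tfidf", features),
    ("clf", MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                          solver="adam", learning_rate_init=0.001)),
])

train = pd.read_csv("train.csv")  # hypothetical file names
test = pd.read_csv("test.csv")
model.fit(train[text_columns], train["citation_class_label"])
predictions = model.predict(test[text_columns])
print(f1_score(test["citation_class_label"], predictions, average="macro"))  # task metric: macro-F1

For the v1 setup, text_columns would contain only the citation context column; the feature pipeline stays the same when the classifier is swapped.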
The following models were submitted for evaluation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction Models", "sec_num": "3.2" }, { "text": "\u2022 Logistic Regression Classifier (LR): A simple logistic regression model trained on the tf-idf features of the 3 columns. All the models were trained using the scikit-learn library. Table 1 shows the public and private leader board scores for each of our submissions for this task. Our MLP (v2) model performed best among our submissions on the leader-board and was within 0.02 F1 score of the top performing model. After the final leader board ranking, we decided to replicate the model performance on the actual test set provided to us by the shared task organizers. Our evaluation scores may not exactly match the submitted solutions, as the model changes on each run and we did not record the random seed for the original submission. This analysis was conducted to generate comparable results for all models across the training and test sets (see table 2), and to further inspect the performance of the models on each label (see table 3). First, table 2 shows the evaluation scores of all the models on the test set. One consistent pattern emerges: v1 models, which use only the citation context text as their feature, consistently perform much better than v2 models. Next, the best v1 as well as v2 models are MLP and MLP-3. It appears that the inclusion of extra features leads to over-fitting, which is also evident from the training evaluation scores. All models perform best on the Background label, which is also the most frequent label. Overall, it appears that these baseline models are quite good at learning this task compared to other submissions, while being fast and easy to implement.", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 186, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 972, "end": 980, "text": "table 3)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Prediction Models", "sec_num": "3.2" }, { "text": "Our results show that traditional tf-idf features give good performance for this shared task, resulting in a strong baseline to compare against. Simple machine learning models like logistic regression, random forests, and gradient boosted trees perform well for this task but are superseded by multi-layer perceptron models. Furthermore, the citation context contains the maximum signal for predicting citation usage. We were able to achieve one of the top performances in the task within the limited number of submissions allowed in the task. Due to the small dataset, multiple submissions increase the likelihood of the models over-fitting to the test set. Furthermore, our methods show that deep learning methods (e.g., MLP and MLP-3) do give a significant advantage over simpler machine learning methods. The minor loss in performance is acceptable compared to the increased speed and low computation cost of the simpler machine learning models. Further analysis reveals that MLP-based models are indeed over-fitting to the training data, as shown by their near perfect F1-scores on the training data (see table 2). Additionally, GBT models consistently achieve much better performance on the test set compared to other models, including the RF model, which was our best entry on the leader board. Furthermore, the highest performing label is the Influential label. 
All models (except LR) perform the worst on the Incidental label when using all text features, but when only using the citation context, performance is similar across labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Our team 'Scubed' submitted 3 models for the citation context purpose classification task. Out of the submitted models, the multi-layer perceptron classifier performed the best on the test set, achieving third position in this task. This model gave a private score of 0.18146 on the test set. We were able to achieve competitive results with a minimal number of trials, using fast and computationally cheap machine learning models. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Content analysis of references: Adjunct or alternative to citation counting?", "authors": [ { "first": "Daryl", "middle": [ "E" ], "last": "Chubin", "suffix": "" }, { "first": "Soumyo", "middle": [ "D" ], "last": "Moitra", "suffix": "" } ], "year": 1975, "venue": "Social Studies of Science", "volume": "5", "issue": "4", "pages": "423--441", "other_ids": { "DOI": [ "10.1177/030631277500500403" ] }, "num": null, "urls": [], "raw_text": "Daryl E. Chubin and Soumyo D. Moitra. 1975. Con- tent analysis of references: Adjunct or alternative to citation counting? Social Studies of Science, 5(4):423-441.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Theory and practise of the g-index", "authors": [ { "first": "Leo", "middle": [ "Egghe" ], "last": "", "suffix": "" } ], "year": 2006, "venue": "Scientometrics", "volume": "69", "issue": "1", "pages": "131--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leo Egghe. 2006. Theory and practise of the g-index. Scientometrics, 69(1):131-152.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An index to quantify an individual's scientific research output", "authors": [ { "first": "Jorge", "middle": [], "last": "Hirsch", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the National Academy of Sciences of the United States of America", "volume": "102", "issue": "", "pages": "16569--72", "other_ids": { "DOI": [ "10.1073/pnas.0507655102" ] }, "num": null, "urls": [], "raw_text": "Jorge Hirsch. 2005. An index to quantify an individ- ual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102:16569-72.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Counting citations in texts rather than reference lists to improve the accuracy of assessing scientific contribution", "authors": [ { "first": "Wen-Ru", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Deng-Ke", "middle": [], "last": "Niu", "suffix": "" } ], "year": 2011, "venue": "BioEssays", "volume": "33", "issue": "10", "pages": "724--727", "other_ids": { "DOI": [ "10.1002/bies.201100067" ] }, "num": null, "urls": [], "raw_text": "Wen-Ru Hou, Ming Li, and Deng-Ke Niu. 2011. Counting citations in texts rather than reference lists to improve the accuracy of assessing scientific con- tribution. 
BioEssays, 33(10):724-727.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Measuring the evolution of a scientific field through citation frames", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Srijan", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Raine", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Mc-Farland", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Jurgens, Srijan Kumar, Raine Hoover, Dan Mc- Farland, and Dan Jurafsky. 2018. Measuring the evo- lution of a scientific field through citation frames. Transactions of the Association of Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Expertise as an aspect of author contributions", "authors": [ { "first": "Shubhanshu", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Brent", "middle": [ "D" ], "last": "Fegley", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Diesner", "suffix": "" }, { "first": "Vetle", "middle": [ "I" ], "last": "Torvik", "suffix": "" } ], "year": 2018, "venue": "WORKSHOP ON IN-FORMETRIC AND SCIENTOMETRIC RESEARCH (SIG/MET)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shubhanshu Mishra, Brent D. Fegley, Jana Diesner, and Vetle I. Torvik. 2018a. Expertise as an aspect of author contributions. In WORKSHOP ON IN- FORMETRIC AND SCIENTOMETRIC RESEARCH (SIG/MET), Vancouver.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Self-citation is the hallmark of productive authors, of any gender", "authors": [ { "first": "Shubhanshu", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Brent", "middle": [ "D" ], "last": "Fegley", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Diesner", "suffix": "" }, { "first": "Vetle", "middle": [ "I" ], "last": "Torvik", "suffix": "" } ], "year": 2018, "venue": "PLOS ONE", "volume": "13", "issue": "9", "pages": "", "other_ids": { "DOI": [ "10.1371/journal.pone.0195773" ] }, "num": null, "urls": [], "raw_text": "Shubhanshu Mishra, Brent D. Fegley, Jana Diesner, and Vetle I. Torvik. 2018b. Self-citation is the hall- mark of productive authors, of any gender. PLOS ONE, 13(9):e0195773.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Quantifying Conceptual Novelty in the Biomedical Literature. D-Lib magazine : the magazine of the Digital Library Forum", "authors": [ { "first": "Shubhanshu", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Vetle", "middle": [ "I" ], "last": "Torvik", "suffix": "" } ], "year": 2016, "venue": "", "volume": "22", "issue": "", "pages": "9--10", "other_ids": { "DOI": [ "10.1045/september2016-mishra" ] }, "num": null, "urls": [], "raw_text": "Shubhanshu Mishra and Vetle I. Torvik. 2016. Quanti- fying Conceptual Novelty in the Biomedical Litera- ture. 
D-Lib magazine : the magazine of the Digital Library Forum, 22(9-10).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Some results on the function and quality of citations", "authors": [ { "first": "J", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Poovanalingam", "middle": [], "last": "Moravcsik", "suffix": "" }, { "first": "", "middle": [], "last": "Murugesan", "suffix": "" } ], "year": 1975, "venue": "Social Studies of Science", "volume": "5", "issue": "1", "pages": "86--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Moravcsik and Poovanalingam Murugesan. 1975. Some results on the function and quality of citations. Social Studies of Science, 5(1):86-92.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Overview of the 2020 wosp 3c citation context classification task", "authors": [ { "first": "N", "middle": [], "last": "Suchetha", "suffix": "" }, { "first": "David", "middle": [], "last": "Kunnath", "suffix": "" }, { "first": "Bikash", "middle": [], "last": "Pride", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Gyawali", "suffix": "" }, { "first": "", "middle": [], "last": "Knoth", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 8th International Workshop on Mining Scientific Publications, ACM/IEEE Joint Conference on Digital Libraries", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchetha N. Kunnath, David Pride, Bikash Gyawali, and Petr Knoth. 2020. Overview of the 2020 wosp 3c citation context classification task. In Proceed- ings of The 8th International Workshop on Mining Scientific Publications, ACM/IEEE Joint Conference on Digital Libraries (JCDL 2020, Wuhan, China.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Muhammad Tanvir Afzal, and Hanan Aljuaid. 2020. Important citation identification by exploiting content and section-wise in-text citation count", "authors": [ { "first": "Shahzad", "middle": [], "last": "Nazir", "suffix": "" }, { "first": "Muhammad", "middle": [], "last": "Asif", "suffix": "" }, { "first": "Shahbaz", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Faisal", "middle": [], "last": "Bukhari", "suffix": "" } ], "year": null, "venue": "PLOS ONE", "volume": "15", "issue": "3", "pages": "1--19", "other_ids": { "DOI": [ "10.1371/journal.pone.0228885" ] }, "num": null, "urls": [], "raw_text": "Shahzad Nazir, Muhammad Asif, Shahbaz Ahmad, Faisal Bukhari, Muhammad Tanvir Afzal, and Hanan Aljuaid. 2020. Important citation identifica- tion by exploiting content and section-wise in-text citation count. 
PLOS ONE, 15(3):1-19.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Act: An annotation platform for citation typing at scale", "authors": [ { "first": "D", "middle": [], "last": "Pride", "suffix": "" }, { "first": "P", "middle": [], "last": "Knoth", "suffix": "" }, { "first": "J", "middle": [], "last": "Harag", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Pride, P. Knoth, and J. Harag. 2019. Act: An anno- tation platform for citation typing at scale. In 2019", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "ACM/IEEE Joint Conference on Digital Libraries (JCDL)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "329--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 329-330.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Measuring academic influence: Not all citations are equal", "authors": [ { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lemire", "suffix": "" }, { "first": "Andr\u00e9", "middle": [], "last": "Vellino", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodan Zhu, Peter D. Turney, Daniel Lemire, and Andr\u00e9 Vellino. 2015. Measuring academic in- fluence: Not all citations are equal. 
CoRR, abs/1501.06587.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Random Forest (RF): Random Forest model with 100 trees in the forest and boot-strapping trained on the tf-idf features.\u2022 Gradient Boosting Classifier (GBT): A gradient boosted classifier with 100 boosting stages trained on the tf-idf features. \u2022 Multi-layer Perceptron Classifier (MLP):A 1 hidden layer multi-layer perceptron classifier with 100 nodes and Relu activation, optimized using Adam optimizer with a learning rate of 0.001 and momentum of 0.99. \u2022 Multi-layer Perceptron Classifier (MLP-3): A 3 hidden layer multi-layer perceptron classifier with 256, 256, and 128 nodes in the first, second and third layers with Relu activation optimized using Adam optimizer with a learning rate of 0.001 and momentum of 0.99.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "text": "Results for the Purpose Sub-task. 4* implies that according to the leader board our entry is better than the 4th position entry. The non-highlighted rankings are made on the basis of the leader board private scores visible to us.", "type_str": "table", "content": "
S.No Model Private Public Rank
1GBT0.1440.1504*
2RF20.1440.1424*
3MLPC 0.1820.1763
6Best0.206-1
4.1 Replication of model performance after the leader board submission
", "html": null, "num": null }, "TABREF1": { "text": "Model evaluation scores on the test data on retraining models after leader board ranking.", "type_str": "table", "content": "
modelv1v2
testtraintesttrain
lr0.135 0.296 0.120 0.281
rf0.140 0.954 0.136 0.958
gbt0.151 0.719 0.148 0.770
mlp-3 0.186 0.995 0.177 1.000
mlp0.187 0.995 0.185 1.000
Second, in table 3 we investigate the per label performance of each model on the test set.
", "html": null, "num": null }, "TABREF2": { "text": "Per label model evaluation on the test data.", "type_str": "table", "content": "", "html": null, "num": null } } } }