{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:47.695867Z" }, "title": "Evidence Inference 2.0: More Data, Better Models", "authors": [ { "first": "Jay", "middle": [], "last": "Deyoung", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University \u03a6 Kings College London", "location": {} }, "email": "deyoung.j@northeastern.edu" }, { "first": "Eric", "middle": [], "last": "Lehman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University \u03a6 Kings College London", "location": {} }, "email": "lehman.e@northeastern.edu" }, { "first": "Ben", "middle": [], "last": "Nye", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University \u03a6 Kings College London", "location": {} }, "email": "nye.b@northeastern.edu" }, { "first": "Iain", "middle": [ "J" ], "last": "Marshall", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University \u03a6 Kings College London", "location": {} }, "email": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastern University \u03a6 Kings College London", "location": {} }, "email": "b.wallace@northeastern.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "How do we most effectively treat a disease or condition? Ideally, we could consult a database of evidence gleaned from clinical trials to answer such questions. Unfortunately, no such database exists; clinical trial results are instead disseminated primarily via lengthy natural language articles. Perusing all such articles would be prohibitively time-consuming for healthcare practitioners; they instead tend to depend on manually compiled systematic reviews of medical literature to inform care. NLP may speed this process up, and eventually facilitate immediate consult of published evidence. The Evidence Inference dataset (Lehman et al., 2019) was recently released to facilitate research toward this end. This task entails inferring the comparative performance of two treatments, with respect to a given outcome, from a particular article (describing a clinical trial) and identifying supporting evidence. For instance: Does this article report that chemotherapy performed better than surgery for five-year survival rates of operable cancers? In this paper, we collect additional annotations to expand the Evidence Inference dataset by 25%, provide stronger baseline models, systematically inspect the errors that these make, and probe dataset quality. We also release an abstract only (as opposed to full-texts) version of the task for rapid model prototyping. The updated corpus, documentation, and code for new baselines and evaluations are available at http:", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "How do we most effectively treat a disease or condition? Ideally, we could consult a database of evidence gleaned from clinical trials to answer such questions. Unfortunately, no such database exists; clinical trial results are instead disseminated primarily via lengthy natural language articles. Perusing all such articles would be prohibitively time-consuming for healthcare practitioners; they instead tend to depend on manually compiled systematic reviews of medical literature to inform care. NLP may speed this process up, and eventually facilitate immediate consult of published evidence. 
The Evidence Inference dataset (Lehman et al., 2019) was recently released to facilitate research toward this end. This task entails inferring the comparative performance of two treatments, with respect to a given outcome, from a particular article (describing a clinical trial) and identifying supporting evidence. For instance: Does this article report that chemotherapy performed better than surgery for five-year survival rates of operable cancers? In this paper, we collect additional annotations to expand the Evidence Inference dataset by 25%, provide stronger baseline models, systematically inspect the errors that these make, and probe dataset quality. We also release an abstract only (as opposed to full-texts) version of the task for rapid model prototyping. The updated corpus, documentation, and code for new baselines and evaluations are available at http:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As reports of clinical trials continue to amass at rapid pace, staying on top of all current literature to inform evidence-based practice is next to impossible. As of 2010, about seventy clinical trial reports were published daily, on average (Bastian et al., 2010) . This has risen to over one hundred thirty trials per day. 1 Motivated by the rapid growth in clinical trial publications, there now exist a plethora of tools to partially automate the systematic review task (Marshall and Wallace, 2019) . However, efforts at fully integrating the PICO framework into this process have been limited (Eriksen and Frandsen, 2018) . What if we could build a database of Participants, 2 Interventions, Comparisons, and Outcomes studied in these trials, and the findings reported concerning these? If done accurately, this would provide direct access to which treatments the evidence supports. In the near-term, such technologies may mitigate the tedious work necessary for manual synthesis.", "cite_spans": [ { "start": 243, "end": 265, "text": "(Bastian et al., 2010)", "ref_id": "BIBREF1" }, { "start": 475, "end": 503, "text": "(Marshall and Wallace, 2019)", "ref_id": "BIBREF10" }, { "start": 599, "end": 627, "text": "(Eriksen and Frandsen, 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent efforts in this direction include the EBM-NLP project (Nye et al., 2018) , and Evidence Inference (Lehman et al., 2019) , both of which comprise annotations collected on reports of Randomized Control Trials (RCTs) from PubMed. 3 Here we build upon the latter, which tasks systems with inferring findings in full-text reports of RCTs with respect to particular interventions and outcomes, and extracting evidence snippets supporting these.", "cite_spans": [ { "start": 61, "end": 79, "text": "(Nye et al., 2018)", "ref_id": "BIBREF12" }, { "start": 105, "end": 126, "text": "(Lehman et al., 2019)", "ref_id": "BIBREF8" }, { "start": 234, "end": 235, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We expand the Evidence Inference dataset and evaluate transformer-based models (Vaswani et al., 2017; Devlin et al., 2018) on the task. 
Concretely, our contributions are:", "cite_spans": [ { "start": 79, "end": 101, "text": "(Vaswani et al., 2017;", "ref_id": "BIBREF15" }, { "start": 102, "end": 122, "text": "Devlin et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We describe the collection of an additional 2,503 unique 'prompts' (see Section 2) with matched full-text articles; this is a 25% expansion of the original evidence inference dataset that we will release. We additionally have collected an abstract-only subset of data intended to facilitate rapid iterative design of models, as working over full-texts can be prohibitively time-consuming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce and evaluate new models, achieving SOTA performance for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We ablate components of these models and characterize the types of errors that they tend to still make, pointing to potential directions for further improving models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the Evidence Inference task (Lehman et al., 2019) , a model is provided with a full-text article describing a randomized controlled trial (RCT) and a 'prompt' that specifies an Intervention (e.g., aspirin), a Comparator (e.g., placebo), and an Outcome (e.g., duration of headache). We refer to these as ICO prompts. The task then is to infer whether a given article reports that the Intervention resulted in a significant increase, significant decrease, or produced no significant difference in the Outcome, as compared to the Comparator. Our annotation process largely follows that outlined in Lehman et al. (2019) ; we summarize this briefly here. Data collection comprises three steps:", "cite_spans": [ { "start": 31, "end": 52, "text": "(Lehman et al., 2019)", "ref_id": "BIBREF8" }, { "start": 598, "end": 618, "text": "Lehman et al. (2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "2" }, { "text": "(1) prompt generation; (2) prompt and article annotation; and (3) verification. All steps are performed by Medical Doctors (MDs) hired through Upwork. 4 Annotators were divided into mutually exclusive groups performing these tasks, described below.", "cite_spans": [ { "start": 151, "end": 152, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "2" }, { "text": "Combining this new data with the dataset introduced in Lehman et al. (2019) yields in total 12,616 unique prompts stemming from 3,346 unique articles, increasing the original dataset by 25%. 5 To acquire the new annotations, we hired 11 doctors: 1 for prompt generation, 6 for prompt annotation, and 4 for verification.", "cite_spans": [ { "start": 55, "end": 75, "text": "Lehman et al. (2019)", "ref_id": "BIBREF8" }, { "start": 191, "end": 192, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "2" }, { "text": "In this collection phase, a single doctor is asked to read an article and identify triplets of interventions, comparators, and outcomes; we refer to these as ICO prompts. Each doctor is assigned a unique article, so as to not overlap with one another. 
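For illustration, the kind of record that prompt generation (together with the later annotation stages) ultimately yields can be sketched as follows; this is a hypothetical shape, and the field names and example values are ours, not the released schema:

    # Hypothetical shape of one ICO prompt record (illustrative only, not the actual dataset schema).
    prompt = {
        'intervention': 'chemotherapy',
        'comparator': 'surgery',
        'outcome': 'five-year survival',
        'label': 'significantly increased',   # or: significantly decreased / no significant difference / invalid prompt
        'evidence_span': 'a sentence copied verbatim from the article that supports the label',
    }
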
Doctors were asked to find a maximum of 5 prompts per article as a practical trade-off between the expense of exhaustive annotation and acquiring annotations over a variety of articles. This resulted in our collecting 3.77 prompts per article, on average. We asked doctors to derive at least 1 prompt from the body (rather than the abstract) of the article. A large difficulty of the task stems from the wide variety of treatments and outcomes used in the trials: 35.8% of interventions, 24.0% of comparators, and 81.6% of outcomes are unique to one another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prompt Generation", "sec_num": "2.1" }, { "text": "In addition to these ICO prompts, doctors were asked to report the relationship between the intervention and comparator with respect to the outcome, and cite what span from the article supports their reasoning. We find that 48.4% of the collected prompts can be answered using only the abstract. However, 63.0% of the evidence spans supporting judgments (provided by both the prompt generator and prompt annotator), are from outside of the abstract. Additionally, 13.6% of evidence spans cover more than one sentence in length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prompt Generation", "sec_num": "2.1" }, { "text": "Following the guidelines presented in Lehman et al. (2019) , each prompt was assigned to a single doctor. They were asked to report the difference between the specified intervention and comparator, with respect to the given outcome. In particular, options for this relationship were: \"increase\", \"decrease\", \"no difference\" or \"invalid prompt.\" Annotators were also asked to mark a span of text supporting their answers: a rationale. However, unlike Lehman et al. (2019) , here, annotators were not restricted via the annotation platform to only look at the abstract at first. They were free to search the article as necessary.", "cite_spans": [ { "start": 38, "end": 58, "text": "Lehman et al. (2019)", "ref_id": "BIBREF8" }, { "start": 450, "end": 470, "text": "Lehman et al. (2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Prompt Annotation", "sec_num": "2.2" }, { "text": "Because trials tend to investigate multiple interventions and measure more than one outcome, articles will usually correspond to multiple -potentially many -valid ICO prompts (with correspondingly different findings). In the data we collected, 62.9% of articles comprise at least two ICO prompts with different associated labels (for the same article).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prompt Annotation", "sec_num": "2.2" }, { "text": "Given both the answers and rationales of the prompt generator and prompt annotator, a third doctor -the verifier -was asked to determine the validity of both of the previous stages. 6 We estimate the accuracy of each task with respect to these verification labels. For prompt generation, answers Figure 1 : BERT to BERT pipeline. Evidence identification and classification stages are trained separately. The identifier is trained via negative samples against the positive instances, the classifier via only those same positive evidence spans. 
Decoding assigns a score to every sentence in the document, and the sentence with the highest evidence score is passed to the classifier.", "cite_spans": [ { "start": 182, "end": 183, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 296, "end": 304, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Verification", "sec_num": "2.3" }, { "text": "were 94.0% accurate, and rationales were 96.1% accurate. For prompt annotation, the answers were 90.0% accurate, and accuracy of the rationales was 88.8%. The drop in accuracy between prompt generation answers and prompt annotation answers is likely due to confusion with respect to the scope of the intervention, comparator, and outcome.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verification", "sec_num": "2.3" }, { "text": "We additionally calculated agreement statistics amongst the doctors across all stages, yielding a Krippendorf's \u03b1 of \u03b1 = 0.854. In contrast, the agreement between prompt generator and annotator (excluding verifier) had a \u03b1 = 0.784.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verification", "sec_num": "2.3" }, { "text": "We subset the articles and their content, yielding 9,680 of 24,686 annotations, or approximately 40%. This leaves 6375 prompts, 50.5% of the total.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract Only Subset", "sec_num": "2.4" }, { "text": "We consider a simple BERT-based (Devlin et al., 2018) pipeline comprising two independent models, as depicted in Figure 1 . The first identifies evidence bearing sentences within an article for a given ICO. The second model then classifies the reported findings for an ICO prompt using the evidence extracted by this first model. These models place a dense layer on top of representations yielded from (Gururangan et al., 2020), 7 a variant of RoBERTa (Liu et al., 2019) pre-trained over scientific corpora, 8 followed by a Softmax.", "cite_spans": [ { "start": 32, "end": 53, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF4" }, { "start": 452, "end": 470, "text": "(Liu et al., 2019)", "ref_id": null } ], "ref_spans": [ { "start": 113, "end": 121, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Specifically, we first perform sentence segmentation over full-text articles using ScispaCy (Neumann et al., 2019). We use this segmentation to recover evidence bearing sentences. We train an evidence identifier by learning to discriminate between evidence bearing sentences and randomly sampled non-evidence sentences. 9 We then train an evidence classifier over the evidence bearing sentences to characterize the trial's finding as reporting that the Intervention significantly decreased, did not significantly change, or significantly increased the Outcome compared to the Comparator in an ICO. When making a prediction for an (ICO, document) pair we use the highest scoring evidence sentence from the identifier, feeding this to the evidence classifier for a final result. Note that the evidence classifier is conditioned on the ICO frame; we prepend the ICO embedding (from Biomed RoBERTa) to the embedding of the identified evidence snippet. 
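To make this decode step concrete, the following is a minimal sketch of the two-stage pipeline at inference time, assuming Hugging Face-style sequence classification models; the checkpoint names and the score_sentence / predict helpers are illustrative stand-ins rather than our released code.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Hypothetical fine-tuned checkpoints for the two pipeline stages.
    tok = AutoTokenizer.from_pretrained('allenai/biomed_roberta_base')
    identifier = AutoModelForSequenceClassification.from_pretrained('evidence_identifier_ckpt')  # 2 labels
    classifier = AutoModelForSequenceClassification.from_pretrained('evidence_classifier_ckpt')  # 3 labels

    def score_sentence(ico, sentence):
        # p(evidence) for one sentence, conditioned on the ICO prompt text.
        enc = tok(ico, sentence, return_tensors='pt', truncation=True)
        with torch.no_grad():
            return identifier(**enc).logits.softmax(-1)[0, 1].item()

    def predict(ico, sentences):
        # Rank every sentence, keep the top-scoring one, then classify it.
        best = max(sentences, key=lambda s: score_sentence(ico, s))
        enc = tok(ico, best, return_tensors='pt', truncation=True)
        with torch.no_grad():
            label_id = int(classifier(**enc).logits.argmax(-1))
        # Class order here is illustrative.
        labels = ['significantly decreased', 'no significant difference', 'significantly increased']
        return best, labels[label_id]

Both stages are fed the ICO text alongside the candidate sentence, mirroring the conditioning described above.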
Reassuringly, removing this signal degrades performance (Table 1) .", "cite_spans": [ { "start": 320, "end": 321, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 1004, "end": 1013, "text": "(Table 1)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "For all models we fine-tuned the underlying BERT parameters. We trained all models using the Adam optimizer (Kingma and Ba, 2014) with a BERT learning rate 2e-5. We train these models for 10 epochs, keeping the best performing version on a nested held-out set with respect to macro-averaged f-scores. When training the evidence identifier, we experiment with different numbers of random samples per positive instance. We used Scikit-Learn (Pedregosa et al., 2011) for evaluation and diagnostics, and implemented all models in PyTorch (Paszke et al., 2019) . We additionally reproduce the end-to-end system from Lehman et al. (2019) : a gated recurrent unit (Cho et al., 2014) to encode the document, attention (Bahdanau et al., 2015) conditioned on the ICO, with the resultant vector (plus the ICO) fed into an MLP for a final significance decision.", "cite_spans": [ { "start": 439, "end": 463, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF14" }, { "start": 534, "end": 555, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF13" }, { "start": 611, "end": 631, "text": "Lehman et al. (2019)", "ref_id": "BIBREF8" }, { "start": 657, "end": 675, "text": "(Cho et al., 2014)", "ref_id": "BIBREF3" }, { "start": 710, "end": 733, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Our main results are reported in Table 1 . We make a few key observations. First, the gains over the prior state-of-the-art model -which was not BERT based -are substantial: 20+ absolute points in F-score, even beyond what one might expect to see shifting to large pre-trained models. 10 Second, conditioning on the ICO prompt is key; failing to do so results in substantial performance drops. Finally, we seem to have reached a plateau in terms of the performance of the BERT pipeline model; adding the newly collected training data does not budge performance (evaluated on the augmented test set). This suggests that to realize stronger performance here, we perhaps need a less naive architecture that better models the domain. We next probe specific aspects of our design and training decisions.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "Impact of Negative Sampling As negative sampling is a crucial part of the pipeline, we vary the number of samples and evaluate performance. We provide detailed results in Appendix A, but to summarize briefly: we find that two to four negative samples (per positive) performs the best for the end-to-end task, with little change in both AUROC and accuracy of the best fit evidence sentence. This is likely because the model needs only to maximize discriminative capability, rather than calibration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "Distribution Shift In addition to comparable Krippendorf-\u03b1 values computed above, we measure the impact of the new data on pipeline performance. We compare performance of the pipeline with all data \"Biomed RoBERTa (BR) Pipeline\" vs. 
just the old data \"Biomed RoBERTA (BR) BERT Pipeline 1.0\" in Table 1 . As performance stays relatively constant, we believe the new data 10 To verify the impact of architecture changes, we experiment with randomly initialized and fine-tuned BERTs. We find that these perform worse than the original models in all instances and elide more detailed results.", "cite_spans": [ { "start": 370, "end": 372, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 294, "end": 301, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "Cond to be well-aligned with the existing release. This also suggests that the performance of the current simple pipeline model may have plateaued; better performance perhaps requires inductive biases via domain knowledge or improved strategies for evidence identification. Oracle Evidence We report two types of Oracle evidence experiments -one using ground truth evidence spans \"Oracle spans\", the other using sentences for classification. In the former experiment, we choose an arbitrary evidence span 11 for each prompt for decoding. For the latter, we arbitrarily choose a sentence contained within a span. Both experiments are trained to use a matching classifier. We find that using a span versus a sentence causes a marginal change in score. Both diagnostics provide an upper bound on this model type, improve over the original Oracle baseline by approximately 10 points. Using Oracle evidence as opposed to a trained evidence identifier leaves an end-to-end performance gap of approximately 0.08 F1 score. Conditioning As the pipeline can optionally condition on the ICO, we ablate over both the ICO and the actual document text. We find that using the ICO alone performs about as effectively as an unconditioned end-to-end pipeline, 0.51 F1 score (Table 1) . However, when fed Oracle sentences, the unconditioned pipeline performance jumps to 0.80 F1. As shown in Table 3 (Appendix A), this large decrease in score can be attributed to the model losing the ability to identify the correct evidence sentence.", "cite_spans": [], "ref_spans": [ { "start": 1257, "end": 1266, "text": "(Table 1)", "ref_id": "TABREF2" }, { "start": 1374, "end": 1381, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "? P R F BR", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Mistake Breakdown We further perform an analysis of model mistakes in Table 2 . We find that the BERT-to-BERT model is somewhat better at identifying significantly decreased spans than it is at identifying spans for the significantly increased or no significant difference evidence classes. Spans for the no significant difference tend to be classified correctly, and spans for the significantly increased category tend to be confused in a similar pattern to the significantly decreased class. End-to-end mistakes are relatively balanced between all possible confusion classes.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Abstract Only Results We report a full suite of experiments over the abstracts-only subset in Appendix B. We find that the pipeline models perform similarly on the abstract-only subset; differing in score by less than .01F1. 
Somewhat surprisingly, we find that the abstracts oracle model falls behind the full document oracle model, perhaps due to a difference in language reporting general results vs. more detailed conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We have introduced an expanded version of the Evidence Inference dataset. We have proposed and evaluated BERT-based models for the evidence inference task (which entails identifying snippets of evidence for particular ICO prompts in long documents and then classifying the reported finding on the basis of these), achieving state of the art results on this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "With this expanded dataset, we hope to support further development of NLP for assisting Evidence Based Medicine. Our results demonstrate promise for the task of automatically inferring results from Randomized Control Trials, but still leave room for improvement. In our future work, we intend to jointly automate the identification of ICO triplets and inference concerning these. We are also keen to investigate whether pre-training on related scientific 'fact verification' tasks might improve performance (Wadden et al., 2020) . ", "cite_spans": [ { "start": 507, "end": 528, "text": "(Wadden et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "We repeat the experiments described in Section 4. Our primary findings are that the abstract-only task is easier and sixteen negative samples perform better than four. Otherwise results follow a similar trend to the full-document task. We document these in Table 4 , 5, 6 and Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 264, "text": "Table 4", "ref_id": "TABREF8" }, { "start": 276, "end": 284, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "B Abstract Only Results", "sec_num": null }, { "text": "We report original SciBERT results in Tables 7, 8, 9 and Figures 4, 5. at the time experiment configurations were determined. Biomed RoBERTa experiments use the v2.0 set for calibration. We find that Biomed RoBERTa generally performs better, with a notable exception in performance on abstracts-only Oracle span classification.", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 53, "text": "Tables 7, 8, 9", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "C SciBERT Results", "sec_num": null }, { "text": "We report SciBERT negative sampling results in Table 9 and Figure 4 .", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 9", "ref_id": "TABREF15" }, { "start": 59, "end": 67, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "C.1 Negative Sampling Results", "sec_num": null }, { "text": "We repeat the experiments described in Section 4 and report results in Tables 10, 11, 12 and Figure 5 . Our primary findings are that the abstract-only task is easier and eight negative samples perform better than four. Otherwise results follow a similar trend to the full-document task. Table 6 : Breakdown of the abstract-only conditioned Biomed RoBERTa pipeline model mistakes and performance by evidence class. ID Acc. is breakdown by final evidence truth. To the right is a confusion matrix for end-to-end predictions. Table 2 for SciBERT. 
Breakdown of the conditioned BERT pipeline model mistakes and performance by evidence class. ID Acc. is the \"identification accuracy\", or percentage of . To the right is a confusion matrix for end-to-end predictions. 'Sig ' indicates significantly decreased, 'Sig \u223c' indicates no significant difference, 'Sig \u2295' indicates significantly increased. Table 12 : Breakdown of the abstract-only conditioned SciBERT pipeline model mistakes and performance by evidence class. ID Acc. is breakdown by final evidence truth. To the right is a confusion matrix for end-to-end predictions.", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 5", "ref_id": null }, { "start": 288, "end": 295, "text": "Table 6", "ref_id": null }, { "start": 524, "end": 531, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 892, "end": 900, "text": "Table 12", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "C.2 Abstract Only Results", "sec_num": null }, { "text": "Figure 5: End to end pipeline scores on the abstractonly subset for different negative sampling strategies for SciBERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.2 Abstract Only Results", "sec_num": null }, { "text": "See https://ijmarshall.github.io/sote/. 2 We omit Participants in this work as we focus on the document level task of inferring study result directionality, and the Participants are inherent to the study, i.e., studies do not typically consider multiple patient populations.3 https://pubmed.ncbi.nlm.nih.gov/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://upwork.com.5 We use the first release of the data by Lehman et al., which included 10,137 prompts. A subsequent release contained 10,113 prompts, as the authors removed prompts where the answer and rationale were produced by different doctors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The verifier can also discard low-quality or incorrect prompts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "An earlier version of this work used SciBERT(Beltagy et al., 2019); we preserve these results in Appendix C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the [CLS] representations.9 We train this via negative sampling because the vast majority of sentences are not evidence-bearing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Evidence classification operates on a single sentence, but an annotator's selection is span based. 
Furthermore, the prompt annotation stage may produce different evidence spans than prompt generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous BioNLP reviewers.This work was supported by the National Science Foundation, CAREER award 1750978.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "We report negative sampling results for Biomed RoBERTa pipelines in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Negative Sampling Results", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Seventy-five trials and eleven systematic reviews a day: how will we ever keep up", "authors": [ { "first": "Hilda", "middle": [], "last": "Bastian", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Glasziou", "suffix": "" }, { "first": "Iain", "middle": [ "Chalmers" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "PLoS Med", "volume": "7", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hilda Bastian, Paul Glasziou, and Iain Chalmers. 2010. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med, 7(9):e1000326.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Scibert: Pretrained contextualized embeddings for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.10676" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. Scibert: Pretrained contextualized embeddings for scientific text. 
arXiv preprint arXiv:1903.10676.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": { "DOI": [ "10.3115/v1/D14-1179" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The impact of patient, intervention, comparison, outcome (PICO) as a search strategy tool on literature search quality: a systematic review", "authors": [ { "first": "Mette", "middle": [], "last": "Brandt Eriksen", "suffix": "" }, { "first": "Tove", "middle": [], "last": "Faber Frandsen", "suffix": "" } ], "year": 2018, "venue": "Journal of the Medical Library Association", "volume": "106", "issue": "4", "pages": "", "other_ids": { "DOI": [ "10.5195/jmla.2018.345" ] }, "num": null, "urls": [], "raw_text": "Mette Brandt Eriksen and Tove Faber Frandsen. 2018. The impact of patient, intervention, comparison, out- come (PICO) as a search strategy tool on literature search quality: a systematic review. Journal of the Medical Library Association, 106(4).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "2020. 
Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Inferring which medical treatments work from reports of clinical trials", "authors": [ { "first": "Eric", "middle": [], "last": "Lehman", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Deyoung", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3705--3717", "other_ids": { "DOI": [ "10.18653/v1/N19-1371" ] }, "num": null, "urls": [], "raw_text": "Eric Lehman, Jay DeYoung, Regina Barzilay, and By- ron C. Wallace. 2019. Inferring which medical treat- ments work from reports of clinical trials. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 3705-3717, Minneapolis, Minnesota. Association for Computa- tional Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Toward systematic review automation: A practical guide to using machine learning tools in research synthesis", "authors": [ { "first": "Iain", "middle": [ "J" ], "last": "Marshall", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "Systematic Reviews", "volume": "8", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1186/s13643-019-1074-9" ] }, "num": null, "urls": [], "raw_text": "Iain J. Marshall and Byron C. Wallace. 2019. Toward systematic review automation: A practical guide to using machine learning tools in research synthesis. Systematic Reviews, 8(1):163.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Iz Beltagy, and Waleed Ammar. 2019. 
Scispacy: Fast and robust models for biomedical natural language processing", "authors": [ { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "King", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature", "authors": [ { "first": "Benjamin", "middle": [], "last": "Nye", "suffix": "" }, { "first": "Junyi", "middle": [ "Jessy" ], "last": "Li", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Iain", "middle": [], "last": "Marshall", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "197--207", "other_ids": { "DOI": [ "10.18653/v1/P18-1019" ] }, "num": null, "urls": [], "raw_text": "Benjamin Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova, and Byron Wal- lace. 2018. A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 197-207, Melbourne, Australia. As- sociation for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. 
In Ad- vances in Neural Information Processing Systems, pages 8024-8035.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Fact or fiction: Verifying scientific claim", "authors": [ { "first": "David", "middle": [], "last": "Wadden", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Lucy", "middle": [ "Lu" ], "last": "Wang", "suffix": "" }, { "first": "Shanchuan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Madeleine", "middle": [], "last": "Van Zuylen", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2020, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Wadden, Kyle Lo, Lucy Lu Wang, Shanchuan Lin, Madeleine van Zuylen, Arman Cohan, and Han- naneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claim. In Association for Computational Linguistics (ACL).", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "End to end pipeline scores for different negative sampling strategies with Biomed RoBERTa." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "End to end pipeline scores on the abstractonly subset for different negative sampling strategies with Biomed RoBERTa." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "End to end pipeline scores for different negative sampling strategies for SciBERT." }, "TABREF0": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
[Figure 1 schematic; the PDF parser flattened the diagram into this table. Left panel: Evidence Identification (BERT + MLP), trained on supervised positive evidence sentences with sampled negatives. Right panel: ICO + Evidence Inference (BERT + MLP), fully supervised on the positive evidence spans. At test time every sentence is ranked by p(evidence); for the example prompt (does progesterone affect mortality w.r.t. placebo?) candidate sentences such as: mortality rate was significantly lower; adverse events were not found; intracranial pressure values were lower; results suggest further clinical trials are scored, and the argmax sentence is passed to the classifier, which predicts Decreased.]
" }, "TABREF2": { "html": null, "type_str": "table", "text": "", "num": null, "content": "" }, "TABREF4": { "html": null, "type_str": "table", "text": "Breakdown of the conditioned Biomed RoBERTa pipeline model mistakes and performance by evidence class. ID Acc. is the \"identification accuracy\", or percentage of . To the right is a confusion matrix for end-to-end predictions. 'Sig ' indicates significantly decreased, 'Sig \u223c' indicates no significant difference, 'Sig \u2295' indicates significantly increased.", "num": null, "content": "
" }, "TABREF6": { "html": null, "type_str": "table", "text": "Evidence Inference v2.0 evidence identification validation scores varying across negative sampling strategies using Biomed RoBERTa in the pipeline.", "num": null, "content": "
" }, "TABREF7": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
contains the Biomed RoBERTa numbers for comparison. Note that original SciBERT experiments use the evidence inference v1.0 dataset as v2.0 collection was incomplete
" }, "TABREF8": { "html": null, "type_str": "table", "text": "Classification Scores. Biomed RoBERTa Abstract only version ofTable 1. All evidence identification models trained with sixteen negative samples.", "num": null, "content": "
Neg. Samples | Cond? | AUROC | Top1 Acc
1 | yes | 0.983 | 0.647
2 | yes | 0.982 | 0.664
4 | yes | 0.981 | 0.680
8 | yes | 0.978 | 0.656
16 | yes | 0.980 | 0.673
1 | no | 0.944 | 0.351
2 | no | 0.953 | 0.373
4 | no | 0.947 | 0.334
8 | no | 0.938 | 0.273
16 | no | 0.947 | 0.308
" }, "TABREF9": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
Abstract only (v2.0) evidence identification validation scores varying across negative sampling strategies using Biomed RoBERTa.
" }, "TABREF12": { "html": null, "type_str": "table", "text": "Replica of Table 1 with both SciBERT and Biomed RoBERTa results. Classification Scores.", "num": null, "content": "
BR Pipeline: Biomed RoBERTa BERT Pipeline, SB Pipeline: SciBERT Pipeline. abs: Abstracts only. Baseline: model from Lehman et al. (2019). Diagnostic models: Baseline scores Lehman et al. (2019), BR Pipeline when trained using the Evidence Inference 1.0 data, BR classifier when presented with only the ICO element, an entire human selected evidence span, or a human selected evidence sentence. Full document BR models are trained with four negative samples; abstracts are trained with sixteen; Baseline oracle span results from Lehman et al. (2019). In all cases: 'Cond?' indicates whether or not the model had access to the ICO elements; P/R/F scores are macro-averaged over classes.
Ev. Cls | ID Acc. | Predicted Sig ⊖ | Predicted Sig ∼ | Predicted Sig ⊕
Sig ⊖ | .711 | .697 | .143 | .160
Sig ∼ | .643 | .076 | .838 | .086
Sig ⊕ | .635 | .146 | .141 | .713
" }, "TABREF13": { "html": null, "type_str": "table", "text": "Replica of", "num": null, "content": "" }, "TABREF15": { "html": null, "type_str": "table", "text": "Evidence Inference v1.0 evidence identification validation scores varying across negative sampling strategies for SciBERT.", "num": null, "content": "
Model | Cond? | P | R | F
BERT Pipeline | yes | .803 | .798 | .799
BERT Pipeline | no | .528 | .513 | .510
Diagnostics:
ICO Only | yes | .480 | .480 | .479
Oracle Spans | yes | .866 | .862 | .863
Oracle Sentence | yes | .848 | .842 | .844
Oracle Spans | no | .804 | .802 | .801
Oracle Sentence | no | .817 | .776 | .783
" }, "TABREF16": { "html": null, "type_str": "table", "text": "Classification Scores. SciBERT/Abstract only version ofTable 1. All evidence identification models trained with eight negative samples.", "num": null, "content": "
Neg. Samples | Cond? | AUROC | Top1 Acc
1 | yes | 0.980 | 0.573
2 | yes | 0.978 | 0.596
4 | yes | 0.977 | 0.623
8 | yes | 0.950 | 0.609
16 | yes | 0.975 | 0.615
1 | no | 0.946 | 0.340
2 | no | 0.939 | 0.342
4 | no | 0.912 | 0.286
8 | no | 0.938 | 0.313
16 | no | 0.940 | 0.282
" }, "TABREF17": { "html": null, "type_str": "table", "text": "Abstract only (v1.0) evidence identification validation scores varying across negative sampling strategies for SciBERT.", "num": null, "content": "
Ev. Cls | ID Acc. | Predicted Sig ⊖ | Predicted Sig ∼ | Predicted Sig ⊕
Sig ⊖ | .767 | .750 | .044 | .206
Sig ∼ | .686 | .092 | .816 | .092
Sig ⊕ | .591 | .109 | .064 | .827
" } } } }