{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:20:28.675920Z" }, "title": "Trialstreamer: Mapping and Browsing Medical Evidence in Real-Time", "authors": [ { "first": "Benjamin", "middle": [ "E" ], "last": "Nye", "suffix": "", "affiliation": {}, "email": "nye.b@husky.neu.edu" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "", "affiliation": {}, "email": "nenkova@seas.upenn.edu" }, { "first": "Iain", "middle": [ "J" ], "last": "Marshall", "suffix": "", "affiliation": {}, "email": "iain.marshall@kcl.ac.uk" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "", "affiliation": {}, "email": "b.wallace@northeastern.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce Trialstreamer, a living database of clinical trial reports. Here we mainly describe the evidence extraction component; this extracts from biomedical abstracts key pieces of information that clinicians need when appraising the literature, and also the relations between these. Specifically, the system extracts descriptions of trial participants, the treatments compared in each arm (the interventions), and which outcomes were measured. The system then attempts to infer which interventions were reported to work best by determining their relationship with identified trial outcome measures. In addition to summarizing individual trials, these extracted data elements allow automatic synthesis of results across many trials on the same topic. We apply the system at scale to all reports of randomized controlled trials indexed in MEDLINE, powering the automatic generation of evidence maps, which provide a global view of the efficacy of different interventions combining data from all relevant clinical trials on a topic. We make all code and models freely available 1 alongside a demonstration of the web interface. 2", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We introduce Trialstreamer, a living database of clinical trial reports. Here we mainly describe the evidence extraction component; this extracts from biomedical abstracts key pieces of information that clinicians need when appraising the literature, and also the relations between these. Specifically, the system extracts descriptions of trial participants, the treatments compared in each arm (the interventions), and which outcomes were measured. The system then attempts to infer which interventions were reported to work best by determining their relationship with identified trial outcome measures. In addition to summarizing individual trials, these extracted data elements allow automatic synthesis of results across many trials on the same topic. We apply the system at scale to all reports of randomized controlled trials indexed in MEDLINE, powering the automatic generation of evidence maps, which provide a global view of the efficacy of different interventions combining data from all relevant clinical trials on a topic. We make all code and models freely available 1 alongside a demonstration of the web interface. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The highest-quality evidence to inform healthcare practice comes from randomized controlled trials (RCTs). The results of the vast majority of these trials are communicated in the form of unstructured text in journal articles. 
Such results accumulate quickly, with over 100 articles describing RCTs published daily, on average. It is difficult for healthcare providers and patients to make sense of and keep up with this torrent of unstructured literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "Consider a patient who has been newly diagnosed with diabetes. She would like to consult (in collaboration with her healthcare provider) the available evidence regarding her treatment options. But she may not even be aware of what her treatment options are. Further, she may only care about particular outcomes (for instance, managing her blood pressure). Currently, it is not straightforward to retrieve and browse the evidence pertaining to a given condition, and in particular to ascertain which treatments are best supported for a specific outcome of interest. Figure 1: A portion of an example evidence map, showing Interventions and their inferred efficacy for Outcomes, given the condition (or Population) of Type II Diabetes. These maps are generated automatically using the NLP system we describe in this work.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 94, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "Trialstreamer is a first attempt to solve this problem, making evidence more browseable via NLP technologies. Figure 1 shows one of the key features of the system: an automatically generated evidence map that displays treatments (vertical axis) and outcomes (horizontal axis) identified for a condition specified by the user (here, Type II Diabetes). We elaborate on this particular example to illustrate the use of the system in Section 3.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 118, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "Trialstreamer aims to facilitate efficient evidence mapping with a user-friendly method of presenting a search across a broad field (here, a clinical condition) (Miake-Lye et al., 2016). We use NLP technologies to provide browseable, interactive overviews of large volumes of literature, on-demand. These may then inform subsequent, formal syntheses, or they may simply guide exploration of the primary literature. 
In this work we describe an open-source prototype that enables evidence mapping, using NLP to generate interactive overviews and visualizations of all RCT reports indexed by MEDLINE (and accessible via PubMed).", "cite_spans": [ { "start": 167, "end": 191, "text": "(Miake-Lye et al., 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "When mapping the evidence, one is generally interested in the following basic questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "\u2022 What interventions and outcomes have been studied for a given condition (population)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "\u2022 How much evidence exists, both in terms of the number of trials and the number of participants within these?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "\u2022 Does the evidence seem to support use of a particular intervention for a given condition?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "In the remainder of this paper we describe a prototype system that facilitates interactive exploration and mapping of the evidence base, with an emphasis on answering the above questions. The Trialstreamer mapping interface allows structured search over study populations, interventions/comparators, and outcomes, collectively referred to as PICO elements (Huang et al., 2006). It then displays key clinical attributes automatically extracted from the set of retrieved trials. This is made possible via NLP modules trained on recently released corpora (Nye et al., 2018; Lehman et al., 2019), described below.", "cite_spans": [ { "start": 356, "end": 376, "text": "(Huang et al., 2006)", "ref_id": "BIBREF3" }, { "start": 553, "end": 571, "text": "(Nye et al., 2018;", "ref_id": "BIBREF8" }, { "start": 572, "end": 592, "text": "Lehman et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Motivation", "sec_num": "1" }, { "text": "The evidence extraction pipeline is composed of four primary phases. First, text snippets that convey information about the trial's treatments (or interventions), outcome measures, and results are extracted from abstracts. Relations between these snippets are then inferred to identify which treatments were compared against each other, and which outcomes were measured for these comparisons. The extracted relations and evidence statements are then used to infer an overall conclusion about the comparative efficacy of the trial's interventions. 
Finally, the clinical concepts expressed in the extracted spans are normalized to a structured vocabulary in order to ground them in an existing knowledge base and to allow for aggregation across trials.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "A typical RCT report would pertain to a single clinical condition (the population), but might report multiple numerical results, each concerning a particular intervention, comparator, and outcome measure (which we describe as an ICO triplet).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "Because the end-to-end task combines NLP subtasks that are supported by different datasets, we collected new development and test sets (160 abstracts in all, exhaustively annotated) in order to evaluate the overall performance of our system. Two medical doctors 3 annotated these documents with all of the expressed entities, their mentions in the text, the relations between them, the conclusions reported for each ICO triplet, and the sentence that contains the supporting evidence for this (Lehman et al., 2019).", "cite_spans": [ { "start": 496, "end": 517, "text": "(Lehman et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "We were unable to obtain normalized concept labels for the ICO triplets due to the excessive difficulty of the task for the annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "Modeling decisions were informed by the 60-document development set, and we present evaluations of the first four information extraction modules with regard to the 100 documents in the unseen test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2" }, { "text": "Enabling search over RCT reports requires first compiling and indexing all such studies. This is, perhaps surprisingly, non-trivial. One may rely on \"Publication Type\" (PT) tags that codify study designs of articles, but these are manually applied by staff at the National Library of Medicine. Consequently, there is a lag between when a new study is published and when a PT tag is applied. Relying on these tags may thus hinder access to the most up-to-date evidence available. Therefore, we instead use an automated tagging system that uses machine learning to classify articles as RCT reports (or not). This model has been validated extensively in prior work (Marshall et al., 2018), and we do not describe it further here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2.1" }, { "text": "Next, we replace all abbreviations with their long forms using the Ab3P algorithm (Sohn et al., 2008). Using long forms has the complementary advantages of improving PICO labeling accuracy while also reducing the amount of context needed for prediction by downstream model components.", "cite_spans": [ { "start": 82, "end": 101, "text": "(Sohn et al., 2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2.1" }, { "text": "In order to identify the spans of text corresponding to the PICO elements of the trial, we use the EBM-NLP corpus (Nye et al., 2018). Figure 2: Overview of the evidence extraction pipeline, applied to all RCT article abstracts automatically identified.
Text spans are first extracted from these abstracts, then assembled into relations that reflect the structure of the trials, and finally used to infer the effect that interventions were reported to have on measured outcomes, as compared to the control treatment. EBM-NLP is a dataset comprising \u223c5,000 abstracts of RCT reports that have been annotated to demarcate textual spans describing the respective PICO elements. In addition to these spans, it contains more granular annotations on information within spans (e.g., specific Population attributes like age and sex).", "cite_spans": [ { "start": 114, "end": 132, "text": "(Nye et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 153, "end": 161, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "PICO Elements", "sec_num": null }, { "text": "We follow our prior work (Nye et al., 2018) in training a BiLSTM-CRF model that learns to jointly predict each PICO element using EBM-NLP. Recent work has shown the efficacy of BERT (Devlin et al., 2018) representations in this space; e.g., Beltagy et al. (2019) achieved state-of-the-art performance on EBM-NLP using this approach. Therefore, for all text encoding we use BioBERT (Lee et al., 2019), which was pretrained on PubMed documents. 4 Results for Interventions/Comparators and Outcomes on our test set are reported in Table 1. Since these spans will serve as inputs to downstream models in the pipeline, high recall at the expense of precision is preferable; we will allow subsequent classifiers to discard spurious spans. We achieve 0.87 recall at the clinical concept level. 4 For PICO tagging on EBM-NLP we found that BioBERT performed comparably to SciBERT (Beltagy et al., 2019).", "cite_spans": [ { "start": 25, "end": 43, "text": "(Nye et al., 2018)", "ref_id": "BIBREF8" }, { "start": 182, "end": 203, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF2" }, { "start": 380, "end": 398, "text": "(Lee et al., 2019)", "ref_id": "BIBREF4" }, { "start": 443, "end": 444, "text": "4", "ref_id": null }, { "start": 788, "end": 789, "text": "4", "ref_id": null }, { "start": 872, "end": 894, "text": "(Beltagy et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 528, "end": 535, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "PICO Elements", "sec_num": null }, { "text": "Evidence: F1 0.69, Precision 0.53, Recall 0.97 (Table 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "F1", "sec_num": null }, { "text": "In addition to PICO elements, we extract all sentences in the abstract that are predicted to contain evidence concerning the relative efficacy of an Intervention. Our training data for this model is sourced from the Evidence-Inference corpus (Lehman et al., 2019), which comprises \u223c10,000 annotated 'prompts' across \u223c2,400 unique full-text articles. Each prompt specifies an Intervention, a Comparator, and an Outcome. Doctors have annotated the prompts for each article, supplying an extracted snippet that presents the conclusion for these ICO elements, as well as an inference concerning whether the Outcome increased, decreased, or remained the same in the intervention group (as compared to the comparator group). We frame evidence identification as a sentence classification task, and train a linear classification layer on top of BioBERT outputs. Our positive training examples are the sentences containing evidence snippets in Evidence-Inference, and we draw an equal number of length-matched negatives randomly from the rest of the document.
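To make this classifier concrete, the following is a minimal sketch rather than our exact implementation: it assumes the HuggingFace transformers library and the dmis-lab/biobert-base-cased-v1.1 checkpoint (the specific model id is an assumption; the text above specifies only BioBERT), and the two-way classification head is randomly initialized here, so it would still need to be fine-tuned on the positive and negative examples just described.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed BioBERT checkpoint id; the paper specifies only 'BioBERT'.
MODEL_ID = 'dmis-lab/biobert-base-cased-v1.1'
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Two labels: evidence-bearing vs. not. This linear head is untrained here
# and would be fine-tuned on the Evidence-Inference positives/negatives.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)
model.eval()

def score_sentences(sentences):
    # Return P(evidence-bearing) for each abstract sentence.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        logits = model(**batch).logits
    return torch.softmax(logits, dim=-1)[:, 1].tolist()

print(score_sentences(['Patients were randomized to metformin or placebo.',
                       'HbA1c was significantly lower with metformin (p<0.01).']))

Sentences scoring highly under such a classifier are the ones carried forward to the relation extraction stage described next.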
As shown in Table 2, we achieve extremely high recall on the test set, but only middling precision. On inspection, many of these false positives are sentences from the conclusion that provide a high-level summary of the evidence, but are not the annotator-provided best evidence statement for any given ICO prompt.", "cite_spans": [ { "start": 242, "end": 263, "text": "(Lehman et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 1061, "end": 1068, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evidence Statements", "sec_num": null }, { "text": "To transform the extracted spans into a semantic representation of the trial that can be used to construct an evidence map, we must identify all instances of an outcome being reported, and infer which two treatments were being directly compared as the intervention and comparator with respect to said outcome. Finally, given each assembled ICO prompt, we can then predict the trial's findings regarding whether the outcome increased, decreased, or was not statistically different under the intervention versus the comparator. In effect, we are aiming to jointly extract ICO prompts and infer the directionality of the results reported concerning these, whereas prior work (Nye et al., 2018; Lehman et al., 2019) has considered these problems only in isolation.", "cite_spans": [ { "start": 672, "end": 690, "text": "(Nye et al., 2018;", "ref_id": "BIBREF8" }, { "start": 691, "end": 711, "text": "Lehman et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Relation Extraction", "sec_num": "2.3" }, { "text": "Our strategy for assembling ICO prompts is informed by the style in which results are commonly described in abstracts. When results are described in an article, the outcome is typically referenced explicitly, while the intervention and especially the comparator are often referenced either indirectly (\"Mean headache duration was similar between groups\"), or not at all (\"No significant difference was observed for recovery time\"). In the fully annotated dev set collected for this work, 87% of outcomes were described explicitly in an evidence span, while only 28% of treatments were explicit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation Extraction", "sec_num": "2.3" }, { "text": "Motivated by this observation, we use the (explicit) outcomes extracted from an evidence snippet as a starting point; for each of these outcomes, the associated intervention and comparator are then inferred. This has the significant advantage of explicitly linking each outcome to the evidence that will be used to infer the directionality of the reported finding. This also provides the end-user with an interpretable rationale for the inference concerning treatment efficacy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation Extraction", "sec_num": "2.3" }, { "text": "To link candidate extracted treatments to specific outcome mentions, we train a model that takes in a candidate treatment, an evidence statement containing the outcome, and the surrounding context from the document, and predicts whether the treatment is the participating intervention, the participating comparator, or not involved in this particular evaluation. We use the Evidence-Inference corpus to provide training examples for the first two classes, and manually generate negative samples for the final class. The negatives are constructed to mimic common errors that the treatment extraction module made on the dev set, including: mislabeling an outcome as a treatment; extracting compound phrases containing multiple individual treatments; and, finally, extracting spurious spans that don't represent a study descriptor. The model is a linear classifier on top of BioBERT. Inputs are constructed as [CLS] ... [SEP] sequences over the treatment, the evidence statement, and the context. We experimented with different slices of the document as the context, and achieved the highest dev performance using the first four sentences of the article.
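To make the input construction concrete, here is a minimal sketch; because the exact token template is garbled in the source, packing the candidate treatment and the evidence sentence into the first segment and the document context into the second is our assumption rather than a confirmed detail.

def build_role_input(treatment, evidence_sentence, abstract_sentences):
    # Context slice: per the dev-set experiments above, the first four
    # sentences of the article worked best.
    context = ' '.join(abstract_sentences[:4])
    # Return a text pair; a BERT-style tokenizer adds [CLS] and [SEP]
    # markers around and between the two segments when encoding a pair.
    return (treatment + ' [SEP] ' + evidence_sentence, context)

pair = build_role_input(
    'metformin',
    'HbA1c was significantly lower in the metformin group.',
    ['We conducted a randomized trial of metformin versus placebo.',
     'Participants were adults with Type II Diabetes.',
     'The primary outcome was HbA1c at 12 weeks.',
     'Secondary outcomes included weight and blood pressure.'])
# The encoded pair feeds a BioBERT classifier with three labels:
# intervention, comparator, or not involved.
print(pair)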
The class probabilities from this model are used to rank the possible interventions and comparators for each outcome, and when sufficiently probable candidates are identified we generate a complete ICO prompt.", "cite_spans": [ { "start": 339, "end": 344, "text": "[SEP]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Relation Extraction", "sec_num": "2.3" }, { "text": "After assembling all ICO prompts in a document, we feed them to a final classifier to predict the directionality of findings for each outcome, with respect to the given intervention and comparator. This model is trained over the Evidence-Inference corpus using the provided I, C, and O spans coupled with the sentences that contain the corresponding evidence statement. Empirically, we found that the signal for the classifier is dominated by the outcome text and evidence span, with almost no contribution from the intervention and comparator. This is unsurprising given the regularity of the language used to describe conclusions. The reported directionality of the result is almost exclusively framed with respect to the intervention, and only 4.0% of all outcomes ever have different results for another I+C linking within the same document. The best performing model input was simply [CLS] OUTCOME [SEP] EVIDENCE [SEP], and the results on the test set are reported in Table 3. Table 4: Performance for predicting an article's exact MeSH terms using the rule-based system, run on both the automatically extracted spans and the expert-provided test spans.", "cite_spans": [ { "start": 913, "end": 918, "text": "[SEP]", "ref_id": null } ], "ref_spans": [ { "start": 969, "end": 976, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 979, "end": 986, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Relation Extraction", "sec_num": "2.3" }, { "text": "In order to standardize the language used to categorize the articles with respect to their PICO elements, we turn to the structured vocabulary provided by the National Library of Medicine (NLM) in the form of Medical Subject Heading (MeSH) terms. This resource codifies a comprehensive set of medical concepts into an ontology that includes their descriptions, properties, and the structured relationships between them. Each article in the MEDLINE database maintained by the NLM is annotated with the relevant MeSH terms by expert library scientists (subject to the same lag that necessitates an RCT classifier instead of relying on annotated Publication Types).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalizing PICO Terms", "sec_num": "2.4" }, { "text": "To induce relevant MeSH terms for an extracted text span, we reproduced the method described in the Metamap Lite paper (Demner-Fushman et al., 2017) to extract MeSH terms describing the PICO elements. In short, we generated a large dictionary of synonyms for medical terms algorithmically using data from the UMLS Metathesaurus, with synonyms being matched to unique identifiers pertaining to concepts in the MeSH vocabulary.
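To make the dictionary-matching step concrete, here is a minimal, illustrative sketch: the toy synonym table stands in for the much larger dictionary derived from the UMLS Metathesaurus, the MeSH identifiers are shown purely for illustration, and greedy longest-synonym-first matching is our assumption rather than Metamap Lite's exact strategy.

# Toy stand-in for the UMLS-derived synonym dictionary.
SYNONYM_TO_MESH = {
    'type 2 diabetes': 'D003924',   # Diabetes Mellitus, Type 2
    'type ii diabetes': 'D003924',
    'metformin': 'D008687',
}

def map_span_to_mesh(span_text):
    # Return the MeSH unique identifiers whose synonyms occur in the
    # extracted PICO span, matching longer synonyms first.
    text = span_text.lower()
    hits = set()
    for synonym in sorted(SYNONYM_TO_MESH, key=len, reverse=True):
        if synonym in text:
            hits.add(SYNONYM_TO_MESH[synonym])
            text = text.replace(synonym, ' ')  # avoid nested re-matches
    return hits

# Yields the identifiers for Diabetes Mellitus, Type 2 and Metformin.
print(map_span_to_mesh('adults with Type II diabetes taking metformin'))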
We used this dictionary to map matching strings in our extracted PICO text to MeSH terms, yielding a set of normalized concepts describing each of the population, intervention, and outcome spans in the documents.", "cite_spans": [ { "start": 119, "end": 148, "text": "(Demner-Fushman et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Normalizing PICO Terms", "sec_num": "2.4" }, { "text": "To evaluate the accuracy of this approach, we compare the MeSH terms produced by our system against those provided by the NLM for the 191 articles that comprise the test set for EBM-NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalizing PICO Terms", "sec_num": "2.4" }, { "text": "The test articles are provided with an average of 14.8 MeSH terms per article, while our system induces 14.0 terms on average. The strictest evaluation for this module is to require exact matches between the predicted MeSH terms and the official MEDLINE terms, a daunting task given the 30,000 possible labels we have to choose from. However, because the concepts in the ontology exist in varying levels of specificity (for example, Migraine with Aura is a subset of Migraine Disorders), it is often the case that the predicted MeSH term is sufficiently close to the provided MeSH term for practical purposes, but differs in the level of specificity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalizing PICO Terms", "sec_num": "2.4" }, { "text": "To better characterize the performance of our approach, we therefore also consider relaxing the equivalence criteria to include matching immediate parents or children in the MeSH hierarchy. This modification results in a 42% relative increase in recall and a 23% increase in precision, as shown in Table 4.", "cite_spans": [], "ref_spans": [ { "start": 298, "end": 305, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Normalizing PICO Terms", "sec_num": "2.4" }, { "text": "We observe that while the absolute accuracy is not high, this technique generally captures the key terms for the PICO elements. The most common mistakes, shown in Table 5, mostly involve missing age or publication type terms, and systematic differences between the general MeSH terms commonly applied to articles and those our system produces (for example, we might apply Patients rather than Humans).", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 170, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Normalizing PICO Terms", "sec_num": "2.4" }, { "text": "A more sophisticated alignment between the way MeSH terms are applied by experts and the terms produced by our system has the potential to improve the overall effectiveness of the tool; we intend to pursue this in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalizing PICO Terms", "sec_num": "2.4" }, { "text": "To illustrate the envisioned use of our automatic mapping system, we return to the example we began with at the outset of this paper: seeking evidence concerning treatment of Type II Diabetes. To begin, the user specifies a condition (Population) of interest. We rely on Medical Subject Headings (MeSH) terms, 5 which, as discussed above, is a structured vocabulary maintained by the NLM. We allow users to enter a search string and provide auto-complete options from the MeSH vocabulary. Users can additionally provide interventions or outcomes of interest to further narrow the search.
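To sketch how such filters might be applied to the extracted, MeSH-normalized trial records, consider the following minimal example; the record schema and field names are illustrative assumptions, not Trialstreamer's actual database layout.

# Toy MeSH-normalized trial records.
trials = [
    {'pmid': '1', 'population': {'Diabetes Mellitus, Type 2'},
     'interventions': {'Metformin'}, 'outcomes': {'Glycated Hemoglobin A'}},
    {'pmid': '2', 'population': {'Migraine Disorders'},
     'interventions': {'Aspirin'}, 'outcomes': {'Pain'}},
]

def search(records, population=None, interventions=(), outcomes=()):
    # Within a field, terms are OR-combined; across fields they are
    # AND-combined, mirroring the and/or operators of the interface.
    results = []
    for t in records:
        if population and population not in t['population']:
            continue
        if interventions and not (t['interventions'] & set(interventions)):
            continue
        if outcomes and not (t['outcomes'] & set(outcomes)):
            continue
        results.append(t['pmid'])
    return results

print(search(trials, population='Diabetes Mellitus, Type 2',
             interventions=['Metformin', 'Insulin']))  # ['1']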
We show an example of a constructed set of filters in Figure 3.", "cite_spans": [], "ref_spans": [ { "start": 643, "end": 651, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Illustrative Example", "sec_num": "3" }, { "text": "Once a set of search terms is specified, relevant RCTs are retrieved from the comprehensive and up-to-date database. 6 The interface then displays counts of unique interventions and outcomes covered by the retrieved trials. Each bar in these plots can be clicked to explicitly include that concept in the search terms, allowing for a data-driven approach to building up the search parameters via iterative refinement.", "cite_spans": [ { "start": 117, "end": 118, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Illustrative Example", "sec_num": "3" }, { "text": "At this point, the evidence map shown in Figure 1 is also displayed, providing a summary of the evidence available for the effectiveness of the selected interventions with respect to their co-occurring outcomes. The user can mouse over plot elements to view tooltips that include snippets of contributing evidence from the underlying abstracts, or click through to browse these texts annotated with all of the extracted information, as shown in Figure 4. 6 We update this database nightly by scanning MEDLINE for new RCT reports using our RCT classifier.", "cite_spans": [ { "start": 455, "end": 456, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 41, "end": 47, "text": "Figure", "ref_id": null }, { "start": 444, "end": 452, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Illustrative Example", "sec_num": "3" }, { "text": "To evaluate the system's utility for a real-world task, we provided the tool to a team of researchers at Cures Within Reach for Cancer (CWR4C). 7 Domain experts reviewed the extracted ICO conclusions and automatically generated plots for a randomly selected subset of documents pertaining to cancer trials, a domain that is particularly challenging given the prevalence of complex compound interventions that often share individual components between trial arms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Study", "sec_num": "4" }, { "text": "The reviewers were asked to evaluate the types of mistakes made by the system as well as the overall precision and recall of the extracted conclusions for each document. Across 21 documents, average precision was 54% and average recall was 75%, and the team expressed excitement about the efficacy of the system for their purposes. CWR4C has continued to work with this tool as a source of information about cancer-related clinical trials.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Study", "sec_num": "4" }, { "text": "We have presented the evidence extraction component of Trialstreamer, an open-source prototype that performs end-to-end identification of published RCT reports, extracts key elements from the texts (intervention and outcome descriptions), and performs relation extraction between these, i.e., attempts to determine which intervention was reported to work for which outcomes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We use this pipeline to provide fast, on-demand overviews of all published evidence pertaining to a condition of interest.
Moving forward, we hope to refine the linking of extracted snippets to structured vocabularies, and to run a more comprehensive user study evaluating the use of the system in practice by different types of users. We also hope to develop a joint extraction and inference model, rather than relying on the current pipelined approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "https://github.com/bepnye/evidence extraction/ 2 http://bit.ly/trialstreamer", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Hired via Upwork (http://www.upwork.com).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.ncbi.nlm.nih.gov/mesh", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.cwr4c.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was funded in part by the National Institutes of Health (NIH) under the National Library of Medicine (NLM) grant 2R01LM012086, and by the National Science Foundation (NSF) CAREER award 1750978.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Scibert: Pretrained contextualized embeddings for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.10676" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. Scibert: Pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Metamap lite: an evaluation of a new java implementation of metamap", "authors": [ { "first": "Dina", "middle": [], "last": "Demner-Fushman", "suffix": "" }, { "first": "Willie", "middle": [ "J" ], "last": "Rogers", "suffix": "" }, { "first": "Alan", "middle": [ "R" ], "last": "Aronson", "suffix": "" } ], "year": 2017, "venue": "Journal of the American Medical Informatics Association", "volume": "24", "issue": "4", "pages": "841--844", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dina Demner-Fushman, Willie J Rogers, and Alan R Aronson. 2017. Metamap lite: an evaluation of a new java implementation of metamap. Journal of the American Medical Informatics Association, 24(4):841-844.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding.
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Evaluation of pico as a knowledge representation for clinical questions", "authors": [ { "first": "Xiaoli", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Dina", "middle": [], "last": "Demner-Fushman", "suffix": "" } ], "year": 2006, "venue": "AMIA annual symposium proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoli Huang, Jimmy Lin, and Dina Demner-Fushman. 2006. Evaluation of pico as a knowledge representation for clinical questions. In AMIA annual symposium proceedings, volume 2006, page 359. American Medical Informatics Association.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. CoRR, abs/1901.08746.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Inferring Which Medical Treatments Work from Reports of Clinical Trials", "authors": [ { "first": "Eric", "middle": [], "last": "Lehman", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Deyoung", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "3705--3717", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Lehman, Jay DeYoung, Regina Barzilay, and Byron C. Wallace. 2019. Inferring Which Medical Treatments Work from Reports of Clinical Trials. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 3705-3717.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Machine learning for identifying randomized controlled trials: an evaluation and practitioner's guide", "authors": [ { "first": "Iain", "middle": [ "J" ], "last": "Marshall", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Noel-Storr", "suffix": "" }, { "first": "Joël", "middle": [], "last": "Kuiper", "suffix": "" }, { "first": "James", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2018, "venue": "Research synthesis methods", "volume": "9", "issue": "4", "pages": "602--614", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iain J Marshall, Anna Noel-Storr, Joël Kuiper, James Thomas, and Byron C Wallace. 2018.
Machine learning for identifying randomized controlled trials: an evaluation and practitioner's guide. Research synthesis methods, 9(4):602-614.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "What is an evidence map? a systematic review of published evidence maps and their definitions, methods, and products", "authors": [ { "first": "Isomi", "middle": [ "M" ], "last": "Miake-Lye", "suffix": "" }, { "first": "Susanne", "middle": [], "last": "Hempel", "suffix": "" }, { "first": "Roberta", "middle": [], "last": "Shanman", "suffix": "" }, { "first": "Paul", "middle": [ "G" ], "last": "Shekelle", "suffix": "" } ], "year": 2016, "venue": "Syst. Rev", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isomi M Miake-Lye, Susanne Hempel, Roberta Shanman, and Paul G Shekelle. 2016. What is an evidence map? a systematic review of published evidence maps and their definitions, methods, and products. Syst. Rev., 5:28.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature", "authors": [ { "first": "Benjamin", "middle": [], "last": "Nye", "suffix": "" }, { "first": "Junyi", "middle": [ "Jessy" ], "last": "Li", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Iain", "middle": [ "J" ], "last": "Marshall", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the conference. Association for Computational Linguistics. Meeting", "volume": "2018", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain J Marshall, Ani Nenkova, and Byron C Wallace. 2018. A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2018, page 197. NIH Public Access.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Abbreviation definition identification based on automatic precision estimates", "authors": [ { "first": "Sunghwan", "middle": [], "last": "Sohn", "suffix": "" }, { "first": "Donald", "middle": [ "C" ], "last": "Comeau", "suffix": "" }, { "first": "Won", "middle": [], "last": "Kim", "suffix": "" }, { "first": "W", "middle": [ "John" ], "last": "Wilbur", "suffix": "" } ], "year": 2008, "venue": "BMC bioinformatics", "volume": "9", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunghwan Sohn, Donald C Comeau, Won Kim, and W John Wilbur. 2008. Abbreviation definition identification based on automatic precision estimates. BMC bioinformatics, 9(1):402.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "View of a collected set of concepts used to specify trials of interest. The search interface allows concepts to be combined using and/or operators.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "Detailed view of selected abstracts that contribute to the evidence map.
These are automatically annotated with all extracted information.", "type_str": "figure" }, "TABREF0": { "type_str": "table", "html": null, "num": null, "text": "Figure 2 diagram (rendered here as a table): the extraction pipeline applied to a mock abstract, with stages Label Spans (Interventions, Outcomes), Extract ICO Relations (rank candidate intervention/comparator pairs per outcome), and Infer Conclusions (increased, decreased, or didn't affect, each with a supporting evidence snippet); the abstract text in the original figure is placeholder (lorem ipsum).", "content": "
" }, "TABREF2": { "type_str": "table", "html": null, "num": null, "text": "Macro-averaged scores for ICO span prediction at both the token and clinical entity level.", "content": "" }, "TABREF3": { "type_str": "table", "html": null, "num": null, "text": "Performance for identifying evidence-bearing sentences.", "content": "
" }, "TABREF5": { "type_str": "table", "html": null, "num": null, "text": "Per-class prediction scores for each outcome in the test set.", "content": "
" }, "TABREF8": { "type_str": "table", "html": null, "num": null, "text": "Ten most common over-and under-predicted MeSH terms for the test set of 191 articles.", "content": "" } } } }