AMSR/conferences_raw/akbc19/AKBC.ws_2019_Conference_SylxCx5pTQ.json
{"forum": "SylxCx5pTQ", "submission_url": "https://openreview.net/forum?id=SylxCx5pTQ", "submission_content": {"title": "MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts", "authors": ["Sunil Mohan", "Donghui Li"], "authorids": ["sunilm_k2@yahoo.com", "dli@chanzuckerberg.com"], "keywords": ["gold-standard corpus", "biomedical concept recognition", "named entity recognition and linking"], "TL;DR": "The paper introduces a new gold-standard corpus corpus of biomedical scientific literature manually annotated with UMLS concept mentions.", "abstract": "This paper presents the formal release of {\\em MedMentions}, a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. In addition to the full corpus, a sub-corpus of MedMentions is also presented, comprising annotations for a subset of UMLS 2017 targeted towards document retrieval. To encourage research in Biomedical Named Entity Recognition and Linking, data splits for training and testing are included in the release, and a baseline model and its metrics for entity linking are also described.", "archival status": "Archival", "subject areas": ["Natural Language Processing", "Information Extraction", "Applications: Biomedicine"], "pdf": "/pdf/b95a220f55c45a28f2eb4dd31cdf8eac5863e8c1.pdf", "paperhash": "mohan|medmentions_a_large_biomedical_corpus_annotated_with_umls_concepts", "_bibtex": "@inproceedings{\nmohan2019medmentions,\ntitle={MedMentions: A Large Biomedical Corpus Annotated with {\\{}UMLS{\\}} Concepts},\nauthor={Sunil Mohan and Donghui Li},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=SylxCx5pTQ}\n}"}, "submission_cdate": 1542459592042, "submission_tcdate": 1542459592042, "submission_tmdate": 1580939653766, "submission_ddate": null, "review_id": ["H1gAZoslfN", "SyejyHp1MN", "rkeJBbeEl4"], "review_url": ["https://openreview.net/forum?id=SylxCx5pTQ&noteId=H1gAZoslfN", "https://openreview.net/forum?id=SylxCx5pTQ&noteId=SyejyHp1MN", "https://openreview.net/forum?id=SylxCx5pTQ&noteId=rkeJBbeEl4"], "review_cdate": [1546857222252, 1546798306685, 1544974646961], "review_tcdate": [1546857222252, 1546798306685, 1544974646961], "review_tmdate": [1550269638156, 1550269637941, 1550269637682], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SylxCx5pTQ", "SylxCx5pTQ", "SylxCx5pTQ"], "review_content": [{"title": "MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts", "review": "The paper \u201cMedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts\u201d details the construction of a manually annotated dataset covering biomedical concepts. The novelty of this resource is its size in terms of abstracts and linked mentions as well as the size of the ontology applied (UMLS). \nThe manuscript is clearly written and easy to follow. Although other resources of this type already exist, the authors create a larger dataset covered by a larger ontology. 
Thus, allowing for the recognition of multiple medical entities at a greater scale than previously created datasets (e.g. CRAFT).\nDespite the clarity, this manuscript can improve the following:\nSection 2.3 \u2013 How many annotators were used?\nSection 2.4, point 2 - The process used to determine biomedical relevance is not detailed. Section 4.1 - No reason is given for the choice of TaggerOne. In addition, other datasets could have been tested with TaggerOne for comparison with the MedMentions ST21pv results.\nMisspelling and errors in section 2.3: \u201cRreviewers\u201d, \u201cIN MEDMENTIONS\u201d\nOverall, this paper fits the conference topics and provides a good contribution in the form of a large annotated biomedical resource.", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Solid biomedical entity extraction/linking dataset", "review": "In this paper the authors introduce MedMentions, a new dataset of biomedical abstracts (PubMed) labeled with biomedical concepts/entities. The concepts some from the broad-coverage UMLS ontology, which contains ~3 million concepts. They also annotate a subset of the data with a filtered version of UMLS more suitable for document retrieval. The authors present data splits and results using an out-of-the-box baseline model (semi-Markov model TaggerOne (Leaman and Lu, 2016)) for end-to-end biomedical entity/concept recognition and linking using MedMentions.\n\nThe paper describes the data and its curation in great detail. The explicit comparison to related corpora is great. This dataset is substantially larger (hundreds of thousands of annotated mentions vs. ones of thousands) and covers a broader range of concepts (previous works are each limited to a subset of biomedical concepts) than previous manually annotated data resources. MedMentions seems like a high-quality dataset that will accelerate important research in biomedical document retrieval and information extraction.\n\nSince one of the contributions is annotation that is supposed to help retrieval, it would be nice to include a baseline model that uses the data to do retrieval. Also, it looks like the baseline evaluation is only on the retrieval subset of the data. Why only evaluate on the subset and not the full dataset, if not doing retrieval?\n\nThis dataset appears to have been already been used in previous work (Murty et al., ACL 2018), but that work is not cited in this paper. That's fine -- I think the dataset deserves its own description paper, and the fact that the data have already been used in an ACL publication is a testament to the potential impact. But it seems like there should be some mention of that previous publication to resolve any confusion about whether it is indeed the same data.\n\nStyle/writing comments:\n- Would be helpful to include more details in the introduction, in particular about your proposed model/metrics. I'd like to know by the end of the introduction, at a high level, what type of model and metrics you're proposing.\n- replace \"~\" with a word (approximately, about, ...) in text\n- Section 2.3: capitalization typo \"IN MEDMENTIONS\"\n- Section 2.4, 3: \"Table\" should be capitalized in \"Table 6\"\n- Use \"and\" rather than \"/\" in text\n- Section 4: maybe just say \"training\" and \"development\" rather than \"Training\" and \"Dev\"\n- 4.1: Markov should be capitalized: semi-Markov\n- 4.1: reconsider use of scare quotes -- are they necessary? 
'lexicons', 'Training', \"dev', 'holdout'\n- 4.1: replace \"aka\" with \"i.e.\" or something else more formal. In general this section could use cleanup.\n- 4.1: last paragraph (describing metrics, mention-level vs. document-level) is very confusing, please clarify, especially since you claim that a contribution of the paper is to propose these metrics. Is it the case that mention-level F1 is essentially entity recognition and document-level is entity linking? An example could possibly help here.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Useful resource, but claims could be better supported, and the uniqueness of the resource better argued", "review": "The paper describes a new resource \"Med Mentions\" for entity linking of Pubmed abstracts, where entities are concepts in the UMLS type hierarchy -- for example \"Medical Device\".\n\nThe annotations were manually verified. (I assume the intention is to use this as a benchmark, but the paper does not say)\n\nThe paper is very rigorous in describing which concepts were considered, and which were pruned. Authors suggest to combine it with \"TaggerOne\" to obtain end-to-end entity recognition and linking system.\n\nIt is a little bit unclear what the main contribution of this paper is. Is it a benchmark for method development and evaluation (the paper mentions the train/dev/test split twice)? or do the authors propose a new system based on this benchmark?, or was the intent to test a range of baselines on this corpus (and what is the purpose?) -- I believe this lack of clarity could be easily addressed with a change in structure of headings. (Headings are currently not helping the reader, a more traditional paper outline would be helpful.)\n\nI appreciate that the paper lists a range of related benchmarks. However, I am missing a discussion of: where the advantage of MedMentions is in contrast to these benchmarks? What is MedMentions offering that none of the other benchmarks couldn't?\n\n\nIt is indisputable that a new resource provides value to the community, and therefore should be disseminated. However, the paper quality is more reminiscent of a technical report. A lot of space is dedicated to supplemental information (e.g. 
page 6) which would be better spent on a clear argumentation and motivation of the steps taken.\n\n", "rating": "6: Marginally above acceptance threshold", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["B1g-hdECG4", "ByeRRU4Rf4", "rJeYPVVAM4", "Syl5lzVAME"], "comment_cdate": [1547745448576, 1547744981872, 1547744353242, 1547743730156], "comment_tcdate": [1547745448576, 1547744981872, 1547744353242, 1547743730156], "comment_tmdate": [1547757386927, 1547744981872, 1547744378931, 1547743730156], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper7/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper7/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper7/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper7/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to AnonReviewer1", "comment": "The introduction has been updated to state that a new annotated gold standard resource ('benchmark') is being introduced.\n\nThe main contribution in this paper is the new 'benchmark' - a manually annotated resource, for training and evaluating biomedical concept recognition systems. This new resource addresses 2 key needs:\n(i) Larger annotated resource, useful for training today's more complex ML models\n(ii) Broader coverage of biology and medicine concepts, by targeting UMLS.\n\nCRAFT is the closest, in size and coverage of biology. MedMentions can be viewed as a supplement -- the sizes of the ontologies is so much greater than the available annotated corpora that for ML models more data will always be useful. MedMentions has the added benefit of better coverage of concepts from some biomedical disciplines, e.g. diseases and drugs. This comes from the use of UMLS as the target ontology.\n\nTaggerOne is an established model for biomedical concept recognition and is offered here as a baseline concept recognition model, that researchers developing new models may compare their results against.\n\nSection 3 on related annotated corpora has been expanded slightly. Main differences are the size of the MedMentions benchmark corpus, and through the use of UMLS as the target ontology, more comprehensive coverage of biomedical concepts."}, {"title": "Response to AnonReviewer3", "comment": "The Introduction has been expanded in the revised submission to make the motivation more explicit.\n\nIn this paper we wanted to describe the new resource we have created for training and evaluating biomedical concept recognition in scientific literature. The resource addresses 2 key needs:\n(i) Larger annotated resource, useful for training today's more complex ML models\n(ii) Broader coverage of biology and medicine concepts, by targeting UMLS.\n\nInformation Retrieval models is a research area on its own, and in this paper we just wanted to focus on concept recognition (CR). Metrics for CR models are quite standardized now, and the specific ones we use are described in a new section 4.1 added to the revised submission.\n\nCitation of the (Murty et al., ACL 2018) paper was excluded to anonymize the paper for review. It will be included in the final copy.\n\nThanks for the detailed proof-reading. 
These should now be fixed in the uploaded revision."}, {"title": "Responses to AnonReviewer2", "comment": "More detailed information on the annotation process and inter-annotator agreement is being gathered and will be published on the release site.\n\nSection 2.4 point 2 has been expanded to describe \"biomedical relevance\" as used to select semantic types.\n\nThe TaggerOne based model is now described in (expanded) section 4.2, and reason for its selection is also included. The TaggerOne paper referenced does include performance metrics on other biomedical datasets. Its demonstrated performance on recognizing biomedical entities from multiple types was our reason for using it as a baseline. Our goal at present is to simply offer baseline metrics for concept recognition models trained on MedMentions ST21pv.\n\nTypos fixed, thanks!"}, {"title": "Revised submission", "comment": "We would like to thank the reviewers for their detailed review. A revised version of the paper has been uploaded that should address most of the points raised. Various sections have been expanded in the revision, including: the introduction, to make the motivation more explicit; a clearer description of the metrics used for evaluating concept recognition models. See also responses to some specific questions below."}], "comment_replyto": ["rkeJBbeEl4", "SyejyHp1MN", "H1gAZoslfN", "SylxCx5pTQ"], "comment_url": ["https://openreview.net/forum?id=SylxCx5pTQ&noteId=B1g-hdECG4", "https://openreview.net/forum?id=SylxCx5pTQ&noteId=ByeRRU4Rf4", "https://openreview.net/forum?id=SylxCx5pTQ&noteId=rJeYPVVAM4", "https://openreview.net/forum?id=SylxCx5pTQ&noteId=Syl5lzVAME"], "meta_review_cdate": 1549913199501, "meta_review_tcdate": 1549913199501, "meta_review_tmdate": 1551128233379, "meta_review_ddate ": null, "meta_review_title": "Good paper about a valuable new data set", "meta_review_metareview": "The paper provides a valuable new resource to the community, a data set of 350,000 mentions from 4000 abstracts, all linked to UMLS concepts. MedMentions has some advantages over existing datasets that are either smaller in size, narrower in coverage of concepts, or only provide weakly supervised labels of the mentions (i.e., concepts are associated with an abstract, but not explicitly identified as mentions therein). The reviewers all agree that MedMentions would be a valuable resource for the community. The main criticism of the paper is that the motivation and contribution were not initially clear; however, the authors have addressed this criticism in the responses and have already updated the introduction to make the motivation and contribution more explicit.", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=SylxCx5pTQ&noteId=HkgvOhBkB4"], "decision": "Accept (Poster)"}
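The record above is a single JSON object, so it can be inspected with standard tooling. Below is a minimal sketch, assuming the record has been saved locally under the filename from the path at the top; the field names ("submission_content", "review_content", "rating", "decision") are taken directly from the record itself, but the script is illustrative and not part of the dataset.

```python
import json

# Load the raw AKBC 2019 review record (the local path is an assumption;
# adjust it to wherever the file lives).
with open("AKBC.ws_2019_Conference_SylxCx5pTQ.json") as f:
    record = json.load(f)

# Basic submission metadata.
print(record["submission_content"]["title"])  # paper title
print(record["decision"])                     # "Accept (Poster)"

# Each review's rating is stored as a string like "7: Good paper, accept";
# split off the leading integer to aggregate the scores.
ratings = [int(r["rating"].split(":")[0]) for r in record["review_content"]]
print(ratings, sum(ratings) / len(ratings))
```

On this record the script would print the title, the decision "Accept (Poster)", and the ratings [7, 7, 6] with a mean of approximately 6.67.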