{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:41.726129Z" }, "title": "An Empirical Study of Multi-Task Learning on BERT for Biomedical Text Mining", "authors": [ { "first": "Yifan", "middle": [], "last": "Peng", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institutes of Health Bethesda", "location": { "region": "MD", "country": "USA" } }, "email": "yifan.peng@nih.gov" }, { "first": "Qingyu", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institutes of Health Bethesda", "location": { "region": "MD", "country": "USA" } }, "email": "qingyu.chen@nih.gov" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institutes of Health Bethesda", "location": { "region": "MD", "country": "USA" } }, "email": "zhiyong.lu@nih.gov" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multi-task learning (MTL) has achieved remarkable success in natural language processing applications. In this work, we study a multi-task learning model with multiple decoders on varieties of biomedical and clinical natural language processing tasks such as text similarity, relation extraction, named entity recognition, and text inference. Our empirical results demonstrate that the MTL finetuned models outperform state-of-the-art transformer models (e.g., BERT and its variants) by 2.0% and 1.3% in biomedical and clinical domains, respectively. Pairwise MTL further demonstrates more details about which tasks can improve or decrease others. This is particularly helpful in the context that researchers are in the hassle of choosing a suitable model for new problems. The code and models are publicly available at https://github.com/ ncbi-nlp/bluebert.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Multi-task learning (MTL) has achieved remarkable success in natural language processing applications. In this work, we study a multi-task learning model with multiple decoders on varieties of biomedical and clinical natural language processing tasks such as text similarity, relation extraction, named entity recognition, and text inference. Our empirical results demonstrate that the MTL finetuned models outperform state-of-the-art transformer models (e.g., BERT and its variants) by 2.0% and 1.3% in biomedical and clinical domains, respectively. Pairwise MTL further demonstrates more details about which tasks can improve or decrease others. This is particularly helpful in the context that researchers are in the hassle of choosing a suitable model for new problems. The code and models are publicly available at https://github.com/ ncbi-nlp/bluebert.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multi-task learning (MTL) is a field of machine learning where multiple tasks are learned in parallel while using a shared representation (Caruana, 1997) . Compared with learning multiple tasks individually, this joint learning effectively increases the sample size for training the model, thus leads to performance improvement by increasing the generalization of the model (Zhang and Yang, 2017) . 
This is particularly helpful in applications such as medical informatics, where (labeled) datasets are hard to collect and thus cannot easily fulfill the data-hungry needs of deep learning.", "cite_spans": [ { "start": 138, "end": 153, "text": "(Caruana, 1997)", "ref_id": "BIBREF2" }, { "start": 374, "end": 396, "text": "(Zhang and Yang, 2017)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "MTL has long been studied in machine learning (Ruder, 2017) and has been used successfully across different applications, from natural language processing (Collobert and Weston, 2008; Luong et al., 2016; Liu et al., 2019c) and computer vision (Wang et al., 2009; Liu et al., 2019a) to health informatics (Zhou et al., 2011; He et al., 2016; Harutyunyan et al., 2019). MTL has also been studied in biomedical and clinical natural language processing (NLP) tasks such as named entity recognition and normalization, and relation extraction. However, most of these studies focus on either one task with multiple corpora (Khan et al., 2020; Wang et al., 2019b) or multiple tasks on a single corpus (Xue et al., 2019; Li et al., 2017; Zhao et al., 2019).", "cite_spans": [ { "start": 46, "end": 59, "text": "(Ruder, 2017)", "ref_id": "BIBREF30" }, { "start": 155, "end": 183, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF5" }, { "start": 184, "end": 203, "text": "Luong et al., 2016;", "ref_id": "BIBREF26" }, { "start": 204, "end": 222, "text": "Liu et al., 2019c)", "ref_id": "BIBREF25" }, { "start": 241, "end": 260, "text": "(Wang et al., 2009;", "ref_id": "BIBREF36" }, { "start": 261, "end": 279, "text": "Liu et al., 2019a;", "ref_id": "BIBREF22" }, { "start": 304, "end": 323, "text": "(Zhou et al., 2011;", "ref_id": "BIBREF45" }, { "start": 324, "end": 340, "text": "He et al., 2016;", "ref_id": "BIBREF12" }, { "start": 341, "end": 366, "text": "Harutyunyan et al., 2019)", "ref_id": "BIBREF11" }, { "start": 611, "end": 630, "text": "(Khan et al., 2020;", "ref_id": "BIBREF14" }, { "start": 631, "end": 650, "text": "Wang et al., 2019b)", "ref_id": "BIBREF37" }, { "start": 685, "end": 703, "text": "(Xue et al., 2019;", "ref_id": "BIBREF31" }, { "start": 704, "end": 720, "text": "Li et al., 2017;", "ref_id": "BIBREF19" }, { "start": 721, "end": 739, "text": "Zhao et al., 2019)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To bridge this gap, we investigate the use of MTL with transformer-based models (BERT) on multiple biomedical and clinical NLP tasks. We hypothesize that the performance of the models on individual tasks (especially in the same domain) can be improved via joint learning. Specifically, we compare three models: the independent single-task model (BERT), the model refined via MTL (called MT-BERT-Refinement), and the model fine-tuned for each task using MT-BERT-Refinement (called MT-BERT-Fine-Tune). We conduct extensive empirical studies on the Biomedical Language Understanding Evaluation (BLUE) benchmark, which offers a diverse range of text genres (biomedical and clinical text) and NLP tasks (such as text similarity, relation extraction, and named entity recognition). When learned and fine-tuned on the biomedical and clinical domains separately, we find that MTL achieved an improvement of over 2% on average and created new state-of-the-art results on four BLUE benchmark tasks. 
We also demonstrate the use of multi-task learning to obtain a single model that still produces state-of-the-art performance on all tasks. This is particularly helpful when researchers must choose a suitable model for new problems where training resources are limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contribution in this work is three-fold: (1) We conduct extensive empirical studies on 8 tasks from a diverse range of text genres. (2) We demonstrate that the MTL fine-tuned model (MT-BERT-Fine-Tune) achieved state-of-the-art performance on average and that there is still a benefit to utilizing the MTL refinement model (MT-BERT-Refinement). Pairwise MTL, where two tasks were trained jointly, further demonstrates which tasks can improve or degrade other tasks. (3) We make the code and pre-trained MT models publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. We first present related work in Section 2. Then, we describe the multi-task learning model in Section 3, followed by our experimental setup, results, and discussion in Section 4. We conclude with future work in the last section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multi-task learning (MTL) aims to improve the learning of a model for task t by using the knowledge contained in other tasks, where all or a subset of the tasks are related (Zhang and Yang, 2017) . It has long been studied and has been applied to neural networks in the natural language processing domain (Caruana, 1997) . Collobert and Weston (2008) proposed to jointly learn six tasks, such as part-of-speech tagging and language modeling, in a time-decay neural network. Changpinyo et al. (2018) summarized recent studies on applying MTL to sequence tagging tasks. Bingel and S\u00f8gaard (2017) and Mart\u00ednez Alonso and Plank (2017) focused on the conditions under which MTL leads to gains in NLP, and suggested that certain data features such as the learning curve and entropy distribution are probably better predictors of MTL gains.", "cite_spans": [ { "start": 169, "end": 191, "text": "(Zhang and Yang, 2017)", "ref_id": "BIBREF43" }, { "start": 301, "end": 316, "text": "(Caruana, 1997)", "ref_id": "BIBREF2" }, { "start": 319, "end": 346, "text": "Collobert and Weston (2008)", "ref_id": "BIBREF5" }, { "start": 468, "end": 492, "text": "Changpinyo et al. (2018)", "ref_id": "BIBREF3" }, { "start": 562, "end": 587, "text": "Bingel and S\u00f8gaard (2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In the biomedical and clinical domains, MTL has been studied mostly in two directions. One is to apply MTL to a single task with multiple corpora. For example, many studies focused on named entity recognition (NER) tasks (Crichton et al., 2017; Wang et al., 2019a,b). Khan et al. (2020) and Mehmood et al. (2019) integrated MTL into transformer-based networks (BERT), the state-of-the-art language representation model, and demonstrated promising results in extracting biomedical entities from the literature. Yang et al. (2019) extracted clinical named entities from Electronic Medical Records using an LSTM-CRF based model. Besides NER, Li and Ji (2019) proposed to use MTL on the relation classification task and Du et al. 
(2017) on biomedical semantic indexing. Xing et al. (2018) exploited domain-invariant knowledge to segment Chinese words in medical text.", "cite_spans": [ { "start": 221, "end": 244, "text": "(Crichton et al., 2017;", "ref_id": "BIBREF6" }, { "start": 245, "end": 266, "text": "Wang et al., 2019a,b)", "ref_id": null }, { "start": 271, "end": 289, "text": "Khan et al. (2020)", "ref_id": "BIBREF14" }, { "start": 296, "end": 317, "text": "Mehmood et al. (2019)", "ref_id": "BIBREF28" }, { "start": 518, "end": 536, "text": "Yang et al. (2019)", "ref_id": "BIBREF41" }, { "start": 646, "end": 662, "text": "Li and Ji (2019)", "ref_id": "BIBREF18" }, { "start": 769, "end": 787, "text": "Xing et al. (2018)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The other direction is to apply MTL to different tasks whose annotations come from a single corpus. Li et al. (2017) proposed a joint model to extract biomedical entities as well as their relations simultaneously and carried out experiments on either the adverse drug event corpus (Gurulingappa et al., 2012) or the bacteria biotope corpus (Del\u00e9ger et al., 2016). Shi et al. (2019) also jointly extracted entities and relations but focused on the BioCreative/OHNLP 2018 challenge regarding family history extraction. Xue et al. (2019) integrated the BERT language model into joint learning through a dynamic range attention mechanism and fine-tuned the NER and relation extraction tasks jointly on one in-house dataset of coronary arteriography reports.", "cite_spans": [ { "start": 102, "end": 118, "text": "Li et al. (2017)", "ref_id": "BIBREF19" }, { "start": 280, "end": 307, "text": "(Gurulingappa et al., 2012)", "ref_id": "BIBREF10" }, { "start": 339, "end": 361, "text": "(Del\u00e9ger et al., 2016)", "ref_id": "BIBREF7" }, { "start": 364, "end": 381, "text": "Shi et al. (2019)", "ref_id": "BIBREF31" }, { "start": 516, "end": 533, "text": "Xue et al. (2019)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Different from these works, we study jointly learning 8 different corpora from 4 different types of tasks. While MTL has brought significant improvements to medical tasks, no (or mixed) results have been reported when pre-training MTL models on different tasks across different corpora. To this end, we believe that our model can provide more insights about the conditions under which MTL leads to gains in BioNLP and clinical NLP, and sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The architecture of the MT-BERT model is shown in Figure 1 . The shared layers are based on BERT (Devlin et al., 2018) . The input X can be either a single sentence or a pair of sentences packed together by a special token [SEP] . If X is longer than the allowed maximum length (e.g., 128 tokens in BERT's base configuration), we truncate X to the maximum length. When X is a packed sentence pair, we truncate the longer sequence one token at a time. Similar to (Devlin et al., 2018) , two additional tokens are added at the start ([CLS]) and end ([SEP]) of X, respectively. 
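As an illustration of this input packing and truncation, the following minimal sketch assumes the Hugging Face transformers wordpiece tokenizer; the checkpoint name and example sentences are hypothetical, and this is not the authors' released code.

```python
from transformers import BertTokenizer

# Hypothetical checkpoint; any BERT-base vocabulary behaves the same way here.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

MAX_LEN = 128  # maximum length in the BERT base configuration

# Single sentence: [CLS] tokens ... [SEP], truncated to MAX_LEN.
single = tokenizer("Aspirin inhibits platelet aggregation.",
                   max_length=MAX_LEN, truncation=True)

# Sentence pair: [CLS] X1 [SEP] X2 [SEP]; "longest_first" removes one token
# at a time from the longer sequence, mirroring the truncation described above.
pair = tokenizer("The patient denies chest pain.",
                 "No angina reported on examination.",
                 max_length=MAX_LEN, truncation="longest_first")
```
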
Similar to (Lee et al., 2020), in the sequence tagging tasks, we split one sentence into several sub-sentences if it is longer than 30 words.", "cite_spans": [ { "start": 97, "end": 118, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" }, { "start": 216, "end": 221, "text": "[SEP]", "ref_id": null }, { "start": 462, "end": 483, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" }, { "start": 586, "end": 604, "text": "(Lee et al., 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 50, "end": 58, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Multi-task model", "sec_num": "3" }, { "text": "Figure 1: The architecture of the MT-BERT model. A single sentence or a sentence pair is fed into the shared BERT layers, and task-specific layers for sentence similarity, relation extraction, inference, and named entity recognition are placed on top.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task model", "sec_num": "3" }, { "text": "In the shared layers, the BERT model first converts the input sequence to a sequence of embedding vectors. Then, it applies attention mechanisms to gather contextual information. This semantic representation is shared across all tasks and is trained by our multi-task objectives. Finally, the BERT model encodes that information in a vector for each token (h_0, . . . , h_n).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task model", "sec_num": "3" }, { "text": "On top of the shared BERT layers, the task-specific layer uses a fully-connected layer for each task. We fine-tune the BERT model and the task-specific layers using multi-task objectives during the training phase. More details of the multi-task objectives in the BLUE benchmark are described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared layers", "sec_num": null }, { "text": "Suppose that h_0 is BERT's output for the token [CLS] in the input sentence pair (X_1, X_2). We use a fully-connected layer to compute the similarity score sim(X_1, X_2) = a h_0 + b, where sim(X_1, X_2) is a real value. This task is trained using the Mean Squared Error (MSE) loss: (y \u2212 sim(X_1, X_2))^2, where y is the real-valued similarity score of the sentence pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence similarity", "sec_num": "3.1" }, { "text": "This task extracts binary relations (two arguments) from sentences. 
After replacing the two arguments of interest in the sentence with pre-defined tags (e.g., GENE or DRUG), this task can be treated as a classification problem of a single sentence X. Suppose that h_0 is the output embedding of the token [CLS] ; the probability that a relation is labeled as class c is predicted by a fully-connected layer and a logistic regression with softmax: P(c|X) = softmax(a h_0 + b). This approach is widely used in transformer-based models (Devlin et al., 2018; Liu et al., 2019c) . This task is trained using the categorical cross-entropy loss: \u2212\u2211_{c \u2208 C} \u03b4(y_c = \u0177) log(P(c|X)), where \u03b4(y_c = \u0177) = 1 if the classification \u0177 of X is the correct ground truth for the class c \u2208 C; otherwise \u03b4(y_c = \u0177) = 0.", "cite_spans": [ { "start": 302, "end": 307, "text": "[CLS]", "ref_id": null }, { "start": 535, "end": 556, "text": "(Devlin et al., 2018;", "ref_id": "BIBREF8" }, { "start": 557, "end": 575, "text": "Liu et al., 2019c)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Relation extraction", "sec_num": "3.2" }, { "text": "After packing the premise sentence and the hypothesis into one sequence, this task can also be treated as a single-sentence classification problem. The aim is to find the logical relation R between premise P and hypothesis H. Suppose that h_0 is the output embedding of the token [CLS] in X = P \u2295 H; then P(R|P \u2295 H) = softmax(a h_0 + b). This task is trained using the categorical cross-entropy loss as above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.3" }, { "text": "The BERT model produces a feature vector sequence {h_i}_{i=0}^{n} with the same length as the input sequence X. The MTL model predicts the label sequence by using a softmax output layer, which scales the output for a label l \u2208 {1, 2, . . . , L} as follows: P(\u0177_i = j|X) = exp(h_i W_j) / \u2211_{l=1}^{L} exp(h_i W_l), where L is the total number of tags. This task is trained using the categorical cross-entropy loss: \u2212\u2211_i \u03b4(y_i = \u0177_i) log P(y_i|X).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named entity recognition", "sec_num": "3.4" }, { "text": "The training procedure for MT-BERT consists of three stages: (1) pretraining the BERT model, (2) refining it via multi-task learning (MT-BERT-Refinement), and (3) fine-tuning the model using the task-specific data (MT-BERT-Fine-Tune).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The training procedure", "sec_num": "3.5" }, { "text": "The pretraining stage follows that of BERT, using the masked language modeling technique (Devlin et al., 2018) . Here we used the base version. The maximum length of the input sequences is thus 128.", "cite_spans": [ { "start": 92, "end": 113, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.5.1" }, { "text": "In this step, we refine all layers in the model. 
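As a concrete illustration of this refinement step and of the task-specific heads from Sections 3.1-3.4, the following is a minimal PyTorch-style sketch; the class, function, and variable names are hypothetical, the encoder is loaded through the Hugging Face transformers API for brevity, and this is not the authors' released mt-dnn-based implementation (the corresponding pseudocode is given in Algorithm 1 below).

```python
import random
import torch
import torch.nn as nn
from transformers import BertModel

class MTBert(nn.Module):
    """Shared BERT encoder with one lightweight head per task (hypothetical names)."""
    def __init__(self, encoder_name, num_rel_classes, num_nli_classes, num_ner_tags):
        super().__init__()
        self.bert = BertModel.from_pretrained(encoder_name)   # shared layers
        hidden = self.bert.config.hidden_size
        self.sim_head = nn.Linear(hidden, 1)                   # 3.1 sentence similarity
        self.rel_head = nn.Linear(hidden, num_rel_classes)     # 3.2 relation extraction
        self.nli_head = nn.Linear(hidden, num_nli_classes)     # 3.3 inference
        self.ner_head = nn.Linear(hidden, num_ner_tags)        # 3.4 per-token tagging

    def forward(self, task, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                      # h_0, the [CLS] vector
        if task == "sts":
            return self.sim_head(cls).squeeze(-1)              # real-valued score (MSE loss)
        if task == "re":
            return self.rel_head(cls)                          # class logits (cross-entropy)
        if task == "nli":
            return self.nli_head(cls)
        if task == "ner":
            return self.ner_head(out.last_hidden_state)        # one logit vector per token

# Refinement loop in the spirit of Algorithm 1: merge mini-batches of all
# datasets, shuffle them each epoch, and apply the sampled task's own loss.
# Optimizer, learning rate, and gradient clipping follow Section 4.2.
def refine(model, task_batches, epochs=100, lr=5e-5):
    optimizer = torch.optim.Adamax(model.parameters(), lr=lr)
    mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()
    for _ in range(epochs):
        random.shuffle(task_batches)                           # D: list of (task, batch) pairs
        for task, batch in task_batches:
            logits = model(task, batch["input_ids"], batch["attention_mask"])
            if task == "sts":
                loss = mse(logits, batch["labels"].float())
            elif task == "ner":
                loss = ce(logits.flatten(0, 1), batch["labels"].flatten())
            else:                                              # "re" or "nli"
                loss = ce(logits, batch["labels"])
            optimizer.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
```
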
Algorithm 1 demonstrates the process of multi-task learning (Liu et al., 2019c) . We first initialize the shared layers with the pre-trained BERT model and randomly initialize the task-specific layer parameters. Then we create the dataset D by merging mini-batches of all the datasets. In each epoch, we randomly select a mini-batch b_t of task t from D. Then we update the model according to the task-specific objective of task t. As in (Liu et al., 2019c) , we use mini-batch based stochastic gradient descent to learn the parameters.", "cite_spans": [ { "start": 109, "end": 128, "text": "(Liu et al., 2019c)", "ref_id": "BIBREF25" }, { "start": 507, "end": 526, "text": "(Liu et al., 2019c)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Refining via Multi-task learning", "sec_num": "3.5.2" }, { "text": "Algorithm 1: Multi-task learning.\nInitialize model parameters \u03b8:\n  shared layer parameters with the pre-trained BERT model;\n  task-specific layer parameters randomly;\nCreate D by merging mini-batches of each dataset;\nfor epoch in 1, 2, ..., epoch_max do\n  Shuffle D;\n  for b_t in D do\n    Compute loss L(\u03b8) based on task t;\n    Compute gradient \u2207L(\u03b8);\n    Update model: \u03b8 = \u03b8 \u2212 \u03b7\u2207L(\u03b8);\n  end\nend", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Refining via Multi-task learning", "sec_num": "3.5.2" }, { "text": "We fine-tune the MT-BERT model trained in the previous stage by continuing to train all layers on each specific task. Provided that a task's dataset is not drastically different in context from the other datasets, the MT-BERT model will already have learned general features that are relevant to the specific problem. Specifically, we truncate the last layer (the softmax and linear layers) of MT-BERT, replace it with a new one, and then use a smaller learning rate to train the network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning MT-BERT", "sec_num": "3.5.3" }, { "text": "We evaluate the proposed MT-BERT on 8 tasks in the BLUE benchmark. We compare three types of models: (1) existing state-of-the-art BERT models fine-tuned directly on each task; (2) MT-BERT refined with multi-task training (MT-BERT-Refinement); and (3) MT-BERT with fine-tuning (MT-BERT-Fine-Tune).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We evaluate the performance of the models on the 8 datasets in the BLUE benchmark. Table 1 gives a summary of these datasets. Briefly, ClinicalSTS is a corpus of sentence pairs selected from Mayo Clinic's clinical data warehouse. The i2b2 2010 dataset was collected from three different hospitals and was annotated by medical practitioners for eight types of relations between problems and treatments (Uzuner et al., 2011) . MedNLI is a collection of sentence pairs selected from MIMIC-III (Shivade, 2017). For a fair comparison, we use the same training, development and test sets to train and evaluate the models. ShARe/CLEF is a collection of 299 de-identified clinical free-text notes from the MIMIC-II database (Suominen et al., 2013) . 
This corpus is for disease entity recognition.", "cite_spans": [ { "start": 408, "end": 429, "text": "(Uzuner et al., 2011)", "ref_id": "BIBREF34" }, { "start": 723, "end": 746, "text": "(Suominen et al., 2013)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 88, "end": 95, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "In the biomedical domain, the ChemProt corpus consists of 1,820 PubMed abstracts with chemical-protein interactions (Krallinger et al., 2017) . The DDI corpus is a collection of 792 texts selected from the DrugBank database and 233 Medline abstracts (Herrero-Zazo et al., 2013). These two datasets were used in the relation extraction task for various types of relations. BC5CDR is a collection of 1,500 PubMed titles and abstracts selected from the CTD-Pfizer corpus and was used in the named entity recognition task for chemical and disease entities (Li et al., 2016) .", "cite_spans": [ { "start": 108, "end": 133, "text": "(Krallinger et al., 2017)", "ref_id": "BIBREF16" }, { "start": 550, "end": 567, "text": "(Li et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Our implementation of MT-BERT is based on the work of (Liu et al., 2019c) . 1 We trained the model on one NVIDIA\u00ae V100 GPU using the PyTorch framework. We used the Adamax optimizer (Kingma and Ba, 2015) with a learning rate of 5e\u22125, a batch size of 32, a linear learning rate decay schedule with warm-up over 0.1, and a weight decay of 0.01 applied to every epoch of training, following (Liu et al., 2019c) . We used BioBERT (Lee et al., 2020) , the BlueBERT base model, and ClinicalBERT (Alsentzer et al., 2019) as the domain-specific language models 2 . All texts were tokenized using wordpieces and chopped to spans no longer than 128 tokens. We set the maximum number of epochs to 100 and the dropout rate of all the task-specific layers to 0.1. To avoid the exploding gradient problem, we clipped the gradient norm to 1. To fine-tune MT-BERT on specific tasks, we set the maximum number of epochs to 10 and used a learning rate of e\u22125.", "cite_spans": [ { "start": 54, "end": 73, "text": "(Liu et al., 2019c)", "ref_id": "BIBREF25" }, { "start": 76, "end": 77, "text": "1", "ref_id": null }, { "start": 391, "end": 410, "text": "(Liu et al., 2019c)", "ref_id": "BIBREF25" }, { "start": 432, "end": 450, "text": "(Lee et al., 2020)", "ref_id": "BIBREF17" }, { "start": 494, "end": 518, "text": "(Alsentzer et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "One of the most important criteria for building practical systems is fast adaptation to new domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "2 https://github.com/ncbi-nlp/bluebert", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "To evaluate the models on different domains, we trained MT-BERT models via MTL on the BLUE biomedical tasks and the clinical tasks, respectively. BlueBERT clinical is the base BlueBERT model pretrained on PubMed abstracts and MIMIC-III clinical notes, and fine-tuned for each BLUE task on task-specific data. The MT-models are the proposed models described in Section 3. 
We used the pre-trained BlueBERT clinical to initialize the shared layers and refined the model via MTL on the BLUE tasks (MT-BlueBERT-Refinement clinical ). We then kept fine-tuning this model for each BLUE task using task-specific data to obtain MT-BlueBERT-Fine-Tune clinical . Table 2 shows the results on the clinical tasks. MT-BlueBERT-Fine-Tune clinical created new state-of-the-art results on 2 tasks and pushed the benchmark average to 81.9%, which amounts to a 1.3% absolute improvement over BlueBERT clinical and a 1.2% absolute improvement over MT-BlueBERT-Refinement clinical . On the ShARe/CLEFE task, the model gained the largest improvement, 6%. On the MedNLI task, the MT model gained an improvement of 2.4%. On the remaining tasks, the MT model also performed well, reaching state-of-the-art performance within less than 1%. When comparing the models with and without fine-tuning on single datasets, Table 2 shows that the multi-task refinement model is similar to the single-task baselines on average. Considering that MT-BlueBERT-Refinement clinical is one model while BlueBERT clinical comprises 4 individual models, we believe the MT refinement model brings a benefit when researchers must choose a suitable model for new problems or problems with limited training data.", "cite_spans": [], "ref_spans": [ { "start": 634, "end": 641, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1269, "end": 1276, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In the biomedical tasks, we used BlueBERT biomedical as the baseline because it achieved the best performance on the BLUE benchmark. Table 3 shows similar results to the clinical tasks. MT-BlueBERT-Fine-Tune biomedical created new state-of-the-art results on 2 tasks and pushed the benchmark average to 83.6%, which amounts to a 2.0% absolute improvement over BlueBERT biomedical and a 2.1% absolute improvement over MT-BlueBERT-Refinement biomedical . On the DDI task, the model gained the largest improvement, 8.1%.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "To investigate which tasks are beneficial or harmful to others, we train on two tasks jointly using MT-BlueBERT-Refinement biomedical and MT-BlueBERT-Refinement clinical . Figure 2 gives the pairwise relationships. A directed green (or red, or grey) edge from s to t means s improves (or decreases, or has no effect on) t.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 180, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Pairwise MTL", "sec_num": "4.4.1" }, { "text": "In the clinical tasks, ShARe/CLEFE always benefits from multi-task learning with the remaining 3 tasks, as all of its incoming edges are green. One factor might be that ShARe/CLEFE is an NER task that generally requires more training data to fulfill the data-hungry need of the BERT model. ClinicalSTS helps MedNLI because the two are related in nature and their inputs are both a pair of sentences. MedNLI can help other tasks except ClinicalSTS, partially because the test set of ClinicalSTS is too small to reflect the changes. We also note that i2b2 2010 re can be both beneficial and harmful, depending on which other tasks it is trained with. 
One potential cause is that i2b2 2010 re was collected from three different hospitals and has the largest label set, with 8 classes. In the biomedical tasks, both the DDI and ChemProt tasks can be improved by MTL on other tasks, potentially because they are harder, have the largest label sets, and thus require more training data. Meanwhile, BC5CDR chemical and disease can barely be improved, potentially because they already have large datasets to fit the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pairwise MTL", "sec_num": "4.4.1" }, { "text": "First, we would like to compare multi-task learning on BERT variants: BioBERT, ClinicalBERT, and BlueBERT. In the clinical tasks (Table 4), MT-BlueBERT-Fine-Tune clinical outperforms the other models on all tasks. Comparing the MTL models using a BERT model pretrained on PubMed only (rows 2 and 3) with the one pretrained on the combination of PubMed and clinical notes (row 4) shows the impact of using clinical notes during the pretraining process. This observation is consistent with previous findings. On the other hand, MT-ClinicalBERT-Fine-Tune, which used ClinicalBERT during the pretraining, drops \u223c1.6% across the tasks. The differences between ClinicalBERT and BlueBERT are at least two-fold: (1) ClinicalBERT used \"cased\" text while BlueBERT used \"uncased\" text; and (2) the number of epochs used to continuously pretrain the model. Given that there are limited details of the pretraining of ClinicalBERT, further investigation may be necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MTL on BERT variants", "sec_num": "4.4.2" }, { "text": "In the biomedical tasks, Table 5 shows that MT-BioBERT-Fine-Tune and MT-BlueBERT-Fine-Tune biomedical reached comparable results and that pretraining on clinical notes has a negligible impact.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "MTL on BERT variants", "sec_num": "4.4.2" }, { "text": "Next, we also compare MT-BERT with its variants on all BLUE tasks. Table 6 shows that MT-BioBERT-Fine-Tune reached the best performance on average and MT-BlueBERT-Fine-Tune biomedical stays close behind. While mixed results were obtained when combining a variety of tasks in both the biomedical and clinical domains, we observed again that MTL models pretrained on biomedical literature perform better on biomedical tasks, and MTL models pretrained on both biomedical literature and clinical notes perform better on clinical tasks. These observations may suggest that it might be helpful to train separate deep neural networks on different types of text genres in BioNLP.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results on all BLUE tasks", "sec_num": "4.4.3" }, { "text": "In this work, we conduct an empirical study on MTL for biomedical and clinical tasks, which so far has mostly been studied with one or two tasks. Our results provide insights regarding domain adaptation and show the benefits of MTL refinement and fine-tuning. Based on the evaluation results, we recommend a combination of the MTL refinement and task-specific fine-tuning approach. When learned and fine-tuned on the biomedical and clinical domains separately, MT-BERT achieved improvements of 2.0% and 1.3%, respectively. 
Specifically, it brought significant improvements on 4 tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "There are two limitations to this work. First, our results on MTL training across all BLUE benchmark tasks show that MTL is not always effective. We are interested in further exploring the characterization of task relationships. For example, it is not clear whether there are data characteristics that help to determine its success (Mart\u00ednez Alonso and Plank, 2017; Changpinyo et al., 2018) . In addition, our results suggest that the model could benefit more from specific examples of some of the tasks in Table 1 . For example, it might be of interest not to use the BC5CDR corpus with the relation extraction task in the future. Second, we studied one approach to MTL by sharing the encoder between all tasks while keeping several task-specific decoders. Other approaches, such as fine-tuning only the task-specific layers, soft parameter sharing (Ruder, 2017) , and knowledge distillation (Liu et al., 2019b) , need to be investigated in the future.", "cite_spans": [ { "start": 326, "end": 359, "text": "(Mart\u00ednez Alonso and Plank, 2017;", "ref_id": "BIBREF27" }, { "start": 360, "end": 384, "text": "Changpinyo et al., 2018)", "ref_id": "BIBREF3" }, { "start": 845, "end": 858, "text": "(Ruder, 2017)", "ref_id": "BIBREF30" }, { "start": 884, "end": 903, "text": "(Liu et al., 2019b)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 506, "end": 513, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "While our work only scratches the surface of MTL in the medical domain, we hope it will shed light on the development of generalizable NLP models and on the task relations that can lead to gains from MTL models over single-task setups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "https://github.com/namisan/mt-dnn", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the Intramural Research Programs of the NIH National Library of Medicine. This work was also supported by the National Library of Medicine of the National Institutes of Health under award number K99LM013001. 
We are also grateful to the authors of mt-dnn (https://github.com/namisan/ mt-dnn) to make the codes publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Publicly available clinical BERT embeddings", "authors": [ { "first": "Emily", "middle": [], "last": "Alsentzer", "suffix": "" }, { "first": "John", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "William", "middle": [], "last": "Boag", "suffix": "" }, { "first": "Wei-Hung", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Di", "middle": [], "last": "Jindi", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Naumann", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Mcdermott", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "72--78", "other_ids": { "DOI": [ "10.18653/v1/W19-1909" ] }, "num": null, "urls": [], "raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Identifying beneficial task relations for multi-task learning in deep neural networks. In EACL", "authors": [ { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "164--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachim Bingel and Anders S\u00f8gaard. 2017. Identify- ing beneficial task relations for multi-task learning in deep neural networks. In EACL, pages 164-169.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multitask learning", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1997, "venue": "Machine Learning", "volume": "28", "issue": "", "pages": "41--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multi-task learning for sequence tagging: an empirical study", "authors": [ { "first": "Soravit", "middle": [], "last": "Changpinyo", "suffix": "" }, { "first": "Hexiang", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Sha", "suffix": "" } ], "year": 2018, "venue": "COLING", "volume": "", "issue": "", "pages": "2965--2977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soravit Changpinyo, Hexiang Hu, and Fei Sha. 2018. Multi-task learning for sequence tagging: an empiri- cal study. 
In COLING, pages 2965-2977.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A multi-task deep learning model for the classification of age-related macular degeneration", "authors": [ { "first": "Qingyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Tiarnan", "middle": [], "last": "Keenan", "suffix": "" }, { "first": "Shazia", "middle": [], "last": "Dharssi", "suffix": "" }, { "first": "Elvira", "middle": [], "last": "Agro", "suffix": "" }, { "first": "N", "middle": [], "last": "", "suffix": "" }, { "first": "Wai", "middle": [ "T" ], "last": "Wong", "suffix": "" }, { "first": "Emily", "middle": [ "Y" ], "last": "Chew", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2019, "venue": "Informatics Summit", "volume": "", "issue": "", "pages": "505--514", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingyu Chen, Yifan Peng, Tiarnan Keenan, Shazia Dharssi, Elvira Agro N, Wai T. Wong, Emily Y. Chew, and Zhiyong Lu. 2019. A multi-task deep learning model for the classification of age-related macular degeneration. AMIA 2019 Informatics Sum- mit, 2019:505-514.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A unified architecture for natural language processing", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "ICML", "volume": "", "issue": "", "pages": "160--167", "other_ids": { "DOI": [ "10.1145/1390156.1390177" ] }, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified ar- chitecture for natural language processing. In ICML, pages 160-167.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A neural network multi-task learning approach to biomedical named entity recognition", "authors": [ { "first": "Gamal", "middle": [], "last": "Crichton", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Billy", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2017, "venue": "BMC Bioinformatics", "volume": "18", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1186/s12859-017-1776-8" ] }, "num": null, "urls": [], "raw_text": "Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. 2017. A neural network multi-task learn- ing approach to biomedical named entity recogni- tion. BMC Bioinformatics, 18:368.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Overview of the bacteria biotope task at BioNLP shared task 2016", "authors": [ { "first": "Louise", "middle": [], "last": "Del\u00e9ger", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Bossy", "suffix": "" }, { "first": "Estelle", "middle": [], "last": "Chaix", "suffix": "" }, { "first": "Mouhamadou", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2016, "venue": "Proceedings of BioNLP Shared Task Workshop", "volume": "", "issue": "", "pages": "12--22", "other_ids": { "DOI": [ "10.18653/v1/W16-3002" ] }, "num": null, "urls": [], "raw_text": "Louise Del\u00e9ger, Robert Bossy, Estelle Chaix, Mouhamadou Ba, Arnaud Ferr\u00e9, Philippe Bessi\u00e8res, and Claire N\u00e9dellec. 2016. Overview of the bacteria biotope task at BioNLP shared task 2016. 
In Proceedings of BioNLP Shared Task Workshop, pages 12-22.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. arXiv preprint: 1810.04805.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A novel serial deep multi-task learning model for large scale biomedical semantic indexing", "authors": [ { "first": "Yongping", "middle": [], "last": "Du", "suffix": "" }, { "first": "Yunpeng", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Junzhong", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2017, "venue": "IEEE International Conference on Bioinformatics and Biomedicine (BIBM)", "volume": "", "issue": "", "pages": "533--537", "other_ids": { "DOI": [ "10.1109/bibm.2017.8217704" ] }, "num": null, "urls": [], "raw_text": "Yongping Du, Yunpeng Pan, and Junzhong Ji. 2017. A novel serial deep multi-task learning model for large scale biomedical semantic indexing. In IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 533-537.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports", "authors": [ { "first": "Harsha", "middle": [], "last": "Gurulingappa", "suffix": "" }, { "first": "Abdul", "middle": [ "Mateen" ], "last": "Rajput", "suffix": "" }, { "first": "Angus", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Juliane", "middle": [], "last": "Fluck", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Hofmann-Apitius", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Toldo", "suffix": "" } ], "year": 2012, "venue": "Journal of Biomedical Informatics", "volume": "45", "issue": "", "pages": "885--892", "other_ids": { "DOI": [ "10.1016/j.jbi.2012.04.008" ] }, "num": null, "urls": [], "raw_text": "Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a bench- mark corpus to support the automatic extraction of drug-related adverse effects from medical case re- ports. Journal of Biomedical Informatics, 45:885- 892.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multitask learning and benchmarking with clinical time series data", "authors": [ { "first": "Hrayr", "middle": [], "last": "Harutyunyan", "suffix": "" }, { "first": "Hrant", "middle": [], "last": "Khachatrian", "suffix": "" }, { "first": "David", "middle": [ "C" ], "last": "Kale", "suffix": "" }, { "first": "Greg", "middle": [ "Ver" ], "last": "Steeg", "suffix": "" }, { "first": "Aram", "middle": [], "last": "Galstyan", "suffix": "" } ], "year": 2019, "venue": "Scientific data", "volume": "6", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1038/s41597-019-0103-9" ] }, "num": null, "urls": [], "raw_text": "Hrayr Harutyunyan, Hrant Khachatrian, David C. Kale, Greg Ver Steeg, and Aram Galstyan. 
2019. Multi- task learning and benchmarking with clinical time series data. Scientific data, 6:96.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Novel applications of multitask learning and multiple output regression to multiple genetic trait prediction", "authors": [ { "first": "Dan", "middle": [], "last": "He", "suffix": "" }, { "first": "David", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "Laxmi", "middle": [], "last": "Parida", "suffix": "" } ], "year": 2016, "venue": "Bioinformatics", "volume": "32", "issue": "", "pages": "37--43", "other_ids": { "DOI": [ "10.1093/bioinformatics/btw249" ] }, "num": null, "urls": [], "raw_text": "Dan He, David Kuhn, and Laxmi Parida. 2016. Novel applications of multitask learning and multiple out- put regression to multiple genetic trait prediction. Bioinformatics (Oxford, England), 32:i37-i43.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The DDI corpus: an annotated corpus with pharmacological substances and drug-drug interactions", "authors": [ { "first": "Mar\u00eda", "middle": [], "last": "Herrero-Zazo", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Segura-Bedmar", "suffix": "" }, { "first": "Paloma", "middle": [], "last": "Mart\u00ednez", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Declerck", "suffix": "" } ], "year": 2013, "venue": "Journal of Biomedical Informatics", "volume": "46", "issue": "", "pages": "914--920", "other_ids": { "DOI": [ "10.1016/j.jbi.2013.07.011" ] }, "num": null, "urls": [], "raw_text": "Mar\u00eda Herrero-Zazo, Isabel Segura-Bedmar, Paloma Mart\u00ednez, and Thierry Declerck. 2013. The DDI corpus: an annotated corpus with pharmacological substances and drug-drug interactions. Journal of Biomedical Informatics, 46:914-920.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "MT-BioNER: multi-task learning for biomedical named entity recognition using deep bidirectional transformers", "authors": [ { "first": "Morteza", "middle": [], "last": "Muhammad Raza Khan", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Ziyadi", "suffix": "" }, { "first": "", "middle": [], "last": "Abdelhady", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muhammad Raza Khan, Morteza Ziyadi, and Mo- hamed AbdelHady. 2020. MT-BioNER: multi-task learning for biomedical named entity recognition us- ing deep bidirectional transformers. arXiv preprint: 2001.08904.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adam: a method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: a method for stochastic optimization. 
In International Conference on Learning Representations (ICLR), pages 1-15.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Marleen Rodenburg, Astrid Laegreid, Marius Doornenbal, Julen Oyarzabal, Analia Louren\u00e7o, and Alfonso Valencia", "authors": [ { "first": "Martin", "middle": [], "last": "Krallinger", "suffix": "" }, { "first": "Obdulia", "middle": [], "last": "Rabal", "suffix": "" }, { "first": "A", "middle": [], "last": "Saber", "suffix": "" }, { "first": "", "middle": [], "last": "Akhondi", "suffix": "" }, { "first": "Jes\u00fas", "middle": [], "last": "Mart\u00edn P\u00e9rez P\u00e9rez", "suffix": "" }, { "first": "Gael", "middle": [ "P\u00e9rez" ], "last": "Santamar\u00eda", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Rodr\u00edguez", "suffix": "" }, { "first": "", "middle": [], "last": "Tsatsaronis", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the BioCreative workshop", "volume": "", "issue": "", "pages": "141--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Krallinger, Obdulia Rabal, Saber A. Akhondi, Mart\u00edn P\u00e9rez P\u00e9rez, Jes\u00fas Santamar\u00eda, Gael P\u00e9rez Rodr\u00edguez, Georgios Tsatsaronis, Ander Intxau- rrondo, Jos\u00e9 Antonio L\u00f3pez1 Umesh Nandal, Erin Van Buel, Akileshwari Chandrasekhar, Mar- leen Rodenburg, Astrid Laegreid, Marius Doornen- bal, Julen Oyarzabal, Analia Louren\u00e7o, and Alfonso Valencia. 2017. Overview of the BioCreative VI chemical-protein interaction track. In Proceedings of the BioCreative workshop, pages 141-146.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2020, "venue": "Bioinformatics", "volume": "36", "issue": "", "pages": "1234--1240", "other_ids": { "DOI": [ "10.1093/bioinformatics/btz682" ] }, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics (Oxford, England), 36:1234-1240.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Syntax-aware multi-task graph convolutional networks for biomedical relation extraction", "authors": [ { "first": "Diya", "middle": [], "last": "Li", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)", "volume": "", "issue": "", "pages": "28--33", "other_ids": { "DOI": [ "10.18653/v1/D19-6204" ] }, "num": null, "urls": [], "raw_text": "Diya Li and Heng Ji. 2019. Syntax-aware multi-task graph convolutional networks for biomedical rela- tion extraction. 
In Proceedings of the Tenth Inter- national Workshop on Health Text Mining and Infor- mation Analysis (LOUHI 2019), pages 28-33.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A neural joint model for entity and relation extraction from biomedical text", "authors": [ { "first": "Fei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Donghong", "middle": [], "last": "Ji", "suffix": "" } ], "year": null, "venue": "BMC Bioinformatics", "volume": "18", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1186/s12859-017-1609-9" ] }, "num": null, "urls": [], "raw_text": "Fei Li, Meishan Zhang, Guohong Fu, and Donghong Ji. 2017. A neural joint model for entity and relation ex- traction from biomedical text. BMC Bioinformatics, 18:198.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "BioCreative V CDR task corpus: a resource for chemical disease relation extraction", "authors": [ { "first": "Jiao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yueping", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Robin", "middle": [ "J" ], "last": "Johnson", "suffix": "" }, { "first": "Daniela", "middle": [], "last": "Sciaky", "suffix": "" }, { "first": "Chih-Hsuan", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Leaman", "suffix": "" }, { "first": "Allan", "middle": [ "Peter" ], "last": "Davis", "suffix": "" }, { "first": "Carolyn", "middle": [ "J" ], "last": "Mattingly", "suffix": "" }, { "first": "Thomas", "middle": [ "C" ], "last": "Wiegers", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2016, "venue": "Database", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1093/database/baw068" ] }, "num": null, "urls": [], "raw_text": "Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation ex- traction. Database (Oxford), 2016.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A multi-task learning based approach to biomedical entity relation extraction", "authors": [ { "first": "Qingqing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhihao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ling", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hongfei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Kan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yijia", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "IEEE International Conference on Bioinformatics and Biomedicine (BIBM)", "volume": "", "issue": "", "pages": "680--682", "other_ids": { "DOI": [ "10.1109/bibm.2018.8621284" ] }, "num": null, "urls": [], "raw_text": "Qingqing Li, Zhihao Yang, Ling Luo, Lei Wang, Yin Zhang, Hongfei Lin, Jian Wang, Liang Yang, Kan Xu, and Yijia Zhang. 2018. A multi-task learning based approach to biomedical entity relation extrac- tion. 
In IEEE International Conference on Bioinfor- matics and Biomedicine (BIBM), pages 680-682.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "End-to-end multi-task learning with attention", "authors": [ { "first": "Shikun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Johns", "suffix": "" }, { "first": "Andrew", "middle": [ "J" ], "last": "Davison", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "1871--1880", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shikun Liu, Edward Johns, and Andrew J. Davison. 2019a. End-to-end multi-task learning with atten- tion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1871-1880.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Overview of the BioCreative/OHNLP 2018 family history extraction task", "authors": [ { "first": "Sijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yanshan", "middle": [], "last": "Majid Rastegar Mojarad", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Feichen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sunyang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Hongfang", "middle": [], "last": "Fu", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the BioCreative Workshop", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sijia Liu, Majid Rastegar Mojarad, Yanshan Wang, Liwei Wang, Feichen Shen, Sunyang Fu, and Hongfang Liu. 2018. Overview of the BioCre- ative/OHNLP 2018 family history extraction task. In Proceedings of the BioCreative Workshop, pages 1-5.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Improving multi-task deep neural networks via knowledge distillation for natural language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019b. Improving multi-task deep neural networks via knowledge distillation for natural lan- guage understanding. arXiv preprint: 1904.09482.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Multi-task deep neural networks for natural language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "4487--4496", "other_ids": { "DOI": [ "10.18653/v1/P19-1441" ] }, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019c. Multi-task deep neural networks for natural language understanding. 
In ACL, pages 4487-4496.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Multi-task sequence to sequence learning", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" } ], "year": 2016, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task se- quence to sequence learning. In ICLR.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "When is multitask learning effective? semantic sequence prediction under varying data conditions", "authors": [ { "first": "Alonso", "middle": [], "last": "H\u00e9ctor Mart\u00ednez", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2017, "venue": "EACL", "volume": "", "issue": "", "pages": "44--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "H\u00e9ctor Mart\u00ednez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic se- quence prediction under varying data conditions. In EACL, pages 44-53.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Multi-task learning applied to biomedical named entity recognition task", "authors": [ { "first": "Tahir", "middle": [], "last": "Mehmood", "suffix": "" }, { "first": "Alfonso", "middle": [ "E" ], "last": "Gerevini", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Lavelli", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Serina", "suffix": "" } ], "year": 2019, "venue": "Italian Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tahir Mehmood, Alfonso E Gerevini, Alberto Lavelli, and Ivan Serina. 2019. Multi-task learning applied to biomedical named entity recognition task. In Ital- ian Conference on Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets", "authors": [ { "first": "Yifan", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Shankai", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop on Biomedical Natural Language Processing (BioNLP)", "volume": "", "issue": "", "pages": "58--65", "other_ids": { "DOI": [ "10.18653/v1/W19-5006" ] }, "num": null, "urls": [], "raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the Workshop on Biomedical Natural Language Process- ing (BioNLP), pages 58-65.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "An overview of multi-task learning in", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2017, "venue": "deep neural networks", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. 
arXiv preprint: 1706.05098.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Family history information extraction via deep joint learning", "authors": [ { "first": "Xue", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Dehuan", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Yuanhang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qingcai", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "BMC medical informatics and decision making", "volume": "19", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1186/s12911-019-0995-5" ] }, "num": null, "urls": [], "raw_text": "Xue Shi, Dehuan Jiang, Yuanhang Huang, Xiaolong Wang, Qingcai Chen, Jun Yan, and Buzhou Tang. 2019. Family history information extraction via deep joint learning. BMC medical informatics and decision making, 19:277.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Mednli -a natural language inference dataset for the clinical domain", "authors": [ { "first": "Chaitanya", "middle": [], "last": "Shivade", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.13026/C2RS98" ] }, "num": null, "urls": [], "raw_text": "Chaitanya Shivade. 2017. Mednli -a natural language inference dataset for the clinical domain.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Overview of the ShARe/CLEF eHealth evaluation lab", "authors": [ { "first": "Hanna", "middle": [], "last": "Suominen", "suffix": "" }, { "first": "Sanna", "middle": [], "last": "Salanter\u00e4", "suffix": "" }, { "first": "Sumithra", "middle": [], "last": "Velupillai", "suffix": "" }, { "first": "Wendy", "middle": [ "W" ], "last": "Chapman", "suffix": "" }, { "first": "Guergana", "middle": [], "last": "Savova", "suffix": "" }, { "first": "Noemie", "middle": [], "last": "Elhadad", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Brett", "middle": [ "R" ], "last": "South", "suffix": "" }, { "first": "Danielle", "middle": [ "L" ], "last": "Mowery", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Gareth", "suffix": "" }, { "first": "", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2013, "venue": "International Conference of the Cross-Language Evaluation Forum for European Languages", "volume": "", "issue": "", "pages": "212--231", "other_ids": { "DOI": [ "10.1007/978-3-642-40802-1_24" ] }, "num": null, "urls": [], "raw_text": "Hanna Suominen, Sanna Salanter\u00e4, Sumithra Velupil- lai, Wendy W. Chapman, Guergana Savova, Noemie Elhadad, Sameer Pradhan, Brett R. South, Danielle L. Mowery, Gareth J. F. Jones, et al. 2013. Overview of the ShARe/CLEF eHealth evalu- ation lab 2013. 
In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 212-231.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "i2b2/VA challenge on concepts, assertions, and relations in clinical text", "authors": [ { "first": "Ozlem", "middle": [], "last": "Uzuner", "suffix": "" }, { "first": "Brett", "middle": [ "R" ], "last": "South", "suffix": "" }, { "first": "Shuying", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Scott", "middle": [ "L" ], "last": "Duvall", "suffix": "" } ], "year": 2010, "venue": "Journal of the American Medical Informatics Association : JAMIA", "volume": "18", "issue": "", "pages": "552--556", "other_ids": { "DOI": [ "10.1136/amiajnl-2011-000203" ] }, "num": null, "urls": [], "raw_text": "Ozlem Uzuner, Brett R. South, Shuying Shen, and Scott L. DuVall. 2011. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Asso- ciation : JAMIA, 18:552-556.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Multitask learning for biomedical named entity recognition with cross-sharing structure", "authors": [ { "first": "Xi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jiagao", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "BMC Bioinformatics", "volume": "20", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1186/s12859-019-3000-5" ] }, "num": null, "urls": [], "raw_text": "Xi Wang, Jiagao Lyu, Li Dong, and Ke Xu. 2019a. Multitask learning for biomedical named entity recognition with cross-sharing structure. BMC Bioinformatics, 20:427.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Boosted multi-task learning for face verification with applications to web image and video search", "authors": [ { "first": "Xiaogang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Cha", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhengyou", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2009, "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "142--149", "other_ids": { "DOI": [ "10.1109/cvpr.2009.5206736" ] }, "num": null, "urls": [], "raw_text": "Xiaogang Wang, Cha Zhang, and Zhengyou Zhang. 2009. Boosted multi-task learning for face verifi- cation with applications to web image and video search. 
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 142-149.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Cross-type biomedical named entity recognition with deep multi-task learning", "authors": [ { "first": "Xuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Marinka", "middle": [], "last": "Zitnik", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Curtis", "middle": [], "last": "Langlotz", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2019, "venue": "Bioinformatics", "volume": "35", "issue": "", "pages": "1745--1752", "other_ids": { "DOI": [ "10.1093/bioinformatics/bty869" ] }, "num": null, "urls": [], "raw_text": "Xuan Wang, Yu Zhang, Xiang Ren, Yuhao Zhang, Marinka Zitnik, Jingbo Shang, Curtis Langlotz, and Jiawei Han. 2019b. Cross-type biomedical named entity recognition with deep multi-task learning. Bioinformatics (Oxford, England), 35:1745-1752.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "MedSTS: a resource for clinical semantic textual similarity. Language Resources and Evaluation", "authors": [ { "first": "Yanshan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Naveed", "middle": [], "last": "Afzal", "suffix": "" }, { "first": "Sunyang", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Feichen", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Majid", "middle": [], "last": "Rastegar-Mojarad", "suffix": "" }, { "first": "Hongfang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "1--16", "other_ids": { "DOI": [ "10.1007/s10579-018-9431-1" ] }, "num": null, "urls": [], "raw_text": "Yanshan Wang, Naveed Afzal, Sunyang Fu, Liwei Wang, Feichen Shen, Majid Rastegar-Mojarad, and Hongfang Liu. 2018. MedSTS: a resource for clini- cal semantic textual similarity. Language Resources and Evaluation, pages 1-16.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Adaptive multi-task transfer learning for Chinese word segmentation in medical text", "authors": [ { "first": "Junjie", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Kenny", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Shaodian", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "COLING", "volume": "", "issue": "", "pages": "3619--3630", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Xing, Kenny Zhu, and Shaodian Zhang. 2018. Adaptive multi-task transfer learning for Chinese word segmentation in medical text. In COLING, pages 3619-3630.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Huanhuan Zhang, and Ping He. 2019. 
Fine-tuning BERT for joint entity and relation extraction in Chinese medical text", "authors": [ { "first": "Kui", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yangming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Ruan", "suffix": "" } ], "year": null, "venue": "2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)", "volume": "", "issue": "", "pages": "892--897", "other_ids": { "DOI": [ "10.1109/bibm47256.2019.8983370" ] }, "num": null, "urls": [], "raw_text": "Kui Xue, Yangming Zhou, Zhiyuan Ma, Tong Ruan, Huanhuan Zhang, and Ping He. 2019. Fine-tuning BERT for joint entity and relation extraction in Chi- nese medical text. In 2019 IEEE International Con- ference on Bioinformatics and Biomedicine (BIBM), pages 892-897.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Information extraction from electronic medical records using multitask recurrent neural network with contextual word embedding", "authors": [ { "first": "Jianliang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yuenan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Minghui", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Chenghua", "middle": [], "last": "Guan", "suffix": "" }, { "first": "Xiangfei", "middle": [], "last": "Yuan", "suffix": "" } ], "year": 2019, "venue": "Applied Sciences", "volume": "9", "issue": "18", "pages": "", "other_ids": { "DOI": [ "10.3390/app9183658" ] }, "num": null, "urls": [], "raw_text": "Jianliang Yang, Yuenan Liu, Minghui Qian, Chenghua Guan, and Xiangfei Yuan. 2019. Information extrac- tion from electronic medical records using multitask recurrent neural network with contextual word em- bedding. Applied Sciences, 9(18):3658.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Multitask learning for Chinese named entity recognition", "authors": [ { "first": "Qun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhenzhen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dawei", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Dongsheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yuxing", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2018, "venue": "Advances in Multimedia Information Processing -PCM", "volume": "", "issue": "", "pages": "653--662", "other_ids": { "DOI": [ "10.1007/978-3-030-00767-6_60" ] }, "num": null, "urls": [], "raw_text": "Qun Zhang, Zhenzhen Li, Dawei Feng, Dongsheng Li, Zhen Huang, and Yuxing Peng. 2018. Multitask learning for Chinese named entity recognition. In Advances in Multimedia Information Processing - PCM, pages 653-662.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A survey on multitask learning", "authors": [ { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Zhang and Qiang Yang. 2017. A survey on multi- task learning. 
arXiv preprint: 1707.08114.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "A neural multi-task learning framework to jointly model medical named entity recognition and normalization", "authors": [ { "first": "Sendong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Sicheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "817--824", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sendong Zhao, Ting Liu, Sicheng Zhao, and Fei Wang. 2019. A neural multi-task learning framework to jointly model medical named entity recognition and normalization. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 817-824.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A multi-task learning formulation for predicting disease progression", "authors": [ { "first": "Jiayu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jieping", "middle": [], "last": "Ye", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "814--822", "other_ids": { "DOI": [ "10.1145/2020408.2020549" ] }, "num": null, "urls": [], "raw_text": "Jiayu Zhou, Lei Yuan, Jun Liu, and Jieping Ye. 2011. A multi-task learning formulation for predicting dis- ease progression. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 814-822.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Pairwise MTL relationships in clinical (left) and biomedical (right) domains.", "num": null, "type_str": "figure", "uris": null }, "TABREF1": { "type_str": "table", "num": null, "html": null, "content": "
Model | ClinicalSTS | i2b2 2010 re | MedNLI | ShARe/CLEFE | Avg
BlueBERT clinical | 0.848 | 0.764 | 0.840 | 0.771 | 0.806
MT-BlueBERT-Refinement clinical | 0.822 | 0.745 | 0.835 | 0.826 | 0.807
MT-BlueBERT-Fine-Tune clinical | 0.840 | 0.760 | 0.846 | 0.831 | 0.819
", "text": "Summary of eight tasks in the BLUE benchmark. More details can be found in." }, "TABREF2": { "type_str": "table", "num": null, "html": null, "content": "
Model | ChemProt | DDI | BC5CDR disease | BC5CDR chemical | Avg
BlueBERT biomedical | 0.725 | 0.739 | 0.866 | 0.935 | 0.816
MT-BlueBERT-Refinement biomedical | 0.714 | 0.792 | 0.824 | 0.930 | 0.815
MT-BlueBERT-Fine-Tune biomedical | 0.729 | 0.820 | 0.865 | 0.931 | 0.836
", "text": "Test results on clinical tasks." }, "TABREF3": { "type_str": "table", "num": null, "html": null, "content": "", "text": "Test results on biomedical tasks." }, "TABREF4": { "type_str": "table", "num": null, "html": null, "content": "
Model | ChemProt | DDI | BC5CDR disease | BC5CDR chemical | Avg
MT-BioBERT-Fine-Tune | 0.729 | 0.812 | 0.851 | 0.928 | 0.830
MT-BlueBERT-Fine-Tune biomedical | 0.729 | 0.820 | 0.865 | 0.931 | 0.836
MT-BlueBERT-Fine-Tune clinical | 0.714 | 0.792 | 0.824 | 0.930 | 0.815
", "text": "Test results of MT-BERT-Fine-Tune models on clinical tasks." }, "TABREF5": { "type_str": "table", "num": null, "html": null, "content": "", "text": "Test results of MT-BERT-Fine-Tune models on biomedical tasks." }, "TABREF7": { "type_str": "table", "num": null, "html": null, "content": "
", "text": "Test results on eight BLUE tasks." } } } }