{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:11.394885Z" }, "title": "Evaluating the Utility of Model Configurations and Data Augmentation on Clinical Semantic Textual Similarity", "authors": [ { "first": "Yuxia", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne Victoria", "location": { "country": "Australia" } }, "email": "yuxiaw@student.unimelb.edu.au" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne Victoria", "location": { "country": "Australia" } }, "email": "" }, { "first": "Karin", "middle": [], "last": "Verspoor", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne Victoria", "location": { "country": "Australia" } }, "email": "karin.verspoor@unimelb.edu.au" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne Victoria", "location": { "country": "Australia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we apply pre-trained language models to the Semantic Textual Similarity (STS) task, with a specific focus on the clinical domain. In low-resource setting of clinical STS, these large models tend to be impractical and prone to overfitting. Building on BERT, we study the impact of a number of model design choices, namely different fine-tuning and pooling strategies. We observe that the impact of domain-specific fine-tuning on clinical STS is much less than that in the general domain, likely due to the concept richness of the domain. Based on this, we propose two data augmentation techniques. Experimental results on N2C2-STS 1 demonstrate substantial improvements, validating the utility of the proposed methods.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we apply pre-trained language models to the Semantic Textual Similarity (STS) task, with a specific focus on the clinical domain. In low-resource setting of clinical STS, these large models tend to be impractical and prone to overfitting. Building on BERT, we study the impact of a number of model design choices, namely different fine-tuning and pooling strategies. We observe that the impact of domain-specific fine-tuning on clinical STS is much less than that in the general domain, likely due to the concept richness of the domain. Based on this, we propose two data augmentation techniques. Experimental results on N2C2-STS 1 demonstrate substantial improvements, validating the utility of the proposed methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic Textual Similarity (STS) is a language understanding task, involving assessing the degree of semantic equivalence between two pieces of text based on a graded numerical score (Corley and Mihalcea, 2005) . It has application in tasks such as information retrieval (Hliaoutakis et al., 2006) , question answering (Hoogeveen et al., 2018) , and summarization (AL-Khassawneh et al., 2016) . 
In this paper, we focus on STS in the clinical domain, in the context of a recent task within the framework of N2C2 (the National NLP Clinical Challenges; https://portal.dbmi.hms.harvard.edu/projects/n2c2-2019-t1/), which makes use of the extended MedSTS data set (Wang et al., 2018) , referred to as N2C2-STS, with a limited set of annotated sentence pairs (1.6K) that are rich in domain terms.", "cite_spans": [ { "start": 184, "end": 211, "text": "(Corley and Mihalcea, 2005)", "ref_id": "BIBREF4" }, { "start": 272, "end": 298, "text": "(Hliaoutakis et al., 2006)", "ref_id": "BIBREF9" }, { "start": 320, "end": 344, "text": "(Hoogeveen et al., 2018)", "ref_id": "BIBREF10" }, { "start": 365, "end": 393, "text": "(AL-Khassawneh et al., 2016)", "ref_id": "BIBREF0" }, { "start": 603, "end": 622, "text": "(Wang et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Neural STS models typically consist of encoders to generate text representations, and a regression layer to measure the similarity score (He et al., 2015; Mueller and Thyagarajan, 2016; He and Lin, 2016; Reimers and Gurevych, 2019) . These architectures require a large amount of training data, an unrealistic requirement in low-resource settings.", "cite_spans": [ { "start": 137, "end": 154, "text": "(He et al., 2015;", "ref_id": "BIBREF7" }, { "start": 155, "end": 185, "text": "Mueller and Thyagarajan, 2016;", "ref_id": "BIBREF14" }, { "start": 186, "end": 199, "text": "He and Lin, 2016;", "ref_id": "BIBREF8" }, { "start": 266, "end": 293, "text": "Reimers and Gurevych, 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, pre-trained language models (LMs) such as GPT-2 (Radford et al., 2018) and BERT (Devlin et al., 2019) have been shown to benefit from pre-training over large corpora followed by fine-tuning on specific tasks. However, for small-scale datasets, only limited fine-tuning can be done. For example, GPT-2 achieved strong results across four large natural language inference (NLI) datasets, but was less successful over the small-scale RTE corpus (Bentivogli et al., 2009) , performing below a multi-task biLSTM model. Similarly, while the large-scale pre-training of BERT has led to impressive improvements on a range of tasks, only very modest improvements have been achieved on STS tasks such as STS-B (Cer et al., 2017) and MRPC (Dolan and Brockett, 2005 ) (with 5.7k and 3.6k training instances, resp.). Compared to general-domain STS benchmarks, labeled clinical STS data is even scarcer, which tends to cause overfitting during fine-tuning. Moreover, further model scaling is a challenge due to GPU/TPU memory limitations and longer training time (Lan et al., 2019) . 
This motivates us to search for model configurations which strike a balance between model flexibility and overfitting.", "cite_spans": [ { "start": 58, "end": 80, "text": "(Radford et al., 2018)", "ref_id": "BIBREF15" }, { "start": 90, "end": 111, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 453, "end": 478, "text": "(Bentivogli et al., 2009)", "ref_id": "BIBREF2" }, { "start": 711, "end": 729, "text": "(Cer et al., 2017)", "ref_id": "BIBREF3" }, { "start": 734, "end": 764, "text": "MRPC (Dolan and Brockett, 2005", "ref_id": null }, { "start": 1059, "end": 1077, "text": "(Lan et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we study the impact of a number of model design choices. First, following Reimers and Gurevych (2019) , we study the impact of various pooling methods on STS, and find that convolution filters coupled with max and mean pooling outperform a number of alternative approaches. This can largely be attributed to their improved model expressiveness and ability to capture local interactions (Yu et al., 2019) . Next, we consider different parameter fine-tuning strategies, with varying degrees of flexibility, ranging from keeping all parameters frozen during training to allowing all parameters to be updated. This allows us to identify the optimal model flexibility without over-tuning, thereby further improving model performance.", "cite_spans": [ { "start": 89, "end": 116, "text": "Reimers and Gurevych (2019)", "ref_id": "BIBREF16" }, { "start": 401, "end": 418, "text": "(Yu et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, inspired by recent studies, including sentence order prediction (Lan et al., 2019) and data-augmented question answering (Yu et al., 2019) , we focus on data augmentation methods to expand the modest amount of training data. We first consider segment reordering (SR), which permutes segments delimited by commas or semicolons. Our second method increases linguistic diversity with back translation (BT). Extensive experiments on N2C2-STS reveal the effectiveness of data augmentation on clinical STS, particularly when combined with the best parameter fine-tuning and pooling strategies identified in Section 3, achieving substantial absolute gains in performance.", "cite_spans": [ { "start": 76, "end": 94, "text": "(Lan et al., 2019)", "ref_id": "BIBREF12" }, { "start": 133, "end": 150, "text": "(Yu et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In pre-training, a spectrum of design choices has been proposed to optimize models, such as the pre-training objective, training corpus, and hyperparameter selection. Specific examples of objective functions include masked language modeling in BERT, permutation language modeling in XLNet (Yang et al., 2019) , and sentence order prediction (SOP) in ALBERT (Lan et al., 2019) . Additionally, RoBERTa (Liu et al., 2019) explored benefits from a larger mini-batch size, a dynamic masking strategy, and increasing the size of the training corpus (16G to 160G). 
However, all these efforts are targeted at improving downstream tasks indirectly by optimizing the capability and generalizability of LMs, while adapting a single fully-connected layer to capture task features.", "cite_spans": [ { "start": 289, "end": 308, "text": "(Yang et al., 2019)", "ref_id": "BIBREF23" }, { "start": 357, "end": 375, "text": "(Lan et al., 2019)", "ref_id": "BIBREF12" }, { "start": 400, "end": 418, "text": "(Liu et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Model Configurations", "sec_num": "2.1" }, { "text": "Sentence-BERT (Reimers and Gurevych, 2019) makes use of task-specific structures to optimize STS, concentrating on computational and time efficiency, and is evaluated on relatively larger datasets in the general domain. To evaluate the impact of the number of layers transferred from the pre-trained language model to the supervised target task, GPT-2 has been analyzed on two datasets. However, they are both large: MultiNLI (Williams et al., 2018) with >390k instances, and RACE (Lai et al., 2017) with >97k instances. These tasks also both involve reasoning-related classification, as opposed to the nuanced regression task of STS.", "cite_spans": [ { "start": 425, "end": 448, "text": "(Williams et al., 2018)", "ref_id": "BIBREF21" }, { "start": 480, "end": 498, "text": "(Lai et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Model Configurations", "sec_num": "2.1" }, { "text": "Synonym replacement is one of the most commonly used data augmentation methods to simulate linguistic diversity, but it introduces ambiguity if accurate context-dependent disambiguation is not performed. Moreover, random selection and replacement of a single word, as used in general texts, is not plausible for term-rich clinical text, resulting in too much semantic divergence (e.g., patient to affected role, and discharge to home to spark to home). By contrast, replacing a complete mention of a concept can increase error propagation due to the prerequisite concept extraction and normalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "2.2" }, { "text": "Random insertion, deletion, and swapping of words have been demonstrated to be effective on five text classification tasks (Wei and Zou, 2019) . But those experiments targeted topic prediction, in contrast to semantic reasoning tasks such as STS and MultiNLI. Intuitively, such operations do not change the overall topic of a text, but they can skew the meaning of a sentence, undermining the STS task. Swapping an entire semantic segment may mitigate the risk of introducing label noise to the STS task.", "cite_spans": [ { "start": 123, "end": 142, "text": "(Wei and Zou, 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "2.2" }, { "text": "Compared to the semantic and syntactic distortion potentially caused by the aforementioned methods, back translation (BT) (Sennrich et al., 2016) , translating to a target language and then back to the original language, presents fluent augmented data and reliable improvements for tasks demanding adequate semantic understanding, such as low-resource machine translation (Xia et al., 2019) and question answering (Yu et al., 2019) . This motivates our application of BT to low-resource clinical STS, to bridge linguistic variation between two sentences. 
This work represents the first exploration of applying BT for STS.", "cite_spans": [ { "start": 114, "end": 136, "text": "(Sennrich et al., 2016", "ref_id": "BIBREF17" }, { "start": 362, "end": 380, "text": "(Xia et al., 2019)", "ref_id": "BIBREF22" }, { "start": 404, "end": 421, "text": "(Yu et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "2.2" }, { "text": "In this section, we study the impact of a number of model design choices on BERT for STS, using a 12-layer base model initialized with pre-trained weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STS Model Configurations", "sec_num": "3" }, { "text": "The resource-poor and concept-rich nature of clinical STS makes it difficult to train a large model end-to-end on sentence pairs. To address this, most recent studies have made use of pre-trained language models, such as BERT. The most straightforward way to use BERT is the feature-based approach, where the output of the last transformer block is taken as input to the task-specific classifier. Many have proposed the use of a dummy CLS token to generate the feature vector, where CLS is a special symbol added in front of every sequence during pre-training, with its final hidden state always used as the aggregate sequence representation for classification tasks; we refer to this as CLS pooling. Other types of pooling, such as mean and max pooling, are investigated by Reimers and Gurevych (2019) .", "cite_spans": [ { "start": 767, "end": 794, "text": "Reimers and Gurevych (2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "However, this results in inferior performance, as shown in the first row of Table 1 . 2 As a consequence, the best strategy for extracting feature vectors to represent a sentence remains an open question.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "In this work, we first experiment with the feature-based approach, coupled with convolutional filters. This is inspired by the use of convolutional filters in QANet (Yu et al., 2019) to capture local interactions. The difference lies in where the convolutional filters are applied. In QANet, multiple conv filters are incorporated into each transformer encoder block to process the input from the previous layer. In contrast, HConv-BERT is largely based on BERT, with the addition of a single task-specific classifier placed on top of BERT, consisting of conv filters organised in a hierarchical fashion. 
This results in a much simplified model, making HConv-BERT less prone to overfitting.", "cite_spans": [ { "start": 164, "end": 181, "text": "(Yu et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "Specifically, we run a collection of convolutional filters with a kernel of size k \u2208 [2, 4], each with J = 768 output channels (indexed by j \u2208 [1, J]), over the temporal axis (indexed by i \u2208 [1, T]):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c^{i,k}_j = w^k_j * x_{i:i+k-1} + b^k_j   (1),   c^{i,k} = [c^{i,k}_1; ...; c^{i,k}_J]", "eq_num": "(2)" } ], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "where x_{i:i+k-1} denotes the output BERT features for the token span i to i+k-1, * is the convolution operation, w^k_j and b^k_j are the convolution filter and bias term for the j-th kernel of size k, and [a; b] denotes the concatenation of a and b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "To capture interactions between distant elements, we feed the output c^{i,k} into another convolution layer of kernel size 2 with M = 128 output channels (indexed by m \u2208 [1, M]):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c^k_{i,m} = w_m * c^{i:i+1,k} + b_m   (3),   c^k_i = [c^k_{i,1}; ...; c^k_{i,M}]", "eq_num": "(4)" } ], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "where c^{i:i+1,k} is the output of the first convolutional layer over the span i to i+1, as defined in Equation (2), and w_m and b_m are the filter and bias term for the second convolutional layer, with a kernel size of 2 and an output dimension of M = 128. Lastly, we extract feature vectors by max and mean pooling over the temporal axis, followed by concatenation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "v^k_max = max_i c^k_i,   v^k_mean = avg_i c^k_i   (5);   v = [v^2_max; v^3_max; v^4_max; v^2_mean; v^3_mean; v^4_mean]   (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "The upper half of Table 1 shows that the proposed hierarchical convolutional (HConv) architecture provides substantial performance gains. (Footnote 2: Due to space constraints, we limit our comparison to the CLS pooling strategy, based on the observation of little improvement when using other types of pooling (mean, max) and concatenation, or sequence-processing recurrent units.)", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Hierarchical Convolution (HConv)", "sec_num": "3.1" }, { "text": "We also evaluate the utility of this mechanism in the fine-tuning setting with varying modelling flexibility. Concretely, we progressively increase the number of trainable parameters by transformer blocks; a code sketch combining this block-wise unfreezing with the HConv head is given below. 
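To make the head and the partial fine-tuning concrete, the following PyTorch sketch (ours, not the authors' released code) implements the HConv head of Equations (1)-(6) on top of a pre-trained BERT base model, with only the last l transformer blocks left trainable. The use of the HuggingFace transformers library, the ReLU activations, the final linear regression layer, and all names are illustrative assumptions rather than details taken from the paper.

```python
# Sketch only: hyperparameters follow Section 3.1 (J = 768, M = 128, k in {2, 3, 4});
# the regression layer, activations, and all names are our own assumptions.
import torch
import torch.nn as nn
from transformers import BertModel


class HConvBERT(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_trainable_blocks=4):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size  # 768 for the 12-layer base model

        # Model flexibility (Section 3.2): freeze everything, then unfreeze the last
        # l transformer blocks so errors are only back-propagated through them.
        for p in self.bert.parameters():
            p.requires_grad = False
        for block in self.bert.encoder.layer[-num_trainable_blocks:]:
            for p in block.parameters():
                p.requires_grad = True

        # First convolution layer: kernel sizes k = 2, 3, 4 with J = 768 channels (Eq. 1-2).
        self.conv1 = nn.ModuleList([nn.Conv1d(hidden, 768, kernel_size=k) for k in (2, 3, 4)])
        # Second convolution layer: kernel size 2 with M = 128 channels (Eq. 3-4).
        self.conv2 = nn.ModuleList([nn.Conv1d(768, 128, kernel_size=2) for _ in (2, 3, 4)])
        # Regression layer over the concatenated max/mean-pooled features
        # (6 x 128 dimensions, Eq. 5-6) -- an assumption on our part.
        self.regressor = nn.Linear(6 * 128, 1)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        x = out.last_hidden_state.transpose(1, 2)  # (batch, hidden, T)

        pooled = []
        for conv1, conv2 in zip(self.conv1, self.conv2):
            c = torch.relu(conv1(x))            # c^{i,k}, Eq. (1)-(2)
            c = torch.relu(conv2(c))            # c^k_i,  Eq. (3)-(4)
            pooled.append(c.max(dim=2).values)  # v^k_max,  Eq. (5)
            pooled.append(c.mean(dim=2))        # v^k_mean, Eq. (5)
        v = torch.cat(pooled, dim=1)            # concatenation, Eq. (6)
        return self.regressor(v).squeeze(-1)    # predicted similarity score
```

Training then amounts to fine-tuning the unfrozen blocks and the head with a standard regression objective (e.g., mean squared error against the gold similarity scores); the frozen parameters can simply be excluded from the optimizer.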
More precisely, for the base BERT model with 12 layers, we allow errors to be back-propagated through the last l layers while keeping the remaining (12 \u2212 l) layers fixed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Flexibility", "sec_num": "3.2" }, { "text": "The results on STS-B and N2C2-STS are shown in Figure 1 . We observe a performance crossover of HConv and CLS-pooling on both datasets as the number of trainable transformer layers increases. While HConv reaches peak performance before the crossover, CLS-pooling often requires more blocks to be trainable to achieve comparable accuracy, rendering the model much slower. Notably, the proposed mechanism peaks with far fewer trainable blocks on N2C2-STS than on STS-B. We speculate that this is due to the size difference between the two datasets. To verify this hypothesis, we further look into the relationship between the number of trainable transformer blocks and training data size. In Figure 2 , we observe performance degradation as the size of the training data shrinks, with the models trained on the full set achieving far superior Pearson correlation to those trained on the smaller subsets. Zooming in on the curve for each subset, we find that peak performance is attained at different points depending on data size: with the smallest dataset (500 instances), the number of parameter updates is also limited, and only updating the top few layers of transformer blocks is simply not enough to make the model fully adapt to the task. It is therefore beneficial to allow the model access to more trainable layers (e.g., 11) to improve performance.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 55, "text": "Figure 1", "ref_id": null }, { "start": 686, "end": 694, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Model Flexibility", "sec_num": "3.2" }, { "text": "Based on this, we set the number of trainable blocks to 6 for SICK-R (consisting of 4,500 training instances), as presented in the bottom half of Table 1, with HConv outperforming CLS-pooling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Flexibility", "sec_num": "3.2" }, { "text": "The accuracy of an STS model unsurprisingly depends on the amount of labeled data. This is reflected in Figure 2 , where models trained with more data outperform those with fewer training instances. In this section, we propose two data augmentation methods, namely segment reordering (SR) and back translation (BT), to address the data sparsity issue in clinical STS.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Data Augmentation", "sec_num": "4" }, { "text": "Segment reordering. Clinical texts often consist of text segments describing multiple events and patient symptoms. Each segment is often an independent semantic unit, separated by commas or semicolons. Inspired by the random word swapping of Wei and Zou (2019), we exploit this property and propose a heuristic, named segment reordering (SR), to generate permutations of the original sequence based on these segments. While we expect this to introduce some noise to the training data, our hypothesis is that the increase in training data size will outweigh this. For instance, consider the text \"new confusion or inability to stay alert and awake; feeling like you are going to pass out\". 
Flipping the order of the two segments \"new confusion or inability to stay alert and awake\" and \"feeling like you are going to pass out\" will not hinder the overall understanding of the text. More formally, for a given pair of sentences S_1 and S_2, each consisting of a sequence of segments S_1 = {s_{11}, ..., s_{1m}} and S_2 = {s_{21}, ..., s_{2n}}, we generate a new pair by randomly permuting the segment order, effectively doubling the size of the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "4" }, { "text": "Back translation. Inspired by the work of Yu et al. (2019) , we make use of machine translation tools to perform back translation (BT). Here, we choose Chinese as the pivot language, as it is linguistically distant from English and supported by mature commercial translation solutions. That is, we first translate from English to Chinese and then back to English. We use Google Translate to translate each sentence in a sentence pair from English to Chinese, and Baidu Translation 3 to translate back to English. For example, for the original sentence \"negative for cough and stridor\", the back-translated result is \"bad for coughing and wheezing\". We apply this to each sentence pair, doubling the amount of training data.", "cite_spans": [ { "start": 42, "end": 58, "text": "Yu et al. (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "4" }, { "text": "We evaluate the effectiveness of SR and BT on N2C2-STS with four baseline models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "BERT base (Devlin et al., 2019) and BERT clinical (Alsentzer et al., 2019) , both using CLS-pooling and consisting of 12 layers; ConvBERT base , based on BERT base with hierarchical convolution and fine-tuning over the last 4 layers (consistent with our findings of the best model configuration in Section 3); and ConvBERT STS-B , where we take ConvBERT base and fine-tune first over STS-B, before N2C2-STS.", "cite_spans": [ { "start": 10, "end": 31, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 50, "end": 74, "text": "(Alsentzer et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We split the training partition of N2C2-STS into 1,233 (train) and 409 (dev) instances, and report results on the test set (412 instances).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "Experimental results are presented in Table 2 . We see clear benefits from the two proposed data augmentation methods, which consistently boost performance across all categories, with BT providing larger gains than SR. This is likely caused by the rather na\u00efve implementation of SR (sketched below), resulting in unnatural segment sequences. A possible fix to this is to further filter out such irregular statements with a language model pre-trained on clinical corpora. 
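For concreteness, the two augmentation operations of Section 4 can be sketched as follows (our illustration, not the authors' code). The translate argument is a hypothetical stand-in for whichever MT service is used; the paper relies on Google Translate for English-to-Chinese and Baidu Translation for Chinese-to-English, so a concrete API client would have to be substituted here, and carrying the gold score over unchanged to the augmented pair is our reading of the doubling described above.

```python
# Illustrative sketch of segment reordering (SR) and back translation (BT).
# `translate(text, src, tgt)` is a hypothetical placeholder for an MT service call.
import random
import re


def segment_reorder(sentence, seed=None):
    """SR: randomly permute the comma/semicolon-delimited segments of a sentence."""
    rng = random.Random(seed)
    parts = re.split(r"([,;])", sentence)      # keep the delimiters
    segments = [s.strip() for s in parts[::2] if s.strip()]
    delimiters = parts[1::2]
    if len(segments) < 2:                      # nothing to reorder
        return sentence
    rng.shuffle(segments)
    out = segments[0]
    for delim, seg in zip(delimiters, segments[1:]):
        out += delim + " " + seg
    return out


def back_translate(sentence, translate, pivot="zh"):
    """BT: English -> pivot language (Chinese in the paper) -> English."""
    return translate(translate(sentence, src="en", tgt=pivot), src=pivot, tgt="en")


def augment(pairs, translate):
    """Double the training data once with SR and once with BT, keeping gold scores.

    `pairs` is a list of (sentence_1, sentence_2, score) training instances;
    the +SR and +BT settings in Table 2 each correspond to one of the returned lists."""
    sr_pairs = [(segment_reorder(s1), segment_reorder(s2), y) for s1, s2, y in pairs]
    bt_pairs = [(back_translate(s1, translate), back_translate(s2, translate), y)
                for s1, s2, y in pairs]
    return pairs + sr_pairs, pairs + bt_pairs
```

An additional filtering pass with a clinical language model, as suggested above, could drop reordered sentences that come out unnatural.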
We leave such filtering for future work.", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 45, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "It is impressive that the best-performing configuration ConvBERT STS-B + BT is capable of achieving comparable results to the state-of-the-art IBM-N2C2, an approach heavily reliant on external, domain-specific resources, and an ensemble of multiple pre-trained language models. Table 2 : Pearson r and Spearman \u03c1 on N2C2-STS for models with and without segment reordering (\"SR\") and back translation (\"BT\").", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 287, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "We additionally conduct a cross-domain experiment on BIOSSES (Soganc\u0131oglu et al., 2017) , a biomedical literature STS dataset comprising 100 sentence pairs derived from the Text Analysis Conference Biomedical Summarization task, with scores ranging from 0 (complete unrelatedness) to 4 (exact equivalence). Specifically, the CLS-pooling BERT base baseline and the proposed ConvBERT STS-B + BT are both fine-tuned on N2C2-STS, and then applied with no further training to BIOSSES. Despite the increase in task difficulty, the proposed method demonstrates strong generalisability, outperforming the baseline by absolute gains of 2.4 and 3.9, reaching 85.42/82.83 (r/\u03c1).", "cite_spans": [ { "start": 61, "end": 87, "text": "(Soganc\u0131oglu et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In this paper, we have presented an empirical study of the impact of a number of model design choices on a BERT-based approach to clinical STS. We have demonstrated that the proposed hierarchical convolution mechanism outperforms a number of conventional pooling methods. We have also investigated parameter fine-tuning strategies with varying degrees of flexibility, and identified the optimal number of trainable transformer blocks, thereby preventing over-tuning. Lastly, we have verified the utility of two data augmentation methods on clinical STS. It may be interesting to see the impact of leveraging pivot languages other than Chinese in BT, which we leave for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sentence similarity techniques for automatic text summarization", "authors": [ { "first": "Yazan", "middle": [], "last": "Alaya", "suffix": "" }, { "first": "Al-Khassawneh", "middle": [], "last": "", "suffix": "" }, { "first": "Naomie", "middle": [], "last": "Salim", "suffix": "" }, { "first": "Adekunle Isiaka", "middle": [], "last": "Obasae", "suffix": "" } ], "year": 2016, "venue": "", "volume": "3", "issue": "", "pages": "35--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yazan Alaya AL-Khassawneh, Naomie Salim, and Adekunle Isiaka Obasae. 2016. Sentence similarity techniques for automatic text summarization. 
Jour- nal of Soft Computing and Decision Support Sys- tems, 3(3):35-41.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Publicly available clinical BERT embeddings", "authors": [ { "first": "Emily", "middle": [], "last": "Alsentzer", "suffix": "" }, { "first": "John", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "William", "middle": [], "last": "Boag", "suffix": "" }, { "first": "Wei-Hung", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Di", "middle": [], "last": "Jindi", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Naumann", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Mcdermott", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "72--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The fifth pascal recognizing textual entailment challenge", "authors": [ { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing tex- tual entailment challenge. In TAC.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "volume": "", "issue": "", "pages": "1--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Measuring the semantic similarity of texts", "authors": [ { "first": "Courtney", "middle": [], "last": "Corley", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL workshop on empirical modeling of semantic equivalence and entailment", "volume": "", "issue": "", "pages": "13--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Courtney Corley and Rada Mihalcea. 2005. Measuring the semantic similarity of texts. In Proceedings of the ACL workshop on empirical modeling of seman- tic equivalence and entailment, pages 13-18. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatically constructing a corpus of sentential paraphrases", "authors": [ { "first": "B", "middle": [], "last": "William", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multiperspective sentence similarity modeling with convolutional neural networks", "authors": [ { "first": "Hua", "middle": [], "last": "He", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1576--1586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hua He, Kevin Gimpel, and Jimmy Lin. 2015. Multi- perspective sentence similarity modeling with con- volutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1576-1586, Lisbon, Portugal.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Pairwise word interaction modeling with deep neural networks for semantic similarity measurement", "authors": [ { "first": "Hua", "middle": [], "last": "He", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "937--948", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hua He and Jimmy Lin. 2016. Pairwise word interac- tion modeling with deep neural networks for seman- tic similarity measurement. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 937-948, San Diego, California.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Epimenidis Voutsakis, Euripides GM Petrakis, and Evangelos Milios", "authors": [ { "first": "Angelos", "middle": [], "last": "Hliaoutakis", "suffix": "" }, { "first": "Giannis", "middle": [], "last": "Varelas", "suffix": "" } ], "year": 2006, "venue": "International Journal on Semantic Web and Information Systems (IJSWIS)", "volume": "2", "issue": "3", "pages": "55--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelos Hliaoutakis, Giannis Varelas, Epimenidis Voutsakis, Euripides GM Petrakis, and Evangelos Milios. 2006. Information retrieval by semantic sim- ilarity. International Journal on Semantic Web and Information Systems (IJSWIS), 2(3):55-73.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Detecting misflagged duplicate questions in community question-answering archives", "authors": [ { "first": "Doris", "middle": [], "last": "Hoogeveen", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "Yitong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Karin", "middle": [ "M" ], "last": "Verspoor", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2018, "venue": "Twelfth International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doris Hoogeveen, Andrew Bennett, Yitong Li, Karin M Verspoor, and Timothy Baldwin. 2018. De- tecting misflagged duplicate questions in commu- nity question-answering archives. In Twelfth Inter- national AAAI Conference on Web and Social Me- dia.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "RACE: Large-scale ReAding comprehension dataset from examinations", "authors": [ { "first": "Guokun", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Qizhe", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "785--794", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAd- ing comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Albert: A lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11942" ] }, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Siamese recurrent architectures for learning sentence similarity", "authors": [ { "first": "Jonas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Thyagarajan", "suffix": "" } ], "year": 2016, "venue": "Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similar- ity. In Thirtieth AAAI Conference on Artificial Intel- ligence.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Sali- mans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. 
URL https://s3-us-west-2.amazonaws.com/openai- assets/researchcovers/languageunsupervised/language understanding paper.pdf.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3980--3990", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3980-3990, Hong Kong, China.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": { "DOI": [ "10.18653/v1/P16-1009" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "BIOSSES: a semantic sentence similarity estimation system for the biomedical domain", "authors": [ { "first": "Gizem", "middle": [], "last": "Soganc\u0131oglu", "suffix": "" }, { "first": "Arzucan", "middle": [], "last": "Hakime\u00f6zt\u00fcrk", "suffix": "" }, { "first": "", "middle": [], "last": "Ozg\u00fcr", "suffix": "" } ], "year": 2017, "venue": "Bioinformatics", "volume": "33", "issue": "14", "pages": "49--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gizem Soganc\u0131oglu, Hakime\u00d6zt\u00fcrk, and Arzucan Ozg\u00fcr. 2017. BIOSSES: a semantic sentence simi- larity estimation system for the biomedical domain. Bioinformatics, 33(14):i49-i58.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "MedSTS: a resource for clinical semantic textual similarity. Language Resources and Evaluation", "authors": [ { "first": "Yanshan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Naveed", "middle": [], "last": "Afzal", "suffix": "" }, { "first": "Sunyang", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Feichen", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Majid", "middle": [], "last": "Rastegar-Mojarad", "suffix": "" }, { "first": "Hongfang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "1--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanshan Wang, Naveed Afzal, Sunyang Fu, Liwei Wang, Feichen Shen, Majid Rastegar-Mojarad, and Hongfang Liu. 2018. 
MedSTS: a resource for clini- cal semantic textual similarity. Language Resources and Evaluation, pages 1-16.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Zou", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6381--6387", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Wei and Kai Zou. 2019. EDA: Easy data aug- mentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6381-6387, Hong Kong, China.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Generalized data augmentation for low-resource translation", "authors": [ { "first": "Mengzhou", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5786--5796", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengzhou Xia, Xiang Kong, Antonios Anastasopou- los, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5786- 5796, Florence, Italy. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5754--5764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5754-5764.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "QANet: Combining local convolution with global self-attention for reading comprehension", "authors": [ { "first": "Adams", "middle": [ "Wei" ], "last": "Yu", "suffix": "" }, { "first": "David", "middle": [], "last": "Dohan", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "The Sixth International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2019. QANet: Combining local convolution with global self-attention for reading comprehen- sion. In The Sixth International Conference on Learning Representations (ICLR).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Evaluation of CLS-BERT and HConv-BERT over datasets from the general (STS-B) and clinical (N2C2) domains. r refers to Pearson correlation. N2C2-STS is split into 1233 and 409 instances for training and dev. Impact of number of trainable transformer blocks based on HConv-BERT over different data size, randomly sampled from STS-B, ranging from 500 to full set (5, 749).", "num": null, "uris": null }, "TABREF1": { "content": "", "text": "with HConv outperforming CLS-pooling.", "num": null, "html": null, "type_str": "table" }, "TABREF2": { "content": "
Model                   r      \u03c1
IBM-N2C2                90.1   -
BERT base               86.7   81.9
  + SR                  87.1   80.8
  + BT                  87.2   81.7
BERT clinical           86.1   81.4
  + SR                  87.4   82.7
  + BT                  88.6   82.4
Conv1dBERT base         87.7   80.7
  + SR                  88.0   81.4
  + BT                  88.1   82.2
Conv1dBERT STS-B        87.9   82.5
  + SR                  88.6   83.1
  + BT                  89.4   83.0
", "text": "https://fanyi.baidu.com/", "num": null, "html": null, "type_str": "table" } } } }