Datasets:

Column schema (field: type, observed value range):
- input: string, length 21 to 162
- context: string, length 9.02k to 101k
- answers: list
- length: int32, 1.44k to 14.7k
- dataset: string, 1 distinct value
- language: string, 1 distinct value
- all_classes: list
- _id: string, length 48
Which Facebook pages did they look at?
Introduction This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ In the spirit of the brevity of social media's messages and reactions, people have got used to express feelings minimally and symbolically, as with hashtags on Twitter and Instagram. On Facebook, people tend to be more wordy, but posts normally receive more simple “likes” than longer comments. Since February 2016, Facebook users can express specific emotions in response to a post thanks to the newly introduced reaction feature (see Section SECREF2 ), so that now a post can be wordlessly marked with an expression of say “joy" or “surprise" rather than a generic “like”. It has been observed that this new feature helps Facebook to know much more about their users and exploit this information for targeted advertising BIBREF0 , but interest in people's opinions and how they feel isn't limited to commercial reasons, as it invests social monitoring, too, including health care and education BIBREF1 . However, emotions and opinions are not always expressed this explicitly, so that there is high interest in developing systems towards their automatic detection. Creating manually annotated datasets large enough to train supervised models is not only costly, but also—especially in the case of opinions and emotions—difficult, due to the intrinsic subjectivity of the task BIBREF2 , BIBREF3 . Therefore, research has focused on unsupervised methods enriched with information derived from lexica, which are manually created BIBREF3 , BIBREF4 . Since go2009twitter have shown that happy and sad emoticons can be successfully used as signals for sentiment labels, distant supervision, i.e. using some reasonably safe signals as proxies for automatically labelling training data BIBREF5 , has been used also for emotion recognition, for example exploiting both emoticons and Twitter hashtags BIBREF6 , but mainly towards creating emotion lexica. mohammad2015using use hashtags, experimenting also with highly fine-grained emotion sets (up to almost 600 emotion labels), to create the large Hashtag Emotion Lexicon. Emoticons are used as proxies also by hallsmarmulti, who use distributed vector representations to find which words are interchangeable with emoticons but also which emoticons are used in a similar context. We take advantage of distant supervision by using Facebook reactions as proxies for emotion labels, which to the best of our knowledge hasn't been done yet, and we train a set of Support Vector Machine models for emotion recognition. Our models, differently from existing ones, exploit information which is acquired entirely automatically, and achieve competitive or even state-of-the-art results for some of the emotion labels on existing, standard evaluation datasets. For explanatory purposes, related work is discussed further and more in detail when we describe the benchmarks for evaluation (Section SECREF3 ) and when we compare our models to existing ones (Section SECREF5 ). We also explore and discuss how choosing different sets of Facebook pages as training data provides an intrinsic domain-adaptation method. Facebook reactions as labels For years, on Facebook people could leave comments to posts, and also “like” them, by using a thumbs-up feature to explicitly express a generic, rather underspecified, approval. 
A “like” could thus mean “I like what you said", but also “I like that you bring up such topic (though I find the content of the article you linked annoying)". In February 2016, after a short trial, Facebook made a more explicit reaction feature available world-wide. Rather than allowing for the underspecified “like” as the only wordless response to a post, a set of six more specific reactions was introduced, as shown in Figure FIGREF1 : Like, Love, Haha, Wow, Sad and Angry. We use such reactions as proxies for emotion labels associated to posts. We collected Facebook posts and their corresponding reactions from public pages using the Facebook API, which we accessed via the Facebook-sdk python library. We chose different pages (and therefore domains and stances), aiming at a balanced and varied dataset, but we did so mainly based on intuition (see Section SECREF4 ) and with an eye to the nature of the datasets available for evaluation (see Section SECREF5 ). The choice of which pages to select posts from is far from trivial, and we believe this is actually an interesting aspect of our approach, as by using different Facebook pages one can intrinsically tackle the domain-adaptation problem (See Section SECREF6 for further discussion on this). The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney. Note that thankful was only available during specific time spans related to certain events, as Mother's Day in May 2016. For each page, we downloaded the latest 1000 posts, or the maximum available if there are fewer, from February 2016, retrieving the counts of reactions for each post. The output is a JSON file containing a list of dictionaries with a timestamp, the post and a reaction vector with frequency values, which indicate how many users used that reaction in response to the post (Figure FIGREF3 ). The resulting emotion vectors must then be turned into an emotion label. In the context of this experiment, we made the simple decision of associating to each post the emotion with the highest count, ignoring like as it is the default and most generic reaction people tend to use. Therefore, for example, to the first post in Figure FIGREF3 , we would associate the label sad, as it has the highest score (284) among the meaningful emotions we consider, though it also has non-zero scores for other emotions. At this stage, we didn't perform any other entropy-based selection of posts, to be investigated in future work. Emotion datasets Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset. In order to compare our performance to state-of-the-art results, we have used them as well. In this Section, in addition to a description of each dataset, we provide an overview of the emotions used, their distribution, and how we mapped them to those we obtained from Facebook posts in Section SECREF7 . A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation. 
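To make the labelling step described above concrete, the following is a minimal sketch (not the authors' released code) of how a post's reaction counts could be turned into a training label by taking the most frequent reaction while ignoring the generic like; the dictionary layout is an assumed simplification of the scraped JSON records.

```python
# Minimal sketch (assumed): label a post with its majority emotion reaction,
# ignoring "like", as described in the text.
EMOTION_REACTIONS = ["love", "haha", "wow", "sad", "angry"]  # "like" is excluded

def label_post(reaction_counts):
    """Return the majority-emotion label for a post, or None if no meaningful reactions."""
    counts = {r: reaction_counts.get(r, 0) for r in EMOTION_REACTIONS}
    if sum(counts.values()) == 0:
        return None  # no meaningful reactions; such a post would not receive a label
    return max(counts, key=counts.get)

# Example: a post with reactions {"like": 512, "sad": 284, "angry": 12, "wow": 3}
print(label_post({"like": 512, "sad": 284, "angry": 12, "wow": 3}))  # -> "sad"
```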
Affective Text dataset Task 14 at SemEval 2007 BIBREF7 was concerned with the classification of emotions and valence in news headlines. The headlines where collected from several news websites including Google news, The New York Times, BBC News and CNN. The used emotion labels were Anger, Disgust, Fear, Joy, Sadness, Surprise, in line with the six basic emotions of Ekman's standard model BIBREF8 . Valence was to be determined as positive or negative. Classification of emotion and valence were treated as separate tasks. Emotion labels were not considered as mututally exclusive, and each emotion was assigned a score from 0 to 100. Training/developing data amounted to 250 annotated headlines (Affective development), while systems were evaluated on another 1000 (Affective test). Evaluation was done using two different methods: a fine-grained evaluation using Pearson's r to measure the correlation between the system scores and the gold standard; and a coarse-grained method where each emotion score was converted to a binary label, and precision, recall, and f-score were computed to assess performance. As it is done in most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , we also treat this as a classification problem (coarse-grained). This dataset has been extensively used for the evaluation of various unsupervised methods BIBREF2 , but also for testing different supervised learning techniques and feature portability BIBREF10 . Fairy Tales dataset This is a dataset collected by alm2008affect, where about 1,000 sentences from fairy tales (by B. Potter, H.C. Andersen and Grimm) were annotated with the same six emotions of the Affective Text dataset, though with different names: Angry, Disgusted, Fearful, Happy, Sad, and Surprised. In most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , only sentences where all annotators agreed are used, and the labels angry and disgusted are merged. We adopt the same choices. ISEAR The ISEAR (International Survey on Emotion Antecedents and Reactions BIBREF11 , BIBREF12 ) is a dataset created in the context of a psychology project of the 1990s, by collecting questionnaires answered by people with different cultural backgrounds. The main aim of this project was to gather insights in cross-cultural aspects of emotional reactions. Student respondents, both psychologists and non-psychologists, were asked to report situations in which they had experienced all of seven major emotions (joy, fear, anger, sadness, disgust, shame and guilt). In each case, the questions covered the way they had appraised a given situation and how they reacted. The final dataset contains reports by approximately 3000 respondents from all over the world, for a total of 7665 sentences labelled with an emotion, making this the largest dataset out of the three we use. Overview of datasets and emotions We summarise datasets and emotion distribution from two viewpoints. First, because there are different sets of emotions labels in the datasets and Facebook data, we need to provide a mapping and derive a subset of emotions that we are going to use for the experiments. This is shown in Table TABREF8 , where in the “Mapped” column we report the final emotions we use in this paper: anger, joy, sadness, surprise. All labels in each dataset are mapped to these final emotions, which are therefore the labels we use for training and testing our models. Second, the distribution of the emotions for each dataset is different, as can be seen in Figure FIGREF9 . 
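As a concrete illustration of the label mapping discussed above, a small lookup table such as the sketch below is enough to project the benchmark label sets onto the four shared emotions (anger, joy, sadness, surprise); the handling of labels without an obvious counterpart (e.g., fear, shame, guilt) is an assumption here, not something stated in the text.

```python
# Minimal sketch (assumed, not the paper's code): map raw benchmark labels to the
# four shared emotions. Fairy Tales' angry and disgusted are merged, as described;
# labels with no listed counterpart are simply dropped here (an assumption).
LABEL_MAP = {
    # Affective Text / ISEAR
    "anger": "anger", "joy": "joy", "sadness": "sadness", "surprise": "surprise",
    # Fairy Tales
    "angry": "anger", "disgusted": "anger", "happy": "joy",
    "sad": "sadness", "surprised": "surprise",
}

def map_label(raw_label):
    """Return the shared emotion for a raw dataset label, or None if unmapped."""
    return LABEL_MAP.get(raw_label.lower())

print(map_label("Disgusted"))  # -> "anger"
```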
In Figure FIGREF9 we also provide the distribution of the emotions anger, joy, sadness, surprise per Facebook page, in terms of number of posts (recall that we assign to a post the label corresponding to the majority emotion associated to it, see Section SECREF2 ). We can observe that for example pages about news tend to have more sadness and anger posts, while pages about cooking and tv-shows have a high percentage of joy posts. We will use this information to find the best set of pages for a given target domain (see Section SECREF5 ). Model There are two main decisions to be taken in developing our model: (i) which Facebook pages to select as training data, and (ii) which features to use to train the model, which we discuss below. Specifically, we first set on a subset of pages and then experiment with features. Further exploration of the interaction between choice of pages and choice of features is left to future work, and partly discussed in Section SECREF6 . For development, we use a small portion of the Affective data set described in Section SECREF4 , that is the portion that had been released as development set for SemEval's 2007 Task 14 BIBREF7 , which contains 250 annotated sentences (Affective development, Section SECREF4 ). All results reported in this section are on this dataset. The test set of Task 14 as well as the other two datasets described in Section SECREF3 will be used to evaluate the final models (Section SECREF4 ). Selecting Facebook pages Although page selection is a crucial ingredient of this approach, which we believe calls for further and deeper, dedicated investigation, for the experiments described here we took a rather simple approach. First, we selected the pages that would provide training data based on intuition and availability, then chose different combinations according to results of a basic model run on development data, and eventually tested feature combinations, still on the development set. For the sake of simplicity and transparency, we first trained an SVM with a simple bag-of-words model and default parameters as per the Scikit-learn implementation BIBREF13 on different combinations of pages. Based on results of the attempted combinations as well as on the distribution of emotions in the development dataset (Figure FIGREF9 ), we selected a best model (B-M), namely the combined set of Time, The Guardian and Disney, which yields the highest results on development data. Time and The Guardian perform well on most emotions but Disney helps to boost the performance for the Joy class. Features In selecting appropriate features, we mainly relied on previous work and intuition. We experimented with different combinations, and all tests were still done on Affective development, using the pages for the best model (B-M) described above as training data. Results are in Table TABREF20 . Future work will further explore the simultaneous selection of features and page combinations. We use a set of basic text-based features to capture the emotion class. These include a tf-idf bag-of-words feature, word (2-3) and character (2-5) ngrams, and features related to the presence of negation words, and to the usage of punctuation. This feature is used in all unsupervised models as a source of information, and we mainly include it to assess its contribution, but eventually do not use it in our final model. 
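The basic page-selection model described above (an SVM over a simple bag of words, later combined with the additional textual features) could look roughly like the following scikit-learn sketch; it is an assumed reconstruction, not the released system, and LinearSVC is one possible choice of SVM implementation.

```python
# Minimal sketch (assumed): a scikit-learn pipeline with a tf-idf bag of words plus
# word (2-3) and character (2-5) n-grams feeding an SVM with default parameters.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

clf = Pipeline([
    ("features", FeatureUnion([
        ("bow", TfidfVectorizer()),                                     # tf-idf unigrams
        ("word_ngrams", TfidfVectorizer(ngram_range=(2, 3))),           # word 2-3 grams
        ("char_ngrams", TfidfVectorizer(analyzer="char", ngram_range=(2, 5))),
    ])),
    ("svm", LinearSVC()),  # default parameters, as in the basic model described above
])

# posts, labels = ...  # Facebook posts from the selected pages and their emotion labels
# clf.fit(posts, labels)
# predictions = clf.predict(affective_dev_sentences)
```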
We used the NRC10 Lexicon because it performed best in the experiments by BIBREF10 , which is built around the emotions anger, anticipation, disgust, fear, joy, sadness, and surprise, and the valence values positive and negative. For each word in the lexicon, a boolean value indicating presence or absence is associated to each emotion. For a whole sentence, a global score per emotion can be obtained by summing the vectors for all content words of that sentence included in the lexicon, and used as feature. As additional feature, we also included Word Embeddings, namely distributed representations of words in a vector space, which have been exceptionally successful in boosting performance in a plethora of NLP tasks. We use three different embeddings: Google embeddings: pre-trained embeddings trained on Google News and obtained with the skip-gram architecture described in BIBREF14 . This model contains 300-dimensional vectors for 3 million words and phrases. Facebook embeddings: embeddings that we trained on our scraped Facebook pages for a total of 20,000 sentences. Using the gensim library BIBREF15 , we trained the embeddings with the following parameters: window size of 5, learning rate of 0.01 and dimensionality of 100. We filtered out words with frequency lower than 2 occurrences. Retrofitted embeddings: Retrofitting BIBREF16 has been shown as a simple but efficient way of informing trained embeddings with additional information derived from some lexical resource, rather than including it directly at the training stage, as it's done for example to create sense-aware BIBREF17 or sentiment-aware BIBREF18 embeddings. In this work, we retrofit general embeddings to include information about emotions, so that emotion-similar words can get closer in space. Both the Google as well as our Facebook embeddings were retrofitted with lexical information obtained from the NRC10 Lexicon mentioned above, which provides emotion-similarity for each token. Note that differently from the previous two types of embeddings, the retrofitted ones do rely on handcrafted information in the form of a lexical resource. Results on development set We report precision, recall, and f-score on the development set. The average f-score is reported as micro-average, to better account for the skewed distribution of the classes as well as in accordance to what is usually reported for this task BIBREF19 . From Table TABREF20 we draw three main observations. First, a simple tf-idf bag-of-word mode works already very well, to the point that the other textual and lexicon-based features don't seem to contribute to the overall f-score (0.368), although there is a rather substantial variation of scores per class. Second, Google embeddings perform a lot better than Facebook embeddings, and this is likely due to the size of the corpus used for training. Retrofitting doesn't seem to help at all for the Google embeddings, but it does boost the Facebook embeddings, leading to think that with little data, more accurate task-related information is helping, but corpus size matters most. Third, in combination with embeddings, all features work better than just using tf-idf, but removing the Lexicon feature, which is the only one based on hand-crafted resources, yields even better results. Then our best model (B-M) on development data relies entirely on automatically obtained information, both in terms of training data as well as features. 
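For the Facebook embeddings, the stated gensim settings (window size 5, learning rate 0.01, 100 dimensions, minimum frequency 2) translate into a call along the lines of the sketch below; the tokenisation and the toy corpus are placeholders, and parameter names follow gensim 4.x (older versions use size instead of vector_size).

```python
# Minimal sketch (assumed, not the authors' script): training the "Facebook
# embeddings" with the hyperparameters stated above.
from gensim.models import Word2Vec

# Toy stand-in for the ~20,000 tokenised Facebook posts.
sentences = [["so", "much", "fun", "so", "much", "fun"], ["such", "fun", "so", "such"]]

model = Word2Vec(
    sentences,
    vector_size=100,  # dimensionality of 100
    window=5,         # window size of 5
    alpha=0.01,       # learning rate of 0.01
    min_count=2,      # drop words occurring fewer than 2 times
)
vector = model.wv["fun"]  # 100-dimensional vector for an in-vocabulary word
```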
Results In Table TABREF26 we report the results of our model on the three datasets standardly used for the evaluation of emotion classification, which we have described in Section SECREF3 . Our B-M model relies on subsets of Facebook pages for training, which were chosen according to their performance on the development set as well as on the observation of emotions distribution on different pages and in the different datasets, as described in Section SECREF4 . The feature set we use is our best on the development set, namely all the features plus Google-based embeddings, but excluding the lexicon. This makes our approach completely independent of any manual annotation or handcrafted resource. Our model's performance is compared to the following systems, for which results are reported in the referred literature. Please note that no other existing model was re-implemented, and results are those reported in the respective papers. Discussion, conclusions and future work We have explored the potential of using Facebook reactions in a distant supervised setting to perform emotion classification. The evaluation on standard benchmarks shows that models trained as such, especially when enhanced with continuous vector representations, can achieve competitive results without relying on any handcrafted resource. An interesting aspect of our approach is the view to domain adaptation via the selection of Facebook pages to be used as training data. We believe that this approach has a lot of potential, and we see the following directions for improvement. Feature-wise, we want to train emotion-aware embeddings, in the vein of work by tang:14, and iacobacci2015sensembed. Retrofitting FB-embeddings trained on a larger corpus might also be successful, but would rely on an external lexicon. The largest room for yielding not only better results but also interesting insights on extensions of this approach lies in the choice of training instances, both in terms of Facebook pages to get posts from, as well as in which posts to select from the given pages. For the latter, one could for example only select posts that have a certain length, ignore posts that are only quotes or captions to images, or expand posts by including content from linked html pages, which might provide larger and better contexts BIBREF23 . Additionally, and most importantly, one could use an entropy-based measure to select only posts that have a strong emotion rather than just considering the majority emotion as training label. For the former, namely the choice of Facebook pages, which we believe deserves the most investigation, one could explore several avenues, especially in relation to stance-based issues BIBREF24 . In our dataset, for example, a post about Chile beating Colombia in a football match during the Copa America had very contradictory reactions, depending on which side readers would cheer for. Similarly, the very same political event, for example, would get very different reactions from readers if it was posted on Fox News or The Late Night Show, as the target audience is likely to feel very differently about the same issue. This also brings up theoretical issues related more generally to the definition of the emotion detection task, as it's strongly dependent on personal traits of the audience. Also, in this work, pages initially selected on availability and intuition were further grouped into sets to make training data according to performance on development data, and label distribution. 
Another criterion to be exploited would be vocabulary overlap between the pages and the datasets. Lastly, we could develop a separate model for each emotion, treating the problem as a multi-label task. This would even better reflect the ambiguity and subjectivity intrinsic to assigning emotions to text, where content could be at the same time joyful or sad, depending on the reader. Acknowledgements In addition to the anonymous reviewers, we want to thank Lucia Passaro and Barbara Plank for insightful discussions, and for providing comments on draft versions of this paper.
[ "FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney", "FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney." ]
3,411
qasper_e
en
null
b26174d8ec97e8d01421548d3a640e8382db6235eb506661
What type of latent context is used to predict instructor intervention?
Introduction Massive Open Online Courses (MOOCs) have strived to bridge the social gap in higher education by bringing quality education from reputed universities to students at large. Such massive scaling through online classrooms, however, disrupts co-located, synchronous two-way communication between the students and the instructor. MOOC platforms provide discussion forums for students to talk to their classmates about the lectures, homeworks and quizzes, and provide a venue to socialise. Instructors (defined here as the course instructors, their teaching assistants and the MOOC platform's technical staff) monitor the discussion forum to post (reply to their message) in discussion threads among students. We refer to this posting as intervention, following prior work BIBREF0 . However, due to large student enrolment, the student–instructor ratio in MOOCs is very high. Therefore, instructors are not able to monitor and participate in all student discussions. To address this problem, a number of works have proposed systems, e.g., BIBREF0 , BIBREF1 , to aid instructors in selectively intervening on student discussions where they are needed the most. In this paper, we improve the state of the art for instructor intervention in MOOC forums. We propose the first neural models for this prediction problem. We show that modelling the thread structure and the sequence of posts explicitly improves performance. Instructors in different MOOCs from different subject areas intervene differently. For example, on a Science, Technology, Engineering and Mathematics (STEM) MOOC, instructors may often intervene as early as possible to resolve misunderstandings of the subject material and prevent confusion. However, in a Humanities MOOC, instructors allow the students to explore open-ended discussions and debate among themselves. Such instructors may prefer to intervene later in the discussion to encourage further discussion or resolve conflicts among students. We therefore propose attention models to infer the latent context, i.e., the series of posts that trigger an intervention. Earlier studies on MOOC forum intervention either model the entire context or require the context size to be specified explicitly. Problem Statement A thread INLINEFORM0 consists of a series of posts INLINEFORM1 through INLINEFORM2 where INLINEFORM3 is an instructor's post when INLINEFORM4 is intervened, if applicable. INLINEFORM5 is considered intervened if an instructor had posted at least once. The problem of predicting instructor intervention is cast as a binary classification problem. Intervened threads are predicted as 1 while non-intervened threads are predicted as 0, given posts INLINEFORM6 through INLINEFORM7 . The primary problem leads to a secondary problem of inferring the appropriate amount of context to intervene. We define a context INLINEFORM0 of a post INLINEFORM1 as a series of linear contiguous posts INLINEFORM2 through INLINEFORM3 where INLINEFORM4 . The problem of inferring context is to identify context INLINEFORM5 from a set of candidate contexts INLINEFORM6 . Modelling Context in Forums Context has been used and modelled in various ways for different problems in discussion forums. In work on the closely related problem of forum thread retrieval, BIBREF2 models context using inter-post discourse, e.g., Question-Answer. BIBREF3 models the structural dependencies and relationships between forum posts using a conditional random field in their problem to infer the reply structure.
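Since the extraction replaced the original formulas with INLINEFORM placeholders, the problem statement above can be restated in standard notation; the symbols below are a hedged reconstruction consistent with the prose, not the paper's exact equations.

```latex
% Reconstruction of the problem statement (symbols assumed from the prose).
A thread $T = (p_1, p_2, \dots, p_t)$ is a sequence of posts; $T$ is intervened if an
instructor has posted at least once. Given the student posts available before any
instructor reply, prediction is binary:
\[
  f(p_1, \dots, p_n) \in \{0, 1\}, \qquad f(\cdot) = 1 \text{ iff an intervention is predicted.}
\]
A candidate context ending at post $p_i$ is the prefix $c_i = (p_1, \dots, p_i)$, and
context inference selects one $c_i$ from the candidate set $\{c_1, \dots, c_n\}$.
```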
Unlike BIBREF2 , BIBREF3 can be used to model any structural dependency and is, therefore, more general. In this paper, we seek to infer general dependencies between a reply and its previous context whereas BIBREF3 inference is limited to pairs of posts. More recently BIBREF4 proposed a context based model which factorises attention over threads of different lengths. Differently, we do not model length but the context before a post. However, our attention models cater to threads of all lengths. BIBREF5 proposed graph structured LSTM to model the explicit reply structure in Reddit forums. Our work does not assume access to such a reply structure because 1) Coursera forums do not provide one and 2) forum participants often err by posting their reply to a different post than that they intended. At the other end of the spectrum are document classification models that do not assume structure in the document layout but try to infer inherent structure in the natural language, viz, words, sentences, paragraphs and documents. Hierarchical attention BIBREF6 is a well know recent work that classifies documents using a multi-level LSTMs with attention mechanism to select important units at each hierarchical level. Differently, we propose a hierarchical model that encodes layout hierarchy between a post and a thread but also infers reply structure using a attention mechanism since the layout does not reliably encode it. Instructor Intervention in MOOC forums The problem of predicting instructor intervention in MOOCs was proposed by BIBREF0 . Later BIBREF7 evaluated baseline models by BIBREF0 over a larger corpus and found the results to vary widely across MOOCs. Since then subsequent works have used similar diverse evaluations on the same prediction problem BIBREF1 , BIBREF8 . BIBREF1 proposed models with discourse features to enable better prediction over unseen MOOCs. BIBREF8 recently showed interventions on Coursera forums to be biased by the position at which a thread appears to an instructor viewing the forum interface and proposed methods for debiased prediction. While all works since BIBREF0 address key limitations in this line of research, they have not investigated the role of structure and sequence in the threaded discussion in predicting instructor interventions. BIBREF0 proposed probabilistic graphical models to model structure and sequence. They inferred vocabulary dependent latent post categories to model the thread sequence and infer states that triggered intervention. Their model, however, requires a hyperparameter for the number of latent states. It is likely that their empirically reported setting will not generalise due to their weak evaluation BIBREF7 . In this paper, we propose models to infer the context that triggers instructor intervention that does not require context lengths to be set apriori. All our proposed models generalise over modelling assumptions made by BIBREF0 . For the purpose of comparison against a state-of-the-art and competing baselines we choose BIBREF7 since BIBREF0 's system and data are not available for replication. Data and Preprocessing We evaluate our proposed models over a corpus of 12 MOOC iterations (offerings) on Coursera.org In partnership with Coursera and in line with its Terms of Service, we obtained the data for use in our academic research. Following prior work BIBREF7 we evaluate over a diverse dataset to represent MOOCs of varying sizes, instructor styles, instructor team sizes and number of threads intervened. 
We only include threads from sub-forums on Lecture, Homework, Quiz and Exam. We also normalise and label sub-forums with other non-standard names (e.g., Assignments instead of Homework) into one of the four said sub-forums. Threads on general discussion, meet and greet and other custom sub-forums for social chitchat are omitted, as our focus is to aid instructors in intervening on discussions of the subject matter. We also exclude announcement threads and other threads started by instructors since they are not interventions. We preprocess each thread by replacing URLs, equations and other mathematical formulae, and references to timestamps in lecture videos by the tokens INLINEFORM0 URL INLINEFORM1 , INLINEFORM2 MATH INLINEFORM3 , INLINEFORM4 TIMEREF INLINEFORM5 respectively. We also truncate intervened threads to only include posts before the first instructor post, since the instructor's post and subsequent posts would bias the prediction. Model The key innovation of our work is to decompose the intervention prediction problem into a two-stage model that first explicitly tries to discover the proper context to which a potential intervention could be replying, and then predicts the intervention status. This model implicitly assesses the importance (or urgency) of the existing thread's context to decide whether an intervention is necessary. For example in Figure SECREF1 , prior to the instructor's intervention, the ultimate post (Post #6) by Student 2 already acknowledged the OP's gratitude for his answer. In this regard, the instructor may have decided to use this point to summarize the entire thread to consolidate all the pertinent positions. Here, we might assume that the instructor's reply takes the entire thread (Posts #1–6) as the context for her reply. This subproblem of inferring the context scope is where our innovation centers. To be clear, in order to make the prediction that an instructor intervention is now necessary on a thread, the instructor's reply is not yet available (the model predicts whether a reply is necessary), so in the example, only Posts #1–6 are available in the problem setting. To infer the context, we have to decide which subsequence of posts is the most plausible motivation for an intervention. Recent work in deep neural modeling has used an attention mechanism as a focusing query to highlight specific items within the input history that significantly influence the current decision point. Our work employs this mechanism, but with a twist: because the actual instructor intervention is not (yet) available at the decision timing, we cannot use any actual intervention to decide the context. To employ attention, we must then employ a surrogate text as the query to train our prediction model. Our model variants assess the suitability of such surrogate texts as the basis for the attention mechanism. Congruent with the representation of the input forums, in all our proposed models, we encode the discussion thread hierarchically. We first build representations for each post by passing pre-trained word vector representations from GloVe BIBREF9 for each word through an LSTM BIBREF10 , INLINEFORM0 . We use the last layer output of the LSTM as a representation of the post. We refer to this as the post vector INLINEFORM1 . Then each post INLINEFORM0 is passed through another LSTM, INLINEFORM1 , whose last layer output forms the encoding of the entire thread.
Hidden unit outputs of INLINEFORM2 represent the contexts INLINEFORM3 ; that is, snapshots of the thread after each post, as shown in Figure FIGREF1 . The INLINEFORM0 and INLINEFORM1 together constitute the hierarchical LSTM (hLSTM) model. This general hLSTM model serves as the basis for our model exploration in the rest of this section. Contextual Attention Models When they intervene, instructors either pay attention to a specific post or to a series of posts, which triggers their reply. However, instructors rarely explicitly indicate to which post(s) their intervention relates. This is the case in our corpus, partly due to Coursera's user interface, which only allows for single-level comments (see Figure FIGREF2 ). Based solely on the binary, thread-level intervention signal, our secondary objective seeks to infer the appropriate context, represented by a sequence of posts, as the basis for the intervention. We only consider linear contiguous series of posts starting with the thread's original post to constitute a context; e.g., INLINEFORM0 . This is reasonable, as MOOC forum posts always reply to the original post or to a subsequent post, which in turn replies to the original post. This is in contrast to forums such as Reddit that have a tree or graph-like structure that requires the forum structure to be modelled explicitly, such as in BIBREF5 . We propose three neural attention BIBREF11 variants based on how an instructor might attend and reply to a context in a thread: the ultimate, penultimate and any post attention models. We review each of these in turn. Ultimate Post Attention (UPA) Model. In this model we attend to the context represented by the hidden state of the INLINEFORM0 . We use the post prior to the instructor's reply as a query over the contexts INLINEFORM1 to compute attention weights INLINEFORM2 , which are then used to compute the attended context representation INLINEFORM3 (recall again that the intervention text itself is not available for this purpose). This attention formulation makes an equivalence between the final INLINEFORM4 post and the prospective intervention, using Post INLINEFORM5 as the query for finding the appropriate context INLINEFORM6 , inclusive of itself INLINEFORM7 . Said another way, UPA uses the most recent content in the thread as the attentional query for context. For example, if post INLINEFORM0 is the instructor's reply, post INLINEFORM1 will query over the contexts INLINEFORM2 and INLINEFORM3 . The model schematic is shown in Figure FIGREF12 . The attended context representations are computed as: DISPLAYFORM0 The INLINEFORM0 representation is then passed through a fully connected softmax layer to yield the binary prediction. Penultimate Post Attention (PPA) Model. While the UPA model uses the most recent text and makes the ultimate post itself available as potential context, the ultimate post may be better modeled as having any of its prior posts as potential context. The Penultimate Post Attention (PPA) variant does this. The schematic and the equations for the PPA model are obtained by summing over contexts INLINEFORM0 in Equation EQREF10 and Figure FIGREF12 . While we could properly model such a context inference decision with any post INLINEFORM1 and prospective contexts INLINEFORM2 (where INLINEFORM3 is a random post), it makes sense to use the penultimate post, as we can make the most information available to the model for the context inference.
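A minimal PyTorch sketch of the hierarchical encoder and the UPA variant just described is given below; it is an assumed reconstruction (dot-product attention, single-layer LSTMs), not the authors' implementation.

```python
# Minimal sketch (assumed): a word-level LSTM builds post vectors, a post-level LSTM
# builds context snapshots, and the last available post queries those snapshots.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HLSTMWithUPA(nn.Module):
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)  # words -> post vector
        self.post_lstm = nn.LSTM(hidden, hidden, batch_first=True)   # posts -> thread snapshots
        self.out = nn.Linear(hidden, 2)                              # binary prediction

    def forward(self, thread):
        # thread: list of tensors, each (num_words, emb_dim) of pre-trained GloVe vectors
        post_vecs = []
        for post in thread:
            _, (h, _) = self.word_lstm(post.unsqueeze(0))
            post_vecs.append(h[-1])                        # last hidden state = post vector
        posts = torch.stack(post_vecs, dim=1)              # (1, num_posts, hidden)
        contexts, _ = self.post_lstm(posts)                # snapshots after each post
        query = post_vecs[-1]                              # ultimate available post as query
        scores = torch.bmm(contexts, query.unsqueeze(2)).squeeze(2)
        alpha = F.softmax(scores, dim=1)                   # attention weights over contexts
        attended = torch.bmm(alpha.unsqueeze(1), contexts).squeeze(1)
        return self.out(attended)                          # logits for intervene / not intervene
```

The PPA and APA variants would reuse the same hierarchical encoder and differ only in which post vector serves as the attention query and which context snapshots it is allowed to attend to.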
The attended context representations are computed as: DISPLAYFORM0 Any Post Attention (APA) Model. APA further relaxes both UPA and PPA, allowing APA to generalize and hypothesize that the prospective instructor intervention is based on the context that any previous post INLINEFORM0 replied to. In this model, each post INLINEFORM1 is set as a query to attend to its previous context INLINEFORM2 . For example, INLINEFORM3 will attend to INLINEFORM4 . Different from standard attention mechanisms, APA attention weights INLINEFORM5 are obtained by normalising the interaction matrix over the different queries. In APA, the attention context INLINEFORM0 is computed via: DISPLAYFORM0 Evaluation The baseline and the models are evaluated on a corpus of 12 MOOC discussion forums. We train on 80% of the data and report evaluation results on the held-out 20% test split. We report INLINEFORM0 scores on the positive class (interventions), in line with prior work. We also argue that recall of the positive class is more important than precision, since it is costlier for instructors to miss intervening on a thread than to spend time intervening on less critical threads due to false positives. Model hyperparameter settings. All proposed and baseline neural models are trained using the Adam optimizer with a learning rate of 0.001. We used cross-entropy as the loss function. Importantly, we updated the model parameters during training after each instance, as in vanilla stochastic gradient descent; this setting was practical since data on most courses had only a few hundred instances, enabling convergence within a reasonable training time of a few hours (see Table TABREF15 , column 2). Models were trained for a single epoch, as most of our courses, with a few hundred threads each, converged after a single epoch. We used 300-dimensional GloVe vectors and permitted the embeddings to be updated during the model's end-to-end training. The hidden dimension size of both INLINEFORM0 and INLINEFORM1 is set to 128 for all the models. Baselines. We compare our models against a neural baseline model, the hierarchical LSTM (hLSTM), with the attention ablated but with access to the complete context, and a strong, open-sourced feature-rich baseline BIBREF7 . We choose BIBREF7 over other prior works such as BIBREF0 since we do not have access to the dataset or the system used in their papers for replication. BIBREF7 is a logistic regression classifier with features including a bag-of-words representation of the unigrams, thread length, normalised counts of agreements to previous posts, counts of non-lexical reference items such as URLs, and the Coursera forum type in which a thread appeared. We also report aggregated results from an hLSTM model with access only to the last post as context for comparison. Table TABREF17 compares the performance of these baselines against our proposed methods. Results Table TABREF15 shows the performance of all our proposed models and the neural baseline over our 12-MOOC dataset. Our UPA and PPA models individually better the baseline by 5% and 2% on INLINEFORM0 , and by 3% and 6% on recall, respectively. UPA performs the best in terms of INLINEFORM1 on average, while PPA performs the best in terms of recall on average. At the individual course level, however, the results are mixed. UPA performs the best on INLINEFORM2 on 5 out of 12 courses, PPA on 3 out of 12 courses, APA on 1 out of 12 courses, and the baseline hLSTM on 1. PPA performs the best on recall on 7 out of the 12 courses.
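The training configuration stated above (Adam with learning rate 0.001, cross-entropy loss, an update after every instance, and a single epoch) corresponds to a loop like the following sketch; the model and data here are stand-in placeholders, not the hierarchical network or the Coursera corpus.

```python
# Minimal sketch (assumed): only the optimisation settings reflect the paper.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 2))          # placeholder for the hLSTM-based model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()

train_instances = [(torch.randn(1, 128), torch.tensor([1]))]  # toy (thread encoding, label) pairs

for features, label in train_instances:           # one parameter update per instance
    optimizer.zero_grad()
    loss = loss_fn(model(features), label)
    loss.backward()
    optimizer.step()
# training stops after this single pass over the data (one epoch), as stated above
```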
We also note that course-level performance differences correlate with the course size and intervention ratio (hereafter, i.ratio), which is the ratio of intervened to non-intervened threads. UPA performs better than PPA and APA on low-intervention courses (i.ratio INLINEFORM3 0.25), mainly because PPA and APA's performance drops steeply when i.ratio drops (see col.2 parenthesis and INLINEFORM4 of PPA and APA). All the proposed models beat the baseline on every course except casebased-2. On medicalneuro-2 and compilers-4, which have the lowest i.ratio among the 12 courses, none of the neural models beats the reported baseline BIBREF7 (course-level scores not shown in this paper). The effect is pronounced in the compilers-4 course, where none of the neural models were able to predict any intervened threads. This is due to the inherent weakness of standard neural models, which are unable to learn features well enough when faced with sparse data. The best performance of UPA indicates that the reply context of the instructor's post INLINEFORM0 correlates strongly with that of the previous post INLINEFORM1 . This is not surprising since normal conversations are typically structured that way. Discussion In order to further understand the models' ability to infer the context and its effect on intervention prediction, we investigate the following research questions. RQ1. Does context inference help intervention prediction? In order to understand if context inference is useful for intervention prediction, we ablate the attention components and experiment with the vanilla hierarchical LSTM model. Row 3 of Table TABREF17 shows the macro-averaged result from this experiment. The UPA and PPA attention models better the vanilla hLSTM by 5% and 2% on average in INLINEFORM0 respectively. Recall that the vanilla hLSTM already has access to a context consisting of all posts (from INLINEFORM1 through INLINEFORM2 ). In contrast, the UPA and PPA models selectively infer a context for INLINEFORM3 and INLINEFORM4 posts, respectively, and use it to predict intervention. The improved performance of our attention models, which actively select their optimal context, over a model with the complete thread as context, hLSTM, shows that context inference improves intervention prediction over using the default full context. RQ2. How well do the models perform across threads of different lengths? To understand the models' prediction performance across threads of different lengths, we bin threads by length and study the models' recall. We choose three courses, ml-5, rprog-3 and calc-1, from our corpus of 12 with the highest number of positive instances ( INLINEFORM0 100 threads). We limit our analysis to these since binning renders courses with fewer positive instances sparse. Figure FIGREF18 shows performance across thread lengths from 1 through 7 posts and INLINEFORM1 posts. Clearly, the UPA model performs much better on shorter threads than on longer threads, while PPA and APA work better on longer threads. Although UPA is the best-performing model in terms of overall INLINEFORM2 , its performance drops steeply on threads of length INLINEFORM3 . UPA's overall best performance is because most of the interventions in the corpus happen after one post. To highlight the performance of APA, we show an example from smac-1 in Figure FIGREF22 with nine posts, which was predicted correctly as intervened by APA but not by the other models. The thread shows students confused over a missing figure in a homework.
The instructor finally shows up, though late, to resolve the confusion. RQ3. Do models trained with different context lengths perform better than when trained on a single context length? We find that context length has a regularising effect on the model's performance at test time. This is not surprising since models trained with threads of single context length will not generalise to infer different context lengths. Row 4 of Table TABREF17 shows a steep performance drop in training by classifier with all threads truncated to a context of just one post, INLINEFORM0 , the post immediately preceding the intervened post. We also conducted an experiment with a multi-objective loss function with an additive cross-entropy term where each term computes loss from a model with context limited to a length of 3. We chose 3 since intervened threads in all the courses had a median length between 3 and 4. We achieved an INLINEFORM1 of 0.45 with a precision of 0.47 and recall of 0.43. This achieves a performance comparable to that of the BIBREF7 with context length set to only to 3. This approach of using infinitely many loss terms for each context length from 1 through the maximum thread length in a course is naive and not practical. We only use this model to show the importance of training the model with loss from threads of different lengths to prevent models overfitting to threads of specific context lengths. Conclusion We predict instructor intervention on student discussions by first inferring the optimal size of the context needed to decide on the intervention decision for the intervened post. We first show that a structured representation of the complete thread as the context is better than a bag-of-words, feature-rich representation. We then propose attention-based models to infer and select a context – defined as a contiguous subsequence of student posts – to improve over a model that always takes the complete thread as a context to prediction intervention. Our Any Post Attention (APA) model enables instructors to tune the model to predict intervention early or late. We posit our APA model will enable MOOC instructors employing varying pedagogical styles to use the model equally well. We introspect the attention models' performance across threads of varying lengths and show that APA predicts intervention on longer threads, which possesses more candidate contexts, better. We note that the recall of the predictive models for longer threads (that is, threads of length greater 2) can still be improved. Models perform differently between shorter and longer length. An ensemble model or a multi-objective loss function is thus planned in our future work to better prediction on such longer threads.
[ "the series of posts that trigger an intervention" ]
3,732
qasper_e
en
null
8e5e6c35689e17e9f50df76ada506a4c05be9ed081c883d4
What other evaluation metrics are looked at?
Introduction Sarcasm is an intensive, indirect and complex construct that is often intended to express contempt or ridicule . Sarcasm, in speech, is multi-modal, involving tone, body-language and gestures along with linguistic artifacts used in speech. Sarcasm in text, on the other hand, is more restrictive when it comes to such non-linguistic modalities. This makes recognizing textual sarcasm more challenging for both humans and machines. Sarcasm detection plays an indispensable role in applications like online review summarizers, dialog systems, recommendation systems and sentiment analyzers. This makes automatic detection of sarcasm an important problem. However, it has been quite difficult to solve such a problem with traditional NLP tools and techniques. This is apparent from the results reported by the survey from DBLP:journals/corr/JoshiBC16. The following discussion brings more insights into this. Consider a scenario where an online reviewer gives a negative opinion about a movie through sarcasm: “This is the kind of movie you see because the theater has air conditioning”. It is difficult for an automatic sentiment analyzer to assign a rating to the movie and, in the absence of any other information, such a system may not be able to comprehend that prioritizing the air-conditioning facilities of the theater over the movie experience indicates a negative sentiment towards the movie. This gives an intuition to why, for sarcasm detection, it is necessary to go beyond textual analysis. We aim to address this problem by exploiting the psycholinguistic side of sarcasm detection, using cognitive features extracted with the help of eye-tracking. A motivation to consider cognitive features comes from analyzing human eye-movement trajectories that supports the conjecture: Reading sarcastic texts induces distinctive eye movement patterns, compared to literal texts. The cognitive features, derived from human eye movement patterns observed during reading, include two primary feature types: The cognitive features, along with textual features used in best available sarcasm detectors, are used to train binary classifiers against given sarcasm labels. Our experiments show significant improvement in classification accuracy over the state of the art, by performing such augmentation. Related Work Sarcasm, in general, has been the focus of research for quite some time. In one of the pioneering works jorgensen1984test explained how sarcasm arises when a figurative meaning is used opposite to the literal meaning of the utterance. In the word of clark1984pretense, sarcasm processing involves canceling the indirectly negated message and replacing it with the implicated one. giora1995irony, on the other hand, define sarcasm as a mode of indirect negation that requires processing of both negated and implicated messages. ivanko2003context define sarcasm as a six tuple entity consisting of a speaker, a listener, Context, Utterance, Literal Proposition and Intended Proposition and study the cognitive aspects of sarcasm processing. Computational linguists have previously addressed this problem using rule based and statistical techniques, that make use of : (a) Unigrams and Pragmatic features BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 (b) Stylistic patterns BIBREF4 and patterns related to situational disparity BIBREF5 and (c) Hastag interpretations BIBREF6 , BIBREF7 . 
Most of the previously done work on sarcasm detection uses distant supervision based techniques (ex: leveraging hashtags) and stylistic/pragmatic features (emoticons, laughter expressions such as “lol” etc). But, detecting sarcasm in linguistically well-formed structures, in absence of explicit cues or information (like emoticons), proves to be hard using such linguistic/stylistic features alone. With the advent of sophisticated eye-trackers and electro/magneto-encephalographic (EEG/MEG) devices, it has been possible to delve deep into the cognitive underpinnings of sarcasm understanding. Filik2014, using a series of eye-tracking and EEG experiments try to show that for unfamiliar ironies, the literal interpretation would be computed first. They also show that a mismatch with context would lead to a re-interpretation of the statement, as being ironic. Camblin2007103 show that in multi-sentence passages, discourse congruence has robust effects on eye movements. This also implies that disrupted processing occurs for discourse incongruent words, even though they are perfectly congruous at the sentence level. In our previous work BIBREF8 , we augment cognitive features, derived from eye-movement patterns of readers, with textual features to detect whether a human reader has realized the presence of sarcasm in text or not. The recent advancements in the literature discussed above, motivate us to explore gaze-based cognition for sarcasm detection. As far as we know, our work is the first of its kind. Eye-tracking Database for Sarcasm Analysis Sarcasm often emanates from incongruity BIBREF9 , which enforces the brain to reanalyze it BIBREF10 . This, in turn, affects the way eyes move through the text. Hence, distinctive eye-movement patterns may be observed in the case of successful processing of sarcasm in text in contrast to literal texts. This hypothesis forms the crux of our method for sarcasm detection and we validate this using our previously released freely available sarcasm dataset BIBREF8 enriched with gaze information. Document Description The database consists of 1,000 short texts, each having 10-40 words. Out of these, 350 are sarcastic and are collected as follows: (a) 103 sentences are from two popular sarcastic quote websites, (b) 76 sarcastic short movie reviews are manually extracted from the Amazon Movie Corpus BIBREF11 by two linguists. (c) 171 tweets are downloaded using the hashtag #sarcasm from Twitter. The 650 non-sarcastic texts are either downloaded from Twitter or extracted from the Amazon Movie Review corpus. The sentences do not contain words/phrases that are highly topic or culture specific. The tweets were normalized to make them linguistically well formed to avoid difficulty in interpreting social media lingo. Every sentence in our dataset carries positive or negative opinion about specific “aspects”. For example, the sentence “The movie is extremely well cast” has positive sentiment about the aspect “cast”. The annotators were seven graduate students with science and engineering background, and possess good English proficiency. They were given a set of instructions beforehand and are advised to seek clarifications before they proceed. The instructions mention the nature of the task, annotation input method, and necessity of head movement minimization during the experiment. Task Description The task assigned to annotators was to read sentences one at a time and label them with with binary labels indicating the polarity (i.e., positive/negative). 
Note that, the participants were not instructed to annotate whether a sentence is sarcastic or not., to rule out the Priming Effect (i.e., if sarcasm is expected beforehand, processing incongruity becomes relatively easier BIBREF12 ). The setup ensures its “ecological validity” in two ways: (1) Readers are not given any clue that they have to treat sarcasm with special attention. This is done by setting the task to polarity annotation (instead of sarcasm detection). (2) Sarcastic sentences are mixed with non sarcastic text, which does not give prior knowledge about whether the forthcoming text will be sarcastic or not. The eye-tracking experiment is conducted by following the standard norms in eye-movement research BIBREF13 . At a time, one sentence is displayed to the reader along with the “aspect” with respect to which the annotation has to be provided. While reading, an SR-Research Eyelink-1000 eye-tracker (monocular remote mode, sampling rate 500Hz) records several eye-movement parameters like fixations (a long stay of gaze) and saccade (quick jumping of gaze between two positions of rest) and pupil size. The accuracy of polarity annotation varies between 72%-91% for sarcastic texts and 75%-91% for non-sarcastic text, showing the inherent difficulty of sentiment annotation, when sarcasm is present in the text under consideration. Annotation errors may be attributed to: (a) lack of patience/attention while reading, (b) issues related to text comprehension, and (c) confusion/indecisiveness caused due to lack of context. For our analysis, we do not discard the incorrect annotations present in the database. Since our system eventually aims to involve online readers for sarcasm detection, it will be hard to segregate readers who misinterpret the text. We make a rational assumption that, for a particular text, most of the readers, from a fairly large population, will be able to identify sarcasm. Under this assumption, the eye-movement parameters, averaged across all readers in our setting, may not be significantly distorted by a few readers who would have failed to identify sarcasm. This assumption is applicable for both regular and multi-instance based classifiers explained in section SECREF6 . Analysis of Eye-movement Data We observe distinct behavior during sarcasm reading, by analyzing the “fixation duration on the text” (also referred to as “dwell time” in the literature) and “scanpaths” of the readers. Variation in the Average Fixation Duration per Word Since sarcasm in text can be expected to induce cognitive load, it is reasonable to believe that it would require more processing time BIBREF14 . Hence, fixation duration normalized over total word count should usually be higher for a sarcastic text than for a non-sarcastic one. We observe this for all participants in our dataset, with the average fixation duration per word for sarcastic texts being at least 1.5 times more than that of non-sarcastic texts. To test the statistical significance, we conduct a two-tailed t-test (assuming unequal variance) to compare the average fixation duration per word for sarcastic and non-sarcastic texts. The hypothesized mean difference is set to 0 and the error tolerance limit ( INLINEFORM0 ) is set to 0.05. The t-test analysis, presented in Table TABREF11 , shows that for all participants, a statistically significant difference exists between the average fixation duration per word for sarcasm (higher average fixation duration) and non-sarcasm (lower average fixation duration). 
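The per-participant significance test described above (a two-tailed t-test with unequal variances on average fixation duration per word, with the error tolerance set to 0.05) amounts to a Welch's t-test; a minimal SciPy sketch with made-up values is shown below.

```python
# Minimal sketch (assumed, not the authors' analysis script).
from scipy import stats

def fixation_test(sarcastic_dwell, literal_dwell, alpha=0.05):
    """Each argument: one participant's average fixation duration per word,
    one value per sentence (sarcastic vs. non-sarcastic)."""
    t_stat, p_value = stats.ttest_ind(sarcastic_dwell, literal_dwell, equal_var=False)
    return t_stat, p_value, p_value < alpha  # True -> statistically significant

# Example with made-up dwell times (milliseconds per word):
print(fixation_test([310, 295, 330, 280], [190, 210, 200, 185]))
```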
This affirms that the presence of sarcasm affects the duration of fixation on words. It is important to note that longer fixations may also be caused by other linguistic subtleties (such as difficult words, ambiguity and syntactically complex structures) causing delay in comprehension, or by oculomotor control problems forcing readers to spend time adjusting their eye muscles. So, an elevated average fixation duration per word may not sufficiently indicate the presence of sarcasm. But we would also like to share that, for our dataset, when we considered readability (Flesch readability ease-score BIBREF15 ), number of words in a sentence and average characters per word along with the sarcasm label as the predictors of average fixation duration following a linear mixed effect model BIBREF16 , the sarcasm label turned out to be the most significant predictor, with the maximum slope. This indicates that average fixation duration per word has a strong connection with the text being sarcastic, at least in our dataset. We now analyze scanpaths to gain more insights into the sarcasm comprehension process. Analysis of Scanpaths Scanpaths are line-graphs that contain fixations as nodes and saccades as edges; the radii of the nodes represent the fixation duration. A scanpath corresponds to a participant's eye-movement pattern while reading a particular sentence. Figure FIGREF14 presents scanpaths of three participants for the sarcastic sentence S1 and the non-sarcastic sentence S2. The x-axis of the graph represents the sequence of words a reader reads, and the y-axis represents a temporal sequence in milliseconds. Consider a sarcastic text containing incongruous phrases A and B. Our qualitative scanpath analysis reveals that scanpaths with respect to sarcasm processing have two typical characteristics. Often, a long regression (a saccade that returns to a previously visited segment) is observed when a reader starts reading B after skimming through A. In a few cases, the fixation durations on A and B are significantly higher than the average fixation duration per word. In sentence S1, we see long and multiple regressions from the two incongruous phrases “misconception” and “cherish”, and a few instances where the phrases “always cherish” and “original misconception” are fixated longer than usual. Such eye-movement behaviors are not seen for S2. Though sarcasm induces distinctive scanpaths like the ones depicted in Figure FIGREF14 in the observed examples, the presence of such patterns is not sufficient to guarantee sarcasm; such patterns may also arise from literal texts. We believe that a combination of linguistic features, readability of text and features derived from scanpaths would help discriminative machine learning models learn sarcasm better. Features for Sarcasm Detection We describe the features used for sarcasm detection in Table . The features listed under lexical, implicit incongruity and explicit incongruity are borrowed from various literature (predominantly from joshi2015harnessing). These features are essential to separate sarcasm from other forms of semantic incongruity in text (for example, incongruity arising from semantic ambiguity or from metaphors). Two additional textual features, viz. readability and word count of the text, are also taken into consideration. These features are used to reduce the effect of text hardness and text length on the eye-movement patterns.
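The linear mixed-effects check mentioned above can be sketched with statsmodels as follows; the column names, the toy data, and the choice of participant as the random-effects grouping are assumptions rather than details given in the text.

```python
# Minimal sketch (assumed): regress average fixation duration per word on the
# sarcasm label, Flesch reading-ease, word count and average characters per word.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # toy stand-in for (participant, sentence) observations
df = pd.DataFrame({
    "avg_fix_dur": rng.normal(250, 50, n),
    "sarcasm": rng.integers(0, 2, n),
    "flesch": rng.normal(60, 10, n),
    "n_words": rng.integers(10, 40, n),
    "chars_per_word": rng.normal(4.5, 0.5, n),
    "participant": rng.integers(1, 8, n),
})

model = smf.mixedlm("avg_fix_dur ~ sarcasm + flesch + n_words + chars_per_word",
                    data=df, groups=df["participant"])
print(model.fit().summary())  # inspect which predictor has the largest significant slope
```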
Simple Gaze Based Features Readers' eye-movement behavior, characterized by fixations, forward saccades, skips and regressions, can be directly quantified by simple statistical aggregation (i.e., either computing features for individual participants and then averaging, or performing multi-instance based learning as explained in section SECREF6 ). Since these eye-movement attributes relate to the cognitive processes involved in reading BIBREF17 , we consider them as features in our model. Some of these features have been reported by sarcasmunderstandability for modeling the sarcasm understandability of readers. However, as far as we know, these features are being introduced to NLP tasks like textual sarcasm detection for the first time. The values of these features are believed to increase with the degree of surprisal caused by incongruity in the text (except skip count, which decreases). Complex Gaze Based Features For these features, we rely on a graph structure, namely “saliency graphs", derived from eye-gaze information and word sequences in the text. For each reader and each sentence, we construct a “saliency graph” representing the reader's attention characteristics. A saliency graph for a sentence INLINEFORM0 and a reader INLINEFORM1 , represented as INLINEFORM2 , is a graph with vertices ( INLINEFORM3 ) and edges ( INLINEFORM4 ), where each vertex INLINEFORM5 corresponds to a word in INLINEFORM6 (not necessarily unique) and there exists an edge INLINEFORM7 between vertices INLINEFORM8 and INLINEFORM9 if the reader performs at least one saccade between the words corresponding to INLINEFORM10 and INLINEFORM11 . Figure FIGREF15 shows an example of a saliency graph. A saliency graph may be weighted, but not necessarily connected, for a given text (as there may be words in the text with no fixation on them). The “complex” gaze features derived from saliency graphs are also motivated by the theory of incongruity. For instance, the edge density of a saliency graph increases with the number of distinct saccades, which could arise from the complexity caused by the presence of sarcasm. Similarly, the highest weighted degree of a graph is expected to be higher if the corresponding node represents a phrase incongruous with some other phrase in the text. The Sarcasm Classifier We interpret sarcasm detection as a binary classification problem. The training data constitutes 994 examples created using our eye-movement database for sarcasm detection. To check the effectiveness of our feature set, we observe the performance of multiple classification techniques on our dataset through stratified 10-fold cross-validation. We also compare the classification accuracy of our system with the best available systems proposed by riloff2013sarcasm and joshi2015harnessing on our dataset. Using the Weka BIBREF18 and LibSVM BIBREF19 APIs, we implement the following classifiers: Results Table TABREF17 shows the classification results considering various feature combinations for different classifiers and other systems. These are: Unigram (with principal components of unigram feature vectors), Sarcasm (the feature set reported by joshi2015harnessing, subsuming unigram features and features from other reported systems), Gaze (the simple and complex cognitive features we introduce, along with readability and word count features), and Gaze+Sarcasm (the complete set of features). For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm-related features, as sketched below.
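To illustrate the two aggregation schemes (per-sentence averaging for the regular classifiers, and per-participant bags for the multi-instance classifier discussed next), here is a small sketch over a made-up feature table; the column names are illustrative and not taken from the paper.

```python
import pandas as pd

# Hypothetical per-(sentence, participant) gaze features.
gaze = pd.DataFrame({
    "sentence_id":    [1, 1, 1, 2, 2, 2],
    "participant":    ["P1", "P2", "P3", "P1", "P2", "P3"],
    "avg_fix_dur":    [310.0, 402.1, 288.7, 198.3, 205.9, 190.2],
    "regression_cnt": [4, 6, 3, 1, 0, 2],
})

# Regular classifiers: average gaze features across participants per sentence.
averaged = gaze.groupby("sentence_id")[["avg_fix_dur", "regression_cnt"]].mean()

# Multi-instance setting: keep one bag of per-participant feature vectors per sentence.
bags = {sid: grp[["avg_fix_dur", "regression_cnt"]].to_numpy()
        for sid, grp in gaze.groupby("sentence_id")}

print(averaged)
print({sid: bag.shape for sid, bag in bags.items()})
```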
For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features, and thus a multi-instance “bag” of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi-instance assumption to derive class labels for each bag. For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier obtaining an F-score improvement of 3.7% and a Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline with the SVM classifier when we employ our feature set. We further observe that the gaze features alone capture the differences between the sarcasm and non-sarcasm classes with high precision but low recall. To see whether the improvement obtained is statistically significant over the state-of-the-art system with textual sarcasm features alone, we perform McNemar's test. The output of the SVM classifier using only the linguistic features used for sarcasm detection by joshi2015harnessing and the output of the MILR classifier with the complete set of features are compared, setting the threshold INLINEFORM0 . There was a significant difference in the classifiers' accuracy, with p(two-tailed) = 0.02 and an odds ratio of 1.43, showing that the classification accuracy improvement is unlikely to have been observed by chance at the 95% confidence level. Considering Reading Time as a Cognitive Feature along with Sarcasm Features One may argue that considering simple measures of reading effort like “reading time” as the cognitive feature, instead of the expensive eye-tracking features, may be a cost-effective solution for sarcasm detection. To examine this, we repeated our experiments with “reading time” considered as the only cognitive feature, augmented with the textual features. The F-scores of all the classifiers turn out to be close to those of the classifiers considering the sarcasm features alone, and the difference in the improvement is not statistically significant ( INLINEFORM0 ). On the other hand, F-scores with gaze features are superior to the F-scores when reading time is considered as the cognitive feature. How Effective are the Cognitive Features We examine the effectiveness of the cognitive features on classification accuracy by varying the input training data size. To do so, we create a stratified (keeping the class ratio constant) random train-test split of 80%:20%. We train our classifier with 100%, 90%, 80% and 70% of the training data using our whole feature set, and using the feature combination from joshi2015harnessing. The goodness of our system is demonstrated by improvements in F-score and Kappa statistics, shown in Figure FIGREF22 . We further analyze the importance of features by ranking them based on (a) the Chi-squared test, and (b) the Information Gain test, using Weka's attribute selection module. Figure FIGREF23 shows the top 20 ranked features produced by both tests. In both cases, we observe that 16 of the top 20 features are gaze features. Further, in each case, Average Fixation Duration per Word and Largest Regression Position are the two most significant features. Example Cases Table TABREF21 shows a few example cases from the experiment with the stratified 80%-20% train-test split. Example sentence 1 is sarcastic, and requires extra-linguistic knowledge (about poor living conditions at Manchester).
Hence, a sarcasm detector relying only on textual features is unable to detect the underlying incongruity. However, our system predicts the label successfully, possibly helped by the gaze features. Similarly, for sentence 2, the false sense of incongruity (due to phrases like “Helped me” and “Can't stop”) affects the system with only linguistic features. Our system, though, performs well in this case also. Sentence 3 presents a false-negative case where it was hard even for humans to get the sarcasm. This is why our gaze features (and consequently the complete set of features) lead to an erroneous prediction. In sentence 4, gaze features alone falsely indicate the presence of incongruity, whereas the system predicts correctly when gaze and linguistic features are taken together. From these examples, it can be inferred that gaze features alone would not have sufficed to rule out other forms of incongruity that do not result in sarcasm. Error Analysis Errors committed by our system arise from multiple factors, ranging from limitations of the eye-tracker hardware to errors committed by linguistic tools and resources. Also, aggregating various eye-tracking parameters to extract the cognitive features may have caused information loss in the regular classification setting. Conclusion In the current work, we created a novel framework for detecting sarcasm that derives insights from human cognition as manifested in eye-movement patterns. We hypothesized that the distinctive eye-movement patterns associated with reading sarcastic text enable improved detection of sarcasm. We augmented traditional linguistic features with cognitive features obtained from readers' eye-movement data, in the form of simple gaze-based features and complex features derived from a graph structure. This extended feature set improved the success rate of the sarcasm detector by 3.7% over the best available system. Using cognitive features in an NLP system like ours is the first proposal of its kind. Our general approach may be useful in other NLP sub-areas like sentiment and emotion analysis, text summarization and question answering, where considering textual clues alone does not prove sufficient. We propose to extend this work in the future by exploring deeper graph and gaze features. We also propose to develop models for learning complex gaze feature representations that account for the power of individual eye-movement patterns along with the aggregated patterns of eye movements. Acknowledgments We thank the members of the CFILT Lab, especially Jaya Jha and Meghna Singh, and the students of IIT Bombay for their help and support.
[ "F-score, Kappa", "Unanswerable" ]
3,544
qasper_e
en
null
ebfe29b14637ff245ca9117167c10d252b21c64aa484e871
What were the baselines?
Introduction In the field of natural language processing (NLP), the most prevalent neural approach to obtaining sentence representations is to use recurrent neural networks (RNNs), where words in a sentence are processed in a sequential and recurrent manner. Along with their intuitive design, RNNs have shown outstanding performance across various NLP tasks e.g. language modeling BIBREF0 , BIBREF1 , machine translation BIBREF2 , BIBREF3 , BIBREF4 , text classification BIBREF5 , BIBREF6 , and parsing BIBREF7 , BIBREF8 . Among several variants of the original RNN BIBREF9 , gated recurrent architectures such as long short-term memory (LSTM) BIBREF10 and gated recurrent unit (GRU) BIBREF2 have been accepted as de-facto standard choices for RNNs due to their capability of addressing the vanishing and exploding gradient problem and considering long-term dependencies. Gated RNNs achieve these properties by introducing additional gating units that learn to control the amount of information to be transferred or forgotten BIBREF11 , and are proven to work well without relying on complex optimization algorithms or careful initialization BIBREF12 . Meanwhile, the common practice for further enhancing the expressiveness of RNNs is to stack multiple RNN layers, each of which has distinct parameter sets (stacked RNN) BIBREF13 , BIBREF14 . In stacked RNNs, the hidden states of a layer are fed as input to the subsequent layer, and they are shown to work well due to increased depth BIBREF15 or their ability to capture hierarchical time series BIBREF16 which are inherent to the nature of the problem being modeled. However this setting of stacking RNNs might hinder the possibility of more sophisticated recurrence-based structures since the information from lower layers is simply treated as input to the next layer, rather than as another class of state that participates in core RNN computations. Especially for gated RNNs such as LSTMs and GRUs, this means that layer-to-layer connections cannot fully benefit from the carefully constructed gating mechanism used in temporal transitions. Some recent work on stacking RNNs suggests alternative methods that encourage direct and effective interaction between RNN layers by adding residual connections BIBREF17 , BIBREF18 , by shortcut connections BIBREF18 , BIBREF19 , or by using cell states of LSTMs BIBREF20 , BIBREF21 . In this paper, we propose a method of constructing multi-layer LSTMs where cell states are used in controlling the vertical information flow. This system utilizes states from the left and the lower context equally in computation of the new state, thus the information from lower layers is elaborately filtered and reflected through a soft gating mechanism. Our method is easy-to-implement, effective, and can replace conventional stacked LSTMs without much modification of the overall architecture. We call the proposed architecture Cell-aware Stacked LSTM, or CAS-LSTM, and evaluate our method on multiple benchmark datasets: SNLI BIBREF22 , MultiNLI BIBREF23 , Quora Question Pairs BIBREF24 , and SST BIBREF25 . From experiments we show that the CAS-LSTMs consistently outperform typical stacked LSTMs, opening the possibility of performance improvement of architectures that use stacked LSTMs. Our contribution is summarized as follows. This paper is organized as follows. We give a detailed description about the proposed method in § SECREF2 . Experimental results are given in § SECREF3 . 
We study prior work related to our objective in § SECREF4 and conclude in § SECREF5 . Model Description In this section, we give a detailed formulation of the architectures used in experiments. Notation Throughout this paper, we denote matrices as boldface capital letters ( INLINEFORM0 ), vectors as boldface lowercase letters ( INLINEFORM1 ), and scalars as normal italic letters ( INLINEFORM2 ). For LSTM states, we denote a hidden state as INLINEFORM3 and a cell state as INLINEFORM4 . Also, a layer index of INLINEFORM5 or INLINEFORM6 is denoted by superscript and a time index is denoted by a subscript, i.e. INLINEFORM7 indicates the hidden state at time INLINEFORM8 and layer INLINEFORM9 . INLINEFORM10 means the element-wise multiplication between two vectors. We write INLINEFORM11 -th component of vector INLINEFORM12 as INLINEFORM13 . All vectors are assumed to be column vectors. Stacked LSTMs While there exist various versions of LSTM formulation, in this work we use the following, one of the most common versions: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , INLINEFORM3 , INLINEFORM4 are trainable parameters. INLINEFORM5 and INLINEFORM6 are the sigmoid activation and the hyperbolic tangent activation function respectively. Also we assume that INLINEFORM7 where INLINEFORM8 is the INLINEFORM9 -th input to the network. The input gate INLINEFORM0 and the forget gate INLINEFORM1 control the amount of information transmitted from INLINEFORM2 and INLINEFORM3 , the candidate cell state and the previous cell state, to the new cell state INLINEFORM4 . Similarly the output gate INLINEFORM5 soft-selects which portion of the cell state INLINEFORM6 is to be used in the final hidden state. We can clearly see that cell states ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) play a crucial role in forming horizontal recurrence. However the current formulation does not consider INLINEFORM3 , the cell state from INLINEFORM4 -th layer, in computation and thus the lower context is reflected only through the rudimentary way, hindering the possibility of controlling vertical information flow. Cell-aware Stacked LSTMs Now we extend the stacked LSTM formulation defined above to address the problem noted in the previous subsection. To enhance the interaction between layers in a way similar to how LSTMs keep and forget the information from the previous time step, we introduce the additional forget gate INLINEFORM0 that determines whether to accept or ignore the signals coming from the previous layer. Therefore the proposed Cell-aware Stacked LSTM is formulated as follows: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 and INLINEFORM1 . INLINEFORM2 can either be a vector of constants or parameters. When INLINEFORM3 , the equations defined in the previous subsection are used. Therefore, it can be said that each non-bottom layer of CAS-LSTM accepts two sets of hidden and cell states—one from the left context and the other from the below context. The left and the below context participate in computation with the equivalent procedure so that the information from lower layers can be efficiently propagated. Fig. FIGREF1 compares CAS-LSTM to the conventional stacked LSTM architecture, and Fig. FIGREF8 depicts the computation flow of the CAS-LSTM. We argue that considering INLINEFORM0 in computation is beneficial for the following reasons. First, INLINEFORM1 contains additional information compared to INLINEFORM2 since it is not filtered by INLINEFORM3 . 
Thus a model that directly uses INLINEFORM4 does not rely solely on INLINEFORM5 for extracting information, due to the fact that it has access to the raw information INLINEFORM6 , as in temporal connections. In other words, INLINEFORM7 no longer has to take all responsibility for selecting useful features for both horizontal and vertical transitions, and the burden of selecting information is shared with INLINEFORM8 . Another advantage of using the INLINEFORM0 lies in the fact that it directly connects INLINEFORM1 and INLINEFORM2 . This direct connection helps and stabilizes training, since the terminal error signals can be easily backpropagated to model parameters. Fig. FIGREF23 illustrates paths between the two cell states. We find experimentally that there is little difference between letting INLINEFORM0 be constant and letting it be trainable parameters, thus we set INLINEFORM1 in all experiments. We also experimented with the architecture without INLINEFORM2 i.e. two cell states are combined by unweighted summation similar to multidimensional RNNs BIBREF27 , and found that it leads to performance degradation and unstable convergence, likely due to mismatch in the range of cell state values between layers ( INLINEFORM3 for the first layer and INLINEFORM4 for the others). Experimental results on various INLINEFORM5 are presented in § SECREF3 . The idea of having multiple states is also related to tree-structured RNNs BIBREF29 , BIBREF30 . Among them, tree-structured LSTMs (Tree-LSTMs) BIBREF31 , BIBREF32 , BIBREF33 are similar to ours in that they use both hidden and cell states from children nodes. In Tree-LSTMs, states for all children nodes are regarded as input, and they participate in the computation equally through weight-shared (in Child-Sum Tree-LSTMs) or weight-unshared (in INLINEFORM0 -ary Tree-LSTMs) projection. From this perspective, each CAS-LSTM layer (where INLINEFORM1 ) can be seen as a binary Tree-LSTM where the structures it operates on are fixed to right-branching trees. The use of cell state in computation could be one reason that Tree-LSTMs perform better than sequential LSTMs even when trivial trees (strictly left- or right-branching) are given BIBREF34 . Multidimensional RNNs (MDRNN) are an extension of 1D sequential RNNs that can accept multidimensional input e.g. images, and have been successfully applied to image segmentation BIBREF26 and handwriting recognition BIBREF27 . Notably multidimensional LSTMs (MDLSTM) BIBREF27 have an analogous formulation to ours except the INLINEFORM0 term and the fact that we use distinct weights per column (or `layer' in our case). From this view, CAS-LSTM can be seen as a certain kind of MDLSTM that accepts a 2D input INLINEFORM1 . Grid LSTMs BIBREF21 also take INLINEFORM2 inputs but emit INLINEFORM3 outputs, which is different from our case where a single set of hidden and cell states is produced. Sentence Encoders The sentence encoder network we use in our experiments takes INLINEFORM0 words (assumed to be one-hot vectors) as input. The words are projected to corresponding word representations: INLINEFORM1 where INLINEFORM2 . Then INLINEFORM3 is fed to a INLINEFORM4 -layer CAS-LSTM model, resulting in the representations INLINEFORM5 . The sentence representation, INLINEFORM6 , is computed by max-pooling INLINEFORM7 over time as in the work of BIBREF35 . Similar to their results, from preliminary experiments we found that the max-pooling performs consistently better than mean- and last-pooling. 
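As a rough illustration of the max-pooling step described above, the sketch below pools hidden states over time; a vanilla stacked PyTorch LSTM stands in for the CAS-LSTM encoder, so this is not the authors' implementation, and all dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn

class MaxPoolSentenceEncoder(nn.Module):
    """Encode a sentence by max-pooling encoder hidden states over time."""
    def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=300, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Stand-in for the CAS-LSTM stack: an ordinary stacked LSTM.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers,
                               batch_first=True)

    def forward(self, token_ids):                        # (batch, time)
        states, _ = self.encoder(self.embed(token_ids))  # (batch, time, hidden)
        sentence_vec, _ = states.max(dim=1)              # max-pool over time
        return sentence_vec                              # (batch, hidden)

encoder = MaxPoolSentenceEncoder()
dummy_batch = torch.randint(0, 10000, (4, 12))  # hypothetical token ids
print(encoder(dummy_batch).shape)               # torch.Size([4, 300])
```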
To make models more expressive, a bidirectional CAS-LSTM network may also be used. In the bidirectional case, the forward representations INLINEFORM0 and the backward representations INLINEFORM1 are concatenated and max-pooled to yield the sentence representation INLINEFORM2 . We call this bidirectional architecture Bi-CAS-LSTM in our experiments. Top-layer Classifiers For the natural language inference experiments, we use the following heuristic function proposed by BIBREF36 for feature extraction: DISPLAYFORM0 where INLINEFORM0 denotes vector concatenation, and INLINEFORM1 and INLINEFORM2 are applied element-wise. We use the following function in the paraphrase identification experiments: DISPLAYFORM0 as in the work of BIBREF37 . For sentiment classification, we use the sentence representation itself. DISPLAYFORM0 We feed the feature extracted from INLINEFORM0 as input to an MLP classifier with ReLU activation, followed by a fully-connected softmax layer, to predict the label distribution: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 is the number of label classes, and INLINEFORM2 is the dimension of the MLP output. Experiments We evaluate our method on natural language inference (NLI), paraphrase identification (PI), and sentiment classification. We also conduct an analysis of gate values and experiments on model variants. For detailed experimental settings, we refer readers to the supplemental material. For the NLI and PI tasks, there exists recent work specializing in sentence pair classification. However, in this work we confine our model to an architecture that encodes each sentence using a shared encoder without any inter-sentence interaction, in order to focus on the effectiveness of the models in extracting semantics. Note, however, that the applicability of CAS-LSTM is not limited to sentence encoding based approaches. Natural Language Inference To evaluate the performance of the proposed method on the NLI task, the SNLI BIBREF22 and MultiNLI BIBREF23 datasets are used. The objective of both datasets is to predict the relationship between a premise and a hypothesis sentence: entailment, contradiction, or neutral. The SNLI and MultiNLI datasets are composed of about 570k and 430k premise-hypothesis pairs respectively. GloVe pretrained word embeddings BIBREF49 are used and remain fixed during training. The dimension of the encoder states ( INLINEFORM0 ) is set to 300 and a 1024D MLP with one or two hidden layers is used. We apply dropout BIBREF50 to the word embeddings and the MLP layers. The features used as input to the MLP classifier are extracted following Eq. EQREF28 . Tables TABREF32 and TABREF33 contain the results of the models on the SNLI and MultiNLI datasets. On SNLI, our best model achieves the new state-of-the-art accuracy of 87.0% with relatively fewer parameters. Similarly, on MultiNLI, our models match the accuracy of state-of-the-art models on both the in-domain (matched) and cross-domain (mismatched) test sets. Note that only the GloVe word vectors are used as word representations, as opposed to some models that introduce character-level features. It is also notable that our proposed architecture does not restrict the selection of the pooling method; performance could be further improved by replacing max-pooling with more advanced algorithms, e.g. intra-sentence attention BIBREF39 and generalized pooling BIBREF19 . Paraphrase Identification We use the Quora Question Pairs dataset BIBREF24 to evaluate the performance of our method on the PI task.
The dataset consists of over 400k question pairs, and each pair is annotated with whether the two sentences are paraphrase of each other or not. Similar to the NLI experiments, GloVe pretrained vectors, 300D encoders, and 1024D MLP are used. The number of CAS-LSTM layers is fixed to 2 in PI experiments. Two sentence vectors are aggregated using Eq. EQREF29 and fed as input to the MLP. The results on the Quora Question Pairs dataset are summarized in Table TABREF34 . Again we can see that our models outperform other models by large margin, achieving the new state of the art. Sentiment Classification In evaluating sentiment classification performance, the Stanford Sentiment Treebank (SST) BIBREF25 is used. It consists of about 12,000 binary-parsed sentences where constituents (phrases) of each parse tree are annotated with a sentiment label (very positive, positive, neutral, negative, very negative). Following the convention of prior work, all phrases and their labels are used in training but only the sentence-level data are used in evaluation. In evaluation we consider two settings, namely SST-2 and SST-5, the two differing only in their level of granularity with regard to labels. In SST-2, data samples annotated with `neutral' are ignored from training and evaluation. The two positive labels (very positive, positive) are considered as the same label, and similarly for the two negative labels. As a result 98,794/872/1,821 data samples are used in training/validation/test, and the task is considered as a binary classification problem. In SST-5, data are used as-is and thus the task is a 5-class classification problem. All 318,582/1,101/2,210 data samples for training/validation/test are used in the SST-5 setting. We use 300D GloVe vectors, 2-layer 150D or 300D encoders, and a 300D MLP classifier for the models, however unlike previous experiments we tune the word embeddings during training. The results on SST are listed in Table TABREF35 . Our models achieve the new state-of-the-art accuracy on SST-2 and competitive accuracy on SST-5, without utilizing parse tree information. Forget Gate Analysis To inspect the effect of the additional forget gate, we investigate how the values of vertical forget gates are distributed. We sample 1,000 random sentences from the development set of the SNLI dataset, and use the 3-layer CAS-LSTM model trained on the SNLI dataset to compute gate values. If all values from a vertical forget gate INLINEFORM0 were to be 0, this would mean that the introduction of the additional forget gate is meaningless and the model would reduce to a plain stacked LSTM. On the contrary if all values were 1, meaning that the vertical forget gates were always open, it would be impossible to say that the information is modulated effectively. Fig. FIGREF40 and FIGREF40 represent histograms of the vertical forget gate values from the second and the third layer. From the figures we can validate that the trained model does not fall into the degenerate case where vertical forget gates are ignored. Also the figures show that the values are right-skewed, which we conjecture to be a result of focusing more on a strong interaction between adjacent layers. To further verify that the gate values are diverse enough within each time step, we compute the distribution of the range of values per time step, INLINEFORM0 , where INLINEFORM1 . We plot the histograms in Fig. FIGREF40 and FIGREF40 . 
From the figure we see that a vertical forget gate effectively controls the amount of information flow, making the decision to retain or discard signals. Finally, to investigate the argument presented in § SECREF2 that the additional forget gate helps the previous output gate by reducing the burden of extracting all needed information, we inspect the distribution of the values of INLINEFORM0 . This distribution indicates how differently the vertical forget gate and the previous output gate select information from INLINEFORM1 . From Fig. FIGREF40 and FIGREF40 we can see that the two gates make fairly different decisions, from which we conclude that the direct path between INLINEFORM2 and INLINEFORM3 enables a model to utilize signals overlooked by INLINEFORM4 . Model Variations In this subsection, we examine the influence of each component of the model on performance by removing or replacing its components. The SNLI dataset is used for these experiments, and the best performing configuration is used as the baseline for modifications. We consider the following variants: (i) models that use plain stacked LSTMs, (ii) models with different INLINEFORM0 , (iii) models without INLINEFORM1 , and (iv) models that integrate lower contexts via peephole connections. Variant (iv) integrates lower contexts via the following equations: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 represent peephole weights that take cell states into account. Among the above equations, those that use the lower cell state INLINEFORM1 are Eq. EQREF52 and EQREF55 . We can see that INLINEFORM2 affects the value of INLINEFORM3 only via peephole connections, which makes INLINEFORM4 independent of INLINEFORM5 . Table TABREF36 summarizes the results of the model variants. We can again see that the use of cell states clearly improves sentence modeling performance (baseline vs. (i) and (iv) vs. (i)). Also, from the results of the baseline and (ii), we validate that the selection of INLINEFORM0 does not significantly affect performance, but that introducing INLINEFORM1 is beneficial (baseline vs. (iii)), possibly due to its effect of normalizing information from multiple sources, as mentioned in § SECREF2 . Finally, from the comparison between the baseline and (iv), we show that the proposed way of combining the left and the lower contexts leads to better sentence representations than that of BIBREF20 . Conclusion In this paper, we proposed a method of stacking multiple LSTM layers for modeling sentences, dubbed CAS-LSTM. It uses not only hidden states but also cell states from the previous layer, in order to control the vertical information flow in a more elaborate way. We evaluated the proposed method on various benchmark tasks: natural language inference, paraphrase identification, and sentiment classification. Our models achieve new state-of-the-art accuracy on the SNLI and Quora Question Pairs datasets and obtain comparable results on the MultiNLI and SST datasets. The proposed architecture can replace any stacked LSTM under one weak restriction: the size of the states should be identical across all layers. For future work we plan to apply the CAS-LSTM architecture beyond sentence modeling tasks. Various problems, e.g. sequence labeling, sequence generation, and language modeling, might benefit from sophisticated modulation of context integration. Aggregating diverse contexts from sequential data, e.g. from forward and backward readings of a text, could also be an intriguing research direction.
Acknowledgments We thank Dan Edmiston for the review of the manuscript.
[ "(i) models that use plain stacked LSTMs, (ii) models with different INLINEFORM0, (iii) models without INLINEFORM1, (iv) models that integrate lower contexts via peephole connections" ]
3,224
qasper_e
en
null
f4dd3a0b7e956fadae8f49e07a6ef1c6e90a315d21969a76
Is jiant compatible with models in any programming language?
Introduction This paper introduces jiant, an open source toolkit that allows researchers to quickly experiment on a wide array of NLU tasks, using state-of-the-art NLP models, and conduct experiments on probing, transfer learning, and multitask training. jiant supports many state-of-the-art Transformer-based models implemented by Huggingface's Transformers package, as well as non-Transformer models such as BiLSTMs. Packages and libraries like HuggingFace's Transformers BIBREF0 and AllenNLP BIBREF1 have accelerated the process of experimenting and iterating on NLP models by both abstracting out implementation details, and simplifying the model training pipeline. jiant extends the capabilities of both toolkits by presenting a wrapper that implements a variety of complex experimental pipelines in a scalable and easily controllable setting. jiant contains a task bank of over 50 tasks, including all the tasks presented in GLUE BIBREF2, SuperGLUE BIBREF3, the edge-probing suite BIBREF4, and the SentEval probing suite BIBREF5, as well as other individual tasks including CCG supertagging BIBREF6, SocialIQA BIBREF7, and CommonsenseQA BIBREF8. jiant is also the official baseline codebase for the SuperGLUE benchmark. jiant's core design principles are: Ease of use: jiant should allow users to run a variety of experiments using state-of-the-art models via an easy to use configuration-driven interface. jiant should also provide features that support correct and reproducible experiments, including logging and saving and restoring model state. Availability of NLU tasks: jiant should maintain and continue to grow a collection of tasks useful for NLU research, especially popular evaluation tasks and tasks commonly used in pretraining and transfer learning. Availability of cutting-edge models: jiant should make implementations of state-of-the-art models available for experimentation. Open source: jiant should be free to use, and easy to contribute to. Early versions of jiant have already been used in multiple works, including probing analyses BIBREF4, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, transfer learning experiments BIBREF14, BIBREF15, and dataset and benchmark construction BIBREF3, BIBREF2, BIBREF16. Background Transfer learning is an area of research that uses knowledge from pretrained models to transfer to new tasks. In recent years, Transformer-based models like BERT BIBREF17 and T5 BIBREF18 have yielded state-of-the-art results on the lion's share of benchmark tasks for language understanding through pretraining and transfer, often paired with some form of multitask learning. jiant enables a variety of complex training pipelines through simple configuration changes, including multi-task training BIBREF19, BIBREF20 and pretraining, as well as the sequential fine-tuning approach from STILTs BIBREF15. In STILTs, intermediate task training takes a pretrained model like ELMo or BERT, and applies supplementary training on a set of intermediate tasks, before finally performing single-task training on additional downstream tasks. jiant System Overview ::: Requirements jiant v1.3.0 requires Python 3.5 or later. jiant can be installed via pip, or cloned and installed from GitHub. jiant's core dependencies are PyTorch BIBREF21, AllenNLP BIBREF1, and HuggingFace's Transformers BIBREF0. jiant is released under the MIT License BIBREF22. 
jiant System Overview ::: jiant Components Tasks: Tasks have references to task data, methods for processing data, references to classifier heads, and methods for calculating performance metrics and making predictions. Sentence Encoder: Sentence encoders map from the indexed examples to a sentence-level representation. Sentence encoders can include an input module (e.g., Transformer models, ELMo, or word embeddings), followed by an optional second layer of encoding (usually a BiLSTM). Examples of possible sentence encoder configurations include BERT, ELMo followed by a BiLSTM, BERT with a variety of pooling and aggregation methods, or a bag-of-words model. Task-Specific Output Heads: Task-specific output modules map representations from sentence encoders to outputs specific to a task, e.g. entailment/neutral/contradiction for NLI tasks, or tags for part-of-speech tagging. They also include logic for computing the corresponding loss for training (e.g. cross-entropy). Trainer: Trainers manage the control flow for the training and validation loop for experiments. They sample batches from one or more tasks, perform forward and backward passes, calculate training metrics, evaluate on a validation set, and save checkpoints. Users can specify experiment-specific parameters such as learning rate, batch size, and more. Config: Config files or flags are defined in HOCON format. Configs specify parameters for jiant experiments including choices of tasks, sentence encoder, and training routine. Configs are jiant's primary user interface. Tasks and modeling components are designed to be modular, while jiant's pipeline is a monolithic, configuration-driven design intended to facilitate a number of common workflows outlined in SECREF17. jiant System Overview ::: jiant Pipeline Overview jiant's core pipeline consists of the five stages described below and illustrated in Figure FIGREF16: A config or multiple configs defining an experiment are interpreted. Users can choose and configure models, tasks, and stages of training and evaluation. The tasks and sentence encoder are prepared: the task data is loaded, tokenized, and indexed, and (optionally) the preprocessed task objects are serialized and cached. In this process, AllenNLP is used to create the vocabulary and index the tokenized data. The sentence encoder is constructed and (optionally) pretrained weights are loaded. The task-specific output heads are created for each task, and task heads are attached to a common sentence encoder. Optionally, different tasks can share the same output head, as in BIBREF20. Optionally, in the intermediate phase the trainer samples batches randomly from one or more tasks and trains the shared model. Optionally, in the target training phase, a copy of the model is configured and trained or fine-tuned for each target task separately. Optionally, the model is evaluated on the validation and/or test sets of the target tasks. jiant System Overview ::: Task and Model resources in jiant jiant supports over 50 tasks. Task types include classification, regression, sequence generation, tagging, and span prediction. jiant focuses on NLU tasks like MNLI BIBREF24, CommonsenseQA BIBREF8, the Winograd Schema Challenge BIBREF25, and SQuAD BIBREF26. A full inventory of tasks and task variants is available in the jiant/tasks module. jiant provides support for cutting-edge sentence encoder models, including support for Huggingface's Transformers.
Supported models include: BERT BIBREF17, RoBERTa BIBREF27, XLNet BIBREF28, XLM BIBREF29, GPT BIBREF30, GPT-2 BIBREF31, ALBERT BIBREF32 and ELMo BIBREF33. jiant also supports the from-scratch training of (bidirectional) LSTMs BIBREF34 and deep bag of words models BIBREF35, as well as syntax-aware models such as PRPN BIBREF36 and ON-LSTM BIBREF37. jiant also supports word embeddings such as GloVe BIBREF38. jiant System Overview ::: User Interface jiant experiments can be run with a simple CLI: jiant provides default config files that allow running many experiments without modifying source code. jiant also provides baseline config files that can serve as a starting point for model development and evaluation against GLUE BIBREF2 and SuperGLUE BIBREF3 benchmarks. More advanced configurations can be developed by composing multiple configurations files and overrides. Figure FIGREF29 shows a config file that overrides a default config, defining an experiment that uses BERT as the sentence encoder. This config includes an example of a task-specific configuration, which can be overridden in another config file or via a command line override. jiant also implements the option to provide command line overrides with a flag. This option makes it easy to write scripts that launch jiant experiments over a range of parameters, for example while performing grid search across hyperparameters. jiant users have successfully run large-scale experiments launching hundreds of runs on both Kubernetes and Slurm. jiant System Overview ::: Example jiant Use Cases and Options Here we highlight some example use cases and key corresponding jiant config options required in these experiments: Fine-tune BERT on SWAG BIBREF39 and SQUAD BIBREF26, then fine-tune on HellaSwag BIBREF40: Train a probing classifier over a frozen BERT model, as in BIBREF9: Compare performance of GloVe BIBREF38 embeddings using a BiLSTM: Evaluate ALBERT BIBREF32 on the MNLI BIBREF24 task: jiant System Overview ::: jiant Deployment Environments jiant runs on consumer-grade hardware or in cluster environments with or without CUDA GPUs. The jiant repository also contains documentation and configuration files demonstrating how to deploy jiant in Kubernetes clusters on Google Kubernetes Engine. jiant System Overview ::: Logging and Metric Tracking jiant generates custom log files that capture experimental configurations, training and evaluation metrics, and relevant run-time information. jiant also generates TensorBoard event files BIBREF41 for training and evaluation metric tracking. TensorBoard event files can be visualized using the TensorBoard Scalars Dashboard. jiant System Overview ::: Optimizations and Other Features jiant implements features that improve run stability and efficiency: jiant implements checkpointing options designed to offer efficient early stopping and to show consistent behavior when restarting after an interruption. jiant caches preprocessed task data to speed up reuse across experiments which share common data resources and artifacts. jiant implements gradient accumulation and multi-GPU, which enables training on larger batches than can fit in memory for a single GPU. jiant supports outputting predictions in a format ready for GLUE and SuperGLUE benchmark submission. jiant System Overview ::: Extensibility jiant's design offers conveniences that reduce the need to modify code when making changes: jiant's task registry makes it easy to define a new version of an existing task using different data. 
Once the new task is defined in the task registry, the task is available as an option in jiant's config. jiant's sentence encoder and task output head abstractions allow for easy support of new sentence encoders. In use cases requiring the introduction of a new task, users can use class inheritance to build on a number of available parent task types including classification, tagging, span prediction, span classification, sequence generation, regression, ranking, and multiple choice task classes. For these task types, corresponding task-specific output heads are already implemented. More than 30 researchers and developers from more than 5 institutions have contributed code to the jiant project. jiant's maintainers welcome pull requests that introduce new tasks or sentence encoder components, and pull requests are actively reviewed. The jiant repository's continuous integration system requires that all pull requests pass unit and integration tests and meet Black code formatting requirements. jiant System Overview ::: Limitations While jiant is quite flexible in the pipelines that can be specified through configs, and some components are highly modular (e.g., tasks, sentence encoders, and output heads), modification of the pipeline code can be difficult. For example, training in more than two phases would require modifying the trainer code. Making multi-stage training configurations more flexible is on jiant's development roadmap. jiant System Overview ::: jiant Development Roadmap jiant is actively being developed. The jiant project has prioritized continuing to add support for new Transformer models and adding tasks that are commonly used for pretraining and evaluation in NLU, including sequence-to-sequence tasks. Additionally, there are plans to make jiant's training phase configuration options more flexible to allow training in more than two phases, and to continue to refactor jiant's code to keep jiant flexible enough to track developments in NLU research. Benchmark Experiments To benchmark jiant, we perform a set of experiments that reproduce external results for single-task fine-tuning and transfer learning experiments. jiant has been benchmarked extensively in both published and ongoing work on a majority of the implemented tasks. We benchmark single-task fine-tuning configurations using CommonsenseQA BIBREF8 and SocialIQA BIBREF7. On CommonsenseQA with $\mathrm {RoBERTa}_\mathrm {LARGE}$, jiant achieves an accuracy of 0.722, comparable to the 0.721 reported by BIBREF27. On SocialIQA with BERT-large, jiant achieves a dev set accuracy of 0.658, comparable to the 0.66 reported in BIBREF7. Next, we benchmark jiant's transfer learning regime. We perform transfer experiments from MNLI to BoolQ with BERT-large. In this configuration BIBREF42 demonstrated an accuracy improvement of 0.78 to 0.82 on the dev set, and jiant achieves an improvement of 0.78 to 0.80. Conclusion jiant provides a configuration-driven interface for defining transfer learning and representation learning experiments using a bank of over 50 NLU tasks, cutting-edge sentence encoder models, and multi-task and multi-stage training procedures. Further, jiant is shown to be able to replicate published performance on various NLU tasks. jiant's modular design of task and sentence encoder components makes it possible for users to quickly and easily experiment with a large number of tasks, models, and parameter configurations, without editing source code.
jiant's design also makes it easy to add new tasks, and jiant's architecture makes it convenient to extend jiant to support new sentence encoders. jiant code is open source, and jiant invites contributors to open issues or submit pull requests to the jiant project repository: https://github.com/nyu-mll/jiant. Acknowledgments Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Edouard Grave, Najoung Kim, Thibault Févry, Berlin Chen, Nikita Nangia, Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas Patry, David Benton, and Ellie Pavlick have contributed substantial engineering assistance to the project. The early development of jiant took place at the 2018 Frederick Jelinek Memorial Summer Workshop on Speech and Language Technologies, and was supported by Johns Hopkins University with unrestricted gifts from Amazon, Facebook, Google, Microsoft and Mitsubishi Electric Research Laboratories. Subsequent development was made possible in part by a donation to NYU from Eric and Wendy Schmidt made by recommendation of the Schmidt Futures program, by support from Intuit Inc., and by support from Samsung Research under the project Improving Deep Learning using Latent Structure. We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan V GPU used at NYU in this work. Alex Wang's work on the project is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1342536. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Yada Pruksachatkun's work on the project is supported in part by the Moore-Sloan Data Science Environment as part of the NYU Data Science Services initiative. Sam Bowman's work on jiant during Summer 2019 took place in his capacity as a visiting researcher at Google.
[ "Yes", "Unanswerable" ]
2,284
qasper_e
en
null
e5d1d589ddb30f43547012f04b06ac2924a1f4fdcf56daab
Are the experts comparable to real-world users?
Introduction Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents. With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. [1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy. Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain. Related Work Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. 
Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions. Our work is also related to reading comprehension in the open domain, which is frequently based upon Wikipedia passages BIBREF16, BIBREF17, BIBREF15, BIBREF30 and news articles BIBREF20, BIBREF31, BIBREF32. Table.TABREF4 presents the desirable attributes our dataset shares with past approaches. This work is also tied into research in applying NLP approaches to legal documents BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39. While privacy policies have legal implications, their intended audience consists of the general public rather than individuals with legal expertise. This arrangement is problematic because the entities that write privacy policies often have different goals than the audience. feng2015applying, tan-EtAl:2016:P16-1 examine question answering in the insurance domain, another specialized domain similar to privacy, where the intended audience is the general public. Data Collection We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users). Data Collection ::: Crowdsourced Question Elicitation The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document. Instead, crowdworkers are presented with public information about a mobile application available on the Google Play Store including its name, description and navigable screenshots. Figure FIGREF9 shows an example of our user interface. Crowdworkers are asked to imagine they have access to a trusted third-party privacy assistant, to whom they can ask any privacy question about a given mobile application. We use the Amazon Mechanical Turk platform and recruit crowdworkers who have been conferred “master” status and are located within the United States of America. 
Turkers are asked to provide five questions per mobile application, and are paid $2 per assignment, which takes ~eight minutes to complete. Data Collection ::: Answer Selection To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked. Data Collection ::: Analysis Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in the training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts. Table TABREF14 describes the distribution over the first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers posing questions independently. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain from answering are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the questions asked by users typically have ~100 words of evidence in the privacy policy document. Data Collection ::: Analysis ::: Categories of Questions Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49: First Party Collection/Use: What, why and how information is collected by the service provider Third Party Sharing/Collection: What, why and how information is shared with or collected by third parties Data Security: Protection measures for user information Data Retention: How long user information will be stored User Choice/Control: Control options available to users User Access, Edit and Deletion: If/how users can access, edit or delete information Policy Change: Informing users if policy information has been changed International and Specific Audiences: Practices pertaining to a specific group of users Other: General text, contact information or practices not covered by other categories. For each question, domain experts indicate one or more relevant OPP-115 categories. We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if at least one annotator has identified the `Other' category as relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is shown in Table.TABREF16.
First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant. Data Collection ::: Analysis ::: Answer Validation When do experts disagree? We would like to analyze the reasons for potential disagreement on the annotation task, to ensure disagreements arise due to valid differences in opinion rather than lack of adequate specification in annotation guidelines. It is important to note that the annotators are experts rather than crowdworkers. Accordingly, their judgements can be considered valid, legally-informed opinions even when their perspectives differ. To answer this question, we randomly sample 100 instances from the test data and analyze them for likely reasons for disagreements. We consider a disagreement to have occurred when more than one expert does not agree with the majority consensus. By disagreement we mean there is no overlap between the text identified as relevant by one expert and another. We find that the annotators agree on the answer for 74% of the questions, even if the supporting evidence they identify is not identical (i.e., does not fully overlap). They disagree on the remaining 26%. Sources of apparent disagreement correspond to situations when different experts: have differing interpretations of question intent (11%) (for example, when a user asks 'who can contact me through the app', the question admits multiple interpretations, including seeking information about the features of the app, asking about first party collection/use of data or asking about third party collection/use of data), identify different sources of evidence for questions that ask if a practice is performed or not (4%), have differing interpretations of policy content (3%), identify a partial answer to a question in the privacy policy (2%) (for example, when the user asks `who is allowed to use the app' a majority of our annotators decline to answer, but the remaining annotators highlight partial evidence in the privacy policy which states that children under the age of 13 are not allowed to use the app), and other legitimate sources of disagreement (6%) which include personal subjective views of the annotators (for example, when the user asks `is my DNA information used in any way other than what is specified', some experts consider the boilerplate text of the privacy policy which states that it abides by practices described in the policy document as sufficient evidence to answer this question, whereas others do not). Experimental Setup We evaluate the ability of machine learning methods to identify relevant evidence for questions in the privacy domain. We establish baselines for the subtask of deciding on the answerability (§SECREF33) of a question, as well as the overall task of identifying evidence for questions from policies (§SECREF37). We describe aspects of the question that can render it unanswerable within the privacy domain (§SECREF41). Experimental Setup ::: Answerability Identification Baselines We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline. SVM: We define 3 sets of features to characterize each question. 
The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions. BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128. Experimental Setup ::: Privacy Question Answering Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy. Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference. Experimental Setup ::: Privacy Question Answering ::: Baselines We describe baselines on this task, including a human performance baseline. No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable). Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline. Results and Discussion The results of the answerability baselines are presented in Table TABREF31, and on answer sentence selection in Table TABREF32. We observe that bert exhibits the best performance on a binary answerability identification task. 
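As a brief aside on the evaluation protocol above, the multi-reference sentence-level F1 is easy to misread, so the sketch below spells out one way to compute it under our reading of the description: each reference is held out in turn, scored by the maximum F1 against the remaining references, and the scores are averaged. The function names and the handling of empty evidence sets are our assumptions, not the paper's released evaluation code.

```python
def sentence_f1(predicted: set, gold: set) -> float:
    """Sentence-level F1 between two sets of evidence-sentence indices.
    Treating two empty sets as a perfect match is our assumption."""
    if not predicted and not gold:
        return 1.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def leave_one_out_f1(references: list) -> float:
    """Human-performance estimate: each annotator's evidence set is treated as
    the prediction and scored by the maximum F1 over the remaining annotators'
    sets; the scores are then averaged (assumes at least two references)."""
    scores = []
    for i, pred in enumerate(references):
        others = references[:i] + references[i + 1:]
        scores.append(max(sentence_f1(pred, ref) for ref in others))
    return sum(scores) / len(scores)

# Toy example: three annotators' evidence-sentence indices for one question.
print(round(leave_one_out_f1([{3, 4}, {4}, {4, 7}]), 3))
```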
However, most baselines considerably exceed the performance of a majority-class baseline. This suggests that the question alone carries considerable information indicating its possible answerability within this domain. Table.TABREF32 describes the performance of our baselines on the answer sentence selection task. The No-answer (NA) baseline performs at 28 F1, providing a lower bound on performance at this task. We observe that our best-performing baseline, Bert + Unanswerable, achieves an F1 of 39.8. This suggests that bert is capable of making some progress towards answering questions in this difficult domain, while still leaving considerable headroom for improvement to reach human performance. Bert + Unanswerable performance suggests that incorporating information about answerability can help in this difficult domain. We examine this challenging phenomenon of unanswerability further below. Results and Discussion ::: Error Analysis Disagreements are analyzed based on the OPP-115 categories of each question (Table.TABREF34). We compare our best performing BERT variant against the NA model and human performance. We observe significant room for improvement across all categories of questions but especially for first party, third party and data retention categories. We analyze the performance of our strongest BERT variant, to identify classes of errors and directions for future improvement (Table.8). We observe that a majority of answerability mistakes made by the BERT model are questions which are in fact answerable, but are identified as unanswerable by BERT. We observe that BERT makes 124 such mistakes on the test set. We collect expert judgments on relevance, subjectivity, silence, and how likely the question is to be answered from the privacy policy. We find that most of these mistakes are relevant questions. However, many of them were identified as subjective by the annotators, and at least one annotator marked 19 of these questions as having no answer within the privacy policy. However, only 6 of these questions were unexpected or do not usually have an answer in privacy policies. These findings suggest that a more nuanced understanding of answerability might help improve model performance in this challenging domain. Results and Discussion ::: What makes Questions Unanswerable? We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability: Incomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible. Relevance: Is this question in the scope of what could be answered by reading the privacy policy. Ill-formedness: Is this question ambiguous or vague. An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition. Silence: Other policies answer this type of question but this one does not. Atypicality: The question is of a nature such that it is unlikely for any privacy policy to have an answer to the question. Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 
4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehensible questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., `has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `will the app consume much space?'). Among the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% were not answered by the policy in question, though it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving users' information-seeking intent. We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions. Conclusion We present PrivacyQA, the first significant corpus of privacy policy questions and more than 3500 expert annotations of relevant answers. The goal of this work is to promote question-answering research in the specialized privacy domain, where it can have large real-world impact. Strong neural baselines on PrivacyQA achieve a performance of only 39.8 F1 on this corpus, indicating considerable room for future research. Further, we shed light on several important considerations that affect the answerability of questions. 
We hope this contribution leads to multidisciplinary efforts to precisely understand user intent and reconcile it with information in policy documents, from both the privacy and NLP communities. Acknowledgements This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study.
[ "No" ]
3,843
qasper_e
en
null
81fafbdc90dfc4c0ea443de35ee84e7423d8971a66f27a09
Does this method help in sentiment classification task improvement?
Introduction The NLP community is revisiting the role of linguistic structure in applications with the advent of contextual word representations (cwrs) derived from pretraining language models on large corpora BIBREF2, BIBREF3, BIBREF4, BIBREF5. Recent work has shown that downstream task performance may benefit from explicitly injecting a syntactic inductive bias into model architectures BIBREF6, even when cwrs are also used BIBREF7. However, high quality linguistic structure annotation at a large scale remains expensive—a trade-off needs to be made between the quality of the annotations and the computational expense of obtaining them. Shallow syntactic structures (BIBREF8; also called chunk sequences) offer a viable middle ground, by providing a flat, non-hierarchical approximation to phrase-syntactic trees (see Fig. FIGREF1 for an example). These structures can be obtained efficiently, and with high accuracy, using sequence labelers. In this paper we consider shallow syntax to be a proxy for linguistic structure. While shallow syntactic chunks are almost as ubiquitous as part-of-speech tags in standard NLP pipelines BIBREF9, their relative merits in the presence of cwrs remain unclear. We investigate the role of these structures using two methods. First, we enhance the ELMo architecture BIBREF0 to allow pretraining on predicted shallow syntactic parses, instead of just raw text, so that contextual embeddings make use of shallow syntactic context (§SECREF2). Our second method involves classical addition of chunk features to cwr-infused architectures for four different downstream tasks (§SECREF3). Shallow syntactic information is obtained automatically using a highly accurate model (97% $F_1$ on standard benchmarks). In both settings, we observe only modest gains on three of the four downstream tasks relative to ELMo-only baselines (§SECREF4). Recent work has probed the knowledge encoded in cwrs and found they capture a surprisingly large amount of syntax BIBREF10, BIBREF1, BIBREF11. We further examine the contextual embeddings obtained from the enhanced architecture and a shallow syntactic context, using black-box probes from BIBREF1. Our analysis indicates that our shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo embeddings (§SECREF18). Overall, our findings show that while shallow syntax can be somewhat useful, ELMo-style pretraining discovers representations which make additional awareness of shallow syntax largely redundant. Pretraining with Shallow Syntactic Annotations We briefly review the shallow syntactic structures used in this work, and then present a model architecture to obtain embeddings from shallow Syntactic Context (mSynC). Pretraining with Shallow Syntactic Annotations ::: Shallow Syntax Base phrase chunking is a cheap sequence-labeling–based alternative to full syntactic parsing, where the sequence consists of non-overlapping labeled segments (Fig. FIGREF1 includes an example.) Full syntactic trees can be converted into such shallow syntactic chunk sequences using a deterministic procedure BIBREF9. BIBREF12 offered a rule-based transformation deriving non-overlapping chunks from phrase-structure trees as found in the Penn Treebank BIBREF13. The procedure percolates some syntactic phrase nodes from a phrase-syntactic tree to the phrase in the leaves of the tree. 
All overlapping embedded phrases are then removed, and the remainder of the phrase gets the percolated label—this usually corresponds to the head word of the phrase. In order to obtain shallow syntactic annotations on a large corpus, we train a BiLSTM-CRF model BIBREF14, BIBREF15, which achieves 97% $F_1$ on the CoNLL 2000 benchmark test set. The training data is obtained from the CoNLL 2000 shared task BIBREF12, as well as the remaining sections (except §23 and §20) of the Penn Treebank, using the official script for chunk generation. The standard task definition from the shared task includes eleven chunk labels, as shown in Table TABREF4. Pretraining with Shallow Syntactic Annotations ::: Pretraining Objective Traditional language models are estimated to maximize the likelihood of each word $x_i$ given the words that precede it, $p(x_i \mid x_{<i})$. Given a corpus that is annotated with shallow syntax, we propose to condition on both the preceding words and their annotations. We associate with each word $x_i$ three additional variables (denoted $c_i$): the indices of the beginning and end of the last completed chunk before $x_i$, and its label. For example, in Fig. FIGREF8, $c_4=\langle 3, 3, \text{VP}\rangle $ for $x_4=\text{the}$. Chunks, $c$, are only used as conditioning context via $p(x_i \mid x_{<i}, c_{\leqslant i})$; they are not predicted. Because the $c$ labels depend on the entire sentence through the CRF chunker, conditioning each word's probability on any $c_i$ means that our model is, strictly speaking, not a language model, and it can no longer be meaningfully evaluated using perplexity. A right-to-left model is constructed analogously, conditioning on $c_{\geqslant i}$ alongside $x_{>i}$. Following BIBREF2, we use a joint objective maximizing data likelihood objectives in both directions, with shared softmax parameters. Pretraining with Shallow Syntactic Annotations ::: Pretraining Model Architecture Our model uses two encoders: $e_{\mathit {seq}}$ for encoding the sequential history ($x_{<i}$), and $e_{\mathit {syn}}$ for shallow syntactic (chunk) history ($c_{\leqslant i}$). For both, we use transformers BIBREF16, which consist of large feedforward networks equipped with multiheaded self-attention mechanisms. As inputs to $e_{\mathit {seq}}$, we use a context-independent embedding, obtained from a CNN character encoder BIBREF17 for each token $x_i$. The outputs $h_i$ from $e_{\mathit {seq}}$ represent words in context. Next, we build representations for (observed) chunks in the sentence by concatenating a learned embedding for the chunk label with $h$s for the boundaries and applying a linear projection ($f_\mathit {proj}$). The output from $f_\mathit {proj}$ is input to $e_{\mathit {syn}}$, the shallow syntactic encoder, and results in contextualized chunk representations, $g$. Note that the number of chunks in the sentence is less than or equal to the number of tokens. Each $h_i$ is now concatenated with $g_{c_i}$, where $g_{c_i}$ corresponds to $c_i$, the last chunk before position $i$. Finally, the output is given by $\mbox{\textbf {mSynC}}_i = {u}_\mathit {proj}(h_i, g_{c_i}) = W^\top [h_i; g_{c_i}]$, where $W$ is a model parameter. For training, $\mbox{\textbf {mSynC}}_i$ is used to compute the probability of the next word, using a sampled softmax BIBREF18. For downstream tasks, we use a learned linear weighting of all layers in the encoders to obtain a task-specific mSynC, following BIBREF2. 
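The chunk-conditioning machinery above is compact enough to sketch. The PyTorch snippet below shows one plausible reading of $f_\mathit{proj}$ and $u_\mathit{proj}$ with illustrative dimensions (512-dimensional word states, 128-dimensional chunk-label embeddings); the transformer encoders $e_{\mathit{seq}}$ and $e_{\mathit{syn}}$ are assumed to be defined elsewhere, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChunkConditioning(nn.Module):
    """Sketch of the projections around e_seq and e_syn (both assumed given)."""
    def __init__(self, d_word: int = 512, d_label: int = 128, n_labels: int = 11):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, d_label)
        # f_proj: [h_start; h_end; label embedding] -> input to e_syn
        self.f_proj = nn.Linear(2 * d_word + d_label, d_word)
        # u_proj: mSynC_i = W^T [h_i; g_{c_i}]
        self.u_proj = nn.Linear(2 * d_word, d_word, bias=False)

    def chunk_inputs(self, h, starts, ends, labels):
        # h: (seq_len, d_word); starts, ends, labels: (n_chunks,) index tensors.
        feats = torch.cat([h[starts], h[ends], self.label_emb(labels)], dim=-1)
        return self.f_proj(feats)  # fed to the syntactic encoder e_syn

    def msync(self, h, g, last_chunk):
        # g: (n_chunks, d_word) contextualized chunk states from e_syn;
        # last_chunk[i] indexes the last completed chunk before token i.
        return self.u_proj(torch.cat([h, g[last_chunk]], dim=-1))
```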
Pretraining with Shallow Syntactic Annotations ::: Pretraining Model Architecture ::: Staged parameter updates Jointly training both the sequential encoder $e_{\mathit {seq}}$, and the syntactic encoder $e_{\mathit {syn}}$ can be expensive, due to the large number of parameters involved. To reduce cost, we initialize our sequential cwrs $h$, using pretrained embeddings from ELMo-transformer. Once initialized as such, the encoder is fine-tuned to the data likelihood objective (§SECREF5). This results in a staged parameter update, which reduces training duration by a factor of 10 in our experiments. We discuss the empirical effect of this approach in §SECREF20. Shallow Syntactic Features Our second approach incorporates shallow syntactic information in downstream tasks via token-level chunk label embeddings. Task training (and test) data is automatically chunked, and chunk boundary information is passed into the task model via BIOUL encoding of the labels. We add randomly initialized chunk label embeddings to task-specific input encoders, which are then fine-tuned for task-specific objectives. This approach does not require a shallow syntactic encoder or chunk annotations for pretraining cwrs, only a chunker. Hence, this can more directly measure the impact of shallow syntax for a given task. Experiments Our experiments evaluate the effect of shallow syntax, via contextualization (mSynC, §SECREF2) and features (§SECREF3). We provide comparisons with four baselines—ELMo-transformer BIBREF0, our reimplementation of the same, as well as two cwr-free baselines, with and without shallow syntactic features. Both ELMo-transformer and mSynC are trained on the 1B word benchmark corpus BIBREF19; the latter also employs chunk annotations (§SECREF2). Experimental settings are detailed in Appendix §SECREF22. Experiments ::: Downstream Task Transfer We employ four tasks to test the impact of shallow syntax. The first three, namely, coarse and fine-grained named entity recognition (NER), and constituency parsing, are span-based; the fourth is a sentence-level sentiment classification task. Following BIBREF2, we do not apply finetuning to task-specific architectures, allowing us to do a controlled comparison with ELMo. Given an identical base architecture across models for each task, we can attribute any difference in performance to the incorporation of shallow syntax or contextualization. Details of downstream architectures are provided below, and overall dataset statistics for all tasks are shown in the Appendix, Table TABREF26. Experiments ::: Downstream Task Transfer ::: NER We use the English portion of the CoNLL 2003 dataset BIBREF20, which provides named entity annotations on newswire data across four different entity types (PER, LOC, ORG, MISC). A bidirectional LSTM-CRF architecture BIBREF14 and a BIOUL tagging scheme were used. Experiments ::: Downstream Task Transfer ::: Fine-grained NER The same architecture and tagging scheme from above is also used to predict fine-grained entity annotations from OntoNotes 5.0 BIBREF21. There are 18 fine-grained NER labels in the dataset, including regular named entities as well as entities such as date, time and common numerical entries. Experiments ::: Downstream Task Transfer ::: Phrase-structure parsing We use the standard Penn Treebank splits, and adopt the span-based model from BIBREF22. Following their approach, we used predicted part-of-speech tags from the Stanford tagger BIBREF23 for training and testing. 
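As a brief aside, the BIOUL chunk encoding mentioned above for passing chunk boundaries into the task models amounts to a simple span-to-tag conversion. A minimal illustration, assuming inclusive span indices and a helper name of our own choosing:

```python
def chunks_to_bioul(n_tokens: int, chunks) -> list:
    """Convert non-overlapping chunks [(start, end, label), ...] with inclusive
    end indices into per-token BIOUL tags; tokens outside any chunk get 'O'."""
    tags = ["O"] * n_tokens
    for start, end, label in chunks:
        if start == end:
            tags[start] = f"U-{label}"
        else:
            tags[start] = f"B-{label}"
            for i in range(start + 1, end):
                tags[i] = f"I-{label}"
            tags[end] = f"L-{label}"
    return tags

# "the quick fox jumped" with chunks NP(0, 2) and VP(3, 3):
print(chunks_to_bioul(4, [(0, 2, "NP"), (3, 3, "VP")]))
# ['B-NP', 'I-NP', 'L-NP', 'U-VP']
```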
About 51% of phrase-syntactic constituents align exactly with the predicted chunks used, with a majority being single-width noun phrases. Given that the rule-based procedure used to obtain chunks only propagates the phrase type to the head-word and removes all overlapping phrases to the right, this is expected. We did not employ jack-knifing to obtain predicted chunks on PTB data; as a result there might be differences in the quality of shallow syntax annotations between the train and test portions of the data. Experiments ::: Downstream Task Transfer ::: Sentiment analysis We consider fine-grained (5-class) classification on Stanford Sentiment Treebank BIBREF24. The labels are negative, somewhat_negative, neutral, positive and somewhat_positive. Our model was based on the biattentive classification network BIBREF25. We used all phrase lengths in the dataset for training, but test results are reported only on full sentences, following prior work. Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs. Experiments ::: Linguistic Probes We further analyze whether awareness of shallow syntax carries over to other linguistic tasks, via probes from BIBREF1. Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases. Unlike §SECREF11, there is minimal downstream task architecture, bringing into focus the transferability of cwrs, as opposed to task-specific adaptation. Experiments ::: Linguistic Probes ::: Probing Tasks The ten different probing tasks we used include CCG supertagging BIBREF26, part-of-speech tagging from PTB BIBREF13 and EWT (Universal Dependencies BIBREF27), named entity recognition BIBREF20, base-phrase chunking BIBREF12, grammar error detection BIBREF28, semantic tagging BIBREF29, preposition supersense identification BIBREF30, and event factuality detection BIBREF31. Metrics and references for each are summarized in Table TABREF27. For more details, please see BIBREF1. Results in Table TABREF13 show ten probes. Again, we see that the performance of the baseline ELMo-transformer and mSynC is similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer. Experiments ::: Effect of Training Scheme We test whether our staged parameter training (§SECREF9) is a viable alternative to an end-to-end training of both $e_{\mathit {syn}}$ and $e_{\mathit {seq}}$. We make a further distinction between fine-tuning $e_{\mathit {seq}}$ vs. not updating it at all after initialization (frozen). 
Downstream validation-set $F_1$ on fine-grained NER, reported in Table TABREF21, shows that the end-to-end strategy lags behind the others, perhaps indicating the need to train longer than 10 epochs. However, a single epoch on the 1B-word benchmark takes 36 hours on 2 Tesla V100s, making this prohibitive. Interestingly, the frozen strategy, which takes the least amount of time to converge (24 hours on 1 Tesla V100), also performs almost as well as fine-tuning. Conclusion We find that exposing cwr-based models to shallow syntax, either through new cwr learning architectures or explicit pipelined features, has little effect on their performance, across several tasks. Linguistic probing also shows that cwrs aware of such structures do not improve task transferability. Our architecture and methods are general enough to be adapted for richer inductive biases, such as those given by full syntactic trees (RNNGs; BIBREF32), or to different pretraining objectives, such as masked language modeling (BERT; BIBREF5); we leave this pursuit to future work. Supplemental Material ::: Hyperparameters ::: ELMo-transformer Our baseline pretraining model was a reimplementation of that given in BIBREF0. Hyperparameters were generally identical, but we trained on only 2 GPUs with (up to) 4,000 tokens per batch. This difference in batch size meant we used 6,000 warm up steps with the learning rate schedule of BIBREF16. Supplemental Material ::: Hyperparameters ::: mSynC The function $f_{seq}$ is identical to the 6-layer biLM used in ELMo-transformer. $f_{syn}$, on the other hand, uses only 2 layers. The learned embeddings for the chunk labels have 128 dimensions and are concatenated with the two boundary $h$ of dimension 512. Thus $f_{proj}$ maps $1024 + 128$ dimensions to 512. Further, we did not perform weight averaging over several checkpoints. Supplemental Material ::: Hyperparameters ::: Shallow Syntax The size of the shallow syntactic feature embedding was 50 across all experiments, initialized uniform randomly. All model implementations are based on the AllenNLP library BIBREF33.
[ "Yes", "No" ]
2,317
qasper_e
en
null
bcfe56efad9715cc714ffd2e523eaa9ad796a453e7da77a6
what datasets were used in evaluation?
Introduction With the steady growth in the commercial websites and social media venues, the access to users' reviews have become easier. As the amount of data that can be mined for opinion increased, commercial companies' interests for sentiment analysis increased as well. Sentiment analysis is an important part of understanding user behavior and opinions on products, places, or services. Sentiment analysis has long been studied by the research community, leading to several sentiment-related resources such as sentiment dictionaries that can be used as features for machine learning models BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . These resources help increase sentiment analysis accuracies; however, they are highly dependent on language and require researchers to build such resources for every language to process. Feature engineering is a large part of the model building phase for most sentiment analysis and emotion detection models BIBREF4 . Determining the correct set of features is a task that requires thorough investigation. Furthermore, these features are mostly language and dataset dependent making it even further challenging to build models for different languages. For example, the sentiment and emotion lexicons, as well as pre-trained word embeddings are not completely transferable to other languages which replicates the efforts for every language that users would like to build sentiment classification models on. For languages and tasks where the data is limited, extracting these features, building language models, training word embeddings, and creating lexicons are big challenges. In addition to the feature engineering effort, the machine learning models' parameters also need to be tuned separately for each language to get the optimal results. In this paper, we take a different approach. We build a reusable sentiment analysis model that does not utilize any lexicons. Our goal is to evaluate how well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on. To that end, we build a training set that contains reviews from different domains in English (e.g., movie reviews, product reviews) and train a recurrent neural network (RNN) model to predict polarity of those reviews. Then focusing on a domain, we make the model specialized in that domain by using the trained weights from the larger data and further training with data on a specific domain. To evaluate the reusability of the sentiment analysis model, we test with non-English datasets. We first translate the test set to English and use the pre-trained model to score polarity in the translated text. In this way, our proposed approach eliminates the need to train language-dependent models, use of sentiment lexicons and word embeddings for each language. Our experiments show that a generalizable sentiment analysis model can be utilized successfully to perform opinion mining for languages that do not have enough resources to train specific models. The contributions of this study are; 1) a robust approach that utilizes machine translation to reuse a model trained on one language in other languages, 2) an RNN-based approach to eliminate feature extraction as well as resource requirements for sentiment analysis, and 3) a technique that statistically significantly outperforms baselines for multilingual sentiment analysis task when data is limited. 
To the best of our knowledge, this study is the first to apply a deep learning model to the multilingual sentiment analysis task. Related Work There is a rich body of work in sentiment analysis including social media platforms such as Twitter BIBREF5 and Facebook BIBREF4 . One common factor in most of the sentiment analysis work is that features that are specific to sentiment analysis are extracted (e.g., sentiment lexicons) and used in different machine learning models. Lexical resources BIBREF0 , BIBREF1 , BIBREF4 for sentiment analysis such as SentiWordNet BIBREF6 , BIBREF7 , linguistic features and expressions BIBREF8 , polarity dictionaries BIBREF2 , BIBREF3 , other features such as topic-oriented features and syntax BIBREF9 , emotion tokens BIBREF10 , word vectors BIBREF11 , and emographics BIBREF12 are some of the information that are found useful for improving sentiment analysis accuracies. Although these features are beneficial, extracting them requires language-dependent data (e.g., a sentiment dictionary for Spanish is trained on Spanish data instead of using all data from different languages). Our goal in this work is to streamline the feature engineering phase by not relying on any dictionary other than English word embeddings that are trained on any data (i.e. not necessarily sentiment analysis corpus). To that end, we utilize off-the-shelf machine translation tools to first translate corpora to the language where more training data is available and use the translated corpora to do inference on. Machine translation for multilingual sentiment analysis has also seen attention from researchers. Hiroshi et al. BIBREF13 translated only sentiment units with a pattern-based approach. Balahur and Turchi BIBREF14 used uni-grams, bi-grams and tf-idf features for building support vector machines on translated text. Boyd-Graber and Resnik BIBREF15 built Latent Dirichlet Allocation models to investigate how multilingual concepts are clustered into topics. Mohammed et al. BIBREF16 translate Twitter posts to English as well as the English sentiment lexicons. Tellez et al. BIBREF17 propose a framework where language-dependent and independent features are used with an SVM classifier. These machine learning approaches also require a feature extraction phase where we eliminate by incorporating a deep learning approach that does the feature learning intrinsically. Further, Wan BIBREF18 uses an ensemble approach where the resources (e.g., lexicons) in both the original language and the translated language are used – requiring resources to be present in both languages. Brooke et al. BIBREF19 also use multiple dictionaries. In this paper, we address the resource bottleneck of these translation-based approaches and propose a deep learning approach that does not require any dictionaries. Methodology In order to eliminate the need to find data and build separate models for each language, we propose a multilingual approach where a single model is built in the language where the largest resources are available. In this paper we focus on English as there are several sentiment analysis datasets in English. To make the English sentiment analysis model as generalizable as possible, we first start by training with a large dataset that has product reviews for different categories. Then, using the trained weights from the larger generic dataset, we make the model more specialized for a specific domain. 
We further train the model with domain-specific English reviews and use this trained model to score reviews that share the same domain from different languages. To be able to employ the trained model, test sets are first translated to English via machine translation and then inference takes place. Figure FIGREF1 shows our multilingual sentiment analysis approach. It is important to note that this approach does not utilize any resource in any of the languages of the test sets (e.g., word embeddings, lexicons, training set). Deep learning approaches have been successful in many applications ranging from computer vision to natural language processing BIBREF20. Recurrent neural networks (RNNs), including Long Short Term Memory (LSTM) and Gated Recurrent Units (GRU), are subsets of deep learning algorithms where the dependencies between tokens can be used by the model. These models can also be used with variable length input vectors which makes them suitable for text input. LSTM and GRU models allow operations of sequences of vectors over time and have the capability to `remember' previous information BIBREF20. RNNs have been found useful for several natural language processing tasks including language modeling, text classification, and machine translation. RNNs can also utilize pre-trained word embeddings (numeric vector representations of words trained on unlabeled data) without requiring hand-crafted features. Therefore in this paper, we employ an RNN architecture that takes text and pre-trained word embeddings as inputs and generates a classification result. Word embeddings represent words as numeric vectors and capture semantic information. They are trained in an unsupervised fashion, making them useful for our task. The sentiment analysis model that is trained on English reviews has two bidirectional layers, each with 40 neurons, and a dropout BIBREF21 of 0.2 is used. The training phase takes pre-trained word embeddings and reviews in textual format, then predicts the polarity of the reviews. For this study, an embedding length of 100 is used (i.e., each word is represented by a vector of length 100). We utilized pre-trained global vectors BIBREF22. The training phase is depicted in Figure FIGREF2. Experiments To evaluate the proposed approach for the multilingual sentiment analysis task, we conducted experiments. This section first presents the corpora used in this study followed by experimental results. Throughout our experiments, we use the SAS Deep Learning Toolkit. For machine translation, the Google translation API is used. Corpora Two sets of corpora are used in this study, both of which are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian). We focus on polarity detection in reviews; therefore, all datasets in this study have two class values (positive, negative). With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5. One of these three datasets (Amazon reviews BIBREF23, BIBREF24) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are used to make the model more specialized in the domain. 
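As an aside before the individual corpora are detailed, the model just described is small enough to sketch. The paper trains it with the SAS Deep Learning Toolkit, so the PyTorch snippet below is only a rough approximation; the class name, the use of the final hidden states, and the output layer are our assumptions.

```python
import torch
import torch.nn as nn

class PolarityRNN(nn.Module):
    """Approximation of the described model: 100-dimensional pretrained
    embeddings, two bidirectional recurrent layers of 40 units with
    dropout 0.2, and a binary positive/negative output."""
    def __init__(self, pretrained_embeddings: torch.Tensor):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=False)
        self.rnn = nn.LSTM(input_size=100, hidden_size=40, num_layers=2,
                           bidirectional=True, dropout=0.2, batch_first=True)
        self.out = nn.Linear(2 * 40, 2)  # forward + backward states -> 2 classes

    def forward(self, token_ids):
        x = self.emb(token_ids)                      # (batch, seq_len, 100)
        _, (h_n, _) = self.rnn(x)                    # h_n: (layers * 2, batch, 40)
        top = torch.cat([h_n[-2], h_n[-1]], dim=-1)  # top layer, both directions
        return self.out(top)                         # logits over {negative, positive}
```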
In this paper we focus on restaurant reviews as our domain and use the Yelp restaurant reviews dataset extracted from the Yelp Dataset Challenge BIBREF25 and a restaurant reviews dataset from a Kaggle competition BIBREF26. For evaluation of the multilingual approach, we use four languages. These datasets are part of SemEval-2016 Challenge Task 5 BIBREF27, BIBREF28. Table TABREF7 shows the number of observations in each test corpus. Experimental Results For experimental results, we report the majority baseline for each language, where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, the majority baseline would be 60% because a model that always predicts “positive” will be 60% accurate and will make mistakes 40% of the time. In addition to the majority baseline, we also compare our results with a lexicon-based approach. We use SentiWordNet BIBREF29 to obtain a positive and a negative sentiment score for each token in a review. Then the positive and negative sums for each review are obtained by summing up the respective scores for each token. If the positive sum score for a given review is greater than the negative sum score, we accept that review as a positive review. If the negative sum is larger than or equal to the positive sum, the review is labeled as a negative review. RNN outperforms both baselines in all four datasets (see Table TABREF9). Also, for Spanish restaurant reviews, the lexicon-based baseline is below the majority baseline, which shows that solely translating data and using lexicons is not sufficient to achieve good results in multilingual sentiment analysis. Among the wrong classifications for each test set, we calculated the percentage of false positives and false negatives. Table TABREF10 shows the distribution of false positives and false negatives for each class. In all four classes, the number of false negatives is larger than the number of false positives. This can be explained by the unbalanced training dataset, where the number of positive reviews is larger than the number of negative reviews (59,577 vs 17,132). To be able to see the difference between baseline and RNN, we took each method's results as a group (4 values: one for each language) and compared the means. Post hoc comparisons using the Tukey HSD test indicated that the mean accuracies for baselines (majority and lexicon-based) are significantly different from the RNN accuracies as can be seen in Table TABREF12 (family-wise error rate=0.06). When RNN is compared with the lexicon-based baseline and the majority baseline, the null hypothesis can be rejected, meaning that each test is significant. In addition to these comparisons, we also calculated the effect sizes (using Cohen's d) between the baselines and our method. The results align with the Tukey HSD results: our method versus either baseline shows very large effect sizes, while the lexicon-based baseline and the majority baseline have a negligible effect size between them. Figure FIGREF11 shows the differences in minimum and maximum values of all three approaches. As the figure shows, RNN significantly outperforms both baselines for the sentiment classification task. Discussion One of the crucial elements while using machine translation is to have highly accurate translations. It is likely that non-English words would not have word embeddings, which will dramatically affect the effectiveness of the system. 
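As an aside, the lexicon-based baseline described earlier in this section reduces to a few lines of code. In the sketch below the SentiWordNet lookup is replaced by a toy dictionary with made-up scores, since how per-token scores are derived from SentiWordNet synsets is a design choice the paper does not spell out.

```python
def lexicon_polarity(tokens, lexicon):
    """Lexicon-based baseline: sum per-token positive and negative scores and
    compare the totals; ties go to 'negative', as in the description above.
    `lexicon` maps a token to a (pos_score, neg_score) pair, e.g. derived
    from SentiWordNet; unknown tokens contribute nothing."""
    pos = sum(lexicon.get(t, (0.0, 0.0))[0] for t in tokens)
    neg = sum(lexicon.get(t, (0.0, 0.0))[1] for t in tokens)
    return "positive" if pos > neg else "negative"

# Toy lexicon with made-up scores, only for illustration.
toy_lexicon = {"great": (0.75, 0.0), "bland": (0.0, 0.5), "not": (0.0, 0.25)}
print(lexicon_polarity("the soup was great".split(), toy_lexicon))  # positive
print(lexicon_polarity("the soup was bland".split(), toy_lexicon))  # negative
```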
We analyzed the effect of incorrect translations on our approach. To that end, we extracted all wrong predictions from the test set and computed the ratio of misclassifications that have non-English words in them. We first extracted all misclassifications for a given language and for each observation in the misclassification set, we iterated through each token to check if the token is in English. In this way, we counted the number of observations that contained at least one non-English word and divided that by the size of the misclassification set. We used this ratio to investigate the effect of machine translation errors. We found that 25.84% of Dutch, 21.76% of Turkish, 24.46% of Spanish, and 10.71% of Russian reviews that were misclassified had non-English words in them. These non-English words might be causing the misclassifications. However, a large portion of the misclassifications is not caused by untranslated words. In the end, machine translation errors have some, but not noticeable, effect on our model. Therefore, we can claim that machine translation preserves most of the information necessary for sentiment analysis. We also evaluated our model with an English corpus BIBREF27 to see its performance without any interference from machine translation errors. Using the English data for testing, the model achieved 87.06% accuracy, where the majority baseline was 68.37% and the lexicon-based baseline was 60.10%. Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on average, 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust enough to handle multiple languages. Building separate models for each language requires both labeled and unlabeled data. Even though having lots of labeled data in every language would be the ideal case, it is unrealistic. Therefore, eliminating the resource requirement in this resource-constrained task is crucial. The fact that machine translation can be used to reuse models across languages is promising for reducing the data requirements. Conclusion Building effective machine learning models for text requires data and different resources such as pre-trained word embeddings and reusable lexicons. Unfortunately, most of these resources are not entirely transferable to different domains, tasks or languages. Sentiment analysis is one such task that requires additional effort to transfer knowledge between languages. In this paper, we studied the research question: Can we build reusable sentiment analysis models that can be utilized for making inferences in different languages without requiring separate models and resources for each language? To that end, we built a recurrent neural network model in the language that had the largest amount of data available. We took a general-to-specific model building strategy where the larger corpus that had reviews from different domains was first used to train the RNN model and a smaller single-domain corpus of sentiment reviews was used to specialize the model on the given domain. During scoring time, we used corpora for the given domain in different languages and translated them to English to be able to classify sentiments with the trained model. 
Experimental results showed that the proposed multilingual approach outperforms both the majority baseline and the lexicon-based baseline. In this paper we made the sentiment analysis model specific to a single domain. For future work, we would like to investigate the effectiveness of our model on different review domains including hotel reviews and on different problems such as detecting stance.
[ "SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28", " English reviews , restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian)" ]
2,720
qasper_e
en
null
1bd9a082bd7b5ab6c152dbda3db38c0c3bc92cddf82383a3
How big are improvements of small-scale unbalanced datasets when sentence representation is enhanced with topic information?
Introduction Medical text mining is an exciting area and is becoming attractive to natural language processing (NLP) researchers. Clinical notes are an example of text in the medical area that recent work has focused on BIBREF0, BIBREF1, BIBREF2. This work studies abbreviation disambiguation on clinical notes BIBREF3, BIBREF4, specifically those used commonly by physicians and nurses. Such clinical abbreviations can have a large number of meanings, depending on the specialty BIBREF5, BIBREF6. For example, the term MR can mean magnetic resonance, mitral regurgitation, mental retardation, medical record and the general English Mister (Mr.). Table TABREF1 illustrates such an example. Abbreviation disambiguation is an important task in medical text understanding task BIBREF7. Successful recognition of the abbreviations in the notes can contribute to downstream tasks such as classification, named entity recognition, and relation extraction BIBREF7. Recent work focuses on formulating the abbreviation disambiguation task as a classification problem, where the possible senses of a given abbreviation term are pre-defined with the help of domain experts BIBREF6, BIBREF5. Traditional features such as part-of-speech (POS) and Term Frequency-Inverse Document Frequency (TF-IDF) are widely investigated for clinical notes classification. Classifiers like support vector machines (SVMs) and random forests (RFs) are used to make predictions BIBREF1. Such methods depend heavily on feature engineering. Recently, deep features have been investigated in the medical domain. Word embeddings BIBREF8 and Convolutional Neural Networks (CNNs) BIBREF9, BIBREF10 provide very competitive performance on text classification for clinical notes and abbreviation disambiguation task BIBREF0, BIBREF11, BIBREF12, BIBREF6. Beyond vanilla embeddings, BIBREF13 utilized contextual features to do abbreviation expansion. Another challenge is the difficulty in obtaining training data: clinical notes have many restrictions due to privacy issues and require domain experts to develop high-quality annotations, thus leading to limited annotated training and testing data. Another difficulty is that in the real world (and in the existing public datasets), some abbreviation term-sense pairs (for example, AB as abortion) have a very high frequency of occurrence BIBREF5, while others are rarely found. This long tail issue creates the challenge of training under unbalanced sample distributions. We tackle these problems in the setting of few-shot learning BIBREF14, BIBREF15 where only a few or a low number of samples can be found in the training dataset to make use of limited resources. We propose a model that combines deep contextual features based on ELMo BIBREF16 and topic information to solve the abbreviation disambiguation task. Our contributions can be summarized as: 1) we re-examined and corrected an existing dataset for training and we collected a test set for evaluation with focus especially for rare senses; 2) we proposed a few-shot learning approach which combines topic information and contextualized word embeddings to solve clinical abbreviation disambiguation task. The implementation is available online; 3) as limited research are conducted on this particular task, we evaluated and compared a number of baseline methods including classical models and deep models comprehensively. 
Datasets Training Dataset UM Inventory BIBREF5 is a public dataset created by researchers from the University of Minnesota, containing about 37,500 training samples with 75 abbreviation terms. Existing work reports abbreviation disambiguation results on 50 abbreviation terms BIBREF6, BIBREF5, BIBREF17. However, after carefully reviewing this dataset, we found that it contains many samples where medical professionals disagree: wrong samples and uncategorized samples. Due to these mistakes and flaws of this dataset, we removed the erroneous samples and eventually selected 30 abbreviation terms as our training dataset that can be made public. Among all the abbreviation terms, we have 11,466 samples, and 93 term-sense pairs in total (on average 123.3 samples/term-sense pair and 382.2 samples/term). Some term-sense pairs are very popular, with a larger number of training samples, but some are not (typically with fewer than 5 samples); we call these rare-sense cases. More details can be found in Appendix SECREF7. Testing Dataset Our testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table) and Case Reports articles from PubMed, as this type of content is close to medical notes. We provide detailed information in Appendix SECREF8, including the steps to create the testing dataset. Eventually, we have a balanced testing dataset, where each term-sense pair has at least 11 and up to 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15). Baselines We conducted a comprehensive comparison with the baseline models, some of which had never been investigated for the abbreviation disambiguation task. We applied traditional features by simply taking the TF-IDF features as the inputs into the classic classifiers. Deep features are also considered: a Doc2vec model BIBREF19 was pre-trained using Gensim and these word embeddings were applied to initialize deep models and fine-tuned. TF-IDF: We applied TF-IDF features as inputs to four classifiers: support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB) and random forest (RF); CNN: We followed the same architecture as BIBREF9; LSTM: We applied an LSTM model BIBREF20 to classify the sentences with pre-trained word embeddings; LSTM-soft: We then added a soft-attention BIBREF21 layer on top of the LSTM model where we computed soft alignment scores over each of the hidden states; LSTM-self: We applied a self-attention layer BIBREF22 to the LSTM model. We denote the content vector as $c_i$ for each sentence $i$, as the input to the last classification layer. Topic-attention Model ELMo ELMo is a new type of word representation introduced by BIBREF16 which considers contextual information captured by pre-trained BiLSTM language models. Similar works like BERT BIBREF23 and BioBERT BIBREF24 also consider context but with deeper structures. Compared with those models, ELMo has fewer parameters and is less prone to overfitting. We trained our ELMo model on the MIMIC-III corpus. Since some sentences also appear in the test set, one may raise the concern of performance inflation in testing. However, we pre-trained ELMo using the whole corpus of MIMIC in an unsupervised way, so the classification labels were not visible during training. To train the ELMo model, we adapted code from AllenNLP, set the word embedding dimension to 64, and applied two BiLSTM layers. 
For all of our experiments that involved ELMo, we initialized the word embeddings from the last layer of ELMo. Topic-attention We propose a neural topic-attention model for text classification. Our assumption is that topic information plays an important role in the disambiguation of a given sentence. For example, the abbreviation of the medical term FISH has two potential senses: general English fish as seafood and the sense of fluorescent in situ hybridization BIBREF17. The former case typically goes with topics such as food and allergies, while the latter appears in topics related to examination test reports. In our model, a topic-attention module is applied to add topic information into the final sentence representation. This module calculates the distribution of topic-attention weights on a list of topic vectors. As shown in Figure SECREF5, we took the content vector $c_i$ (from soft-attention BIBREF22) and a topic matrix $T_{topic}=[t_1,t_2,\ldots ,t_j]$ (where each $t_i$ is a column vector in the figure and we illustrate four topics) as the inputs to the topic-attention module, and then calculated a weighted summation of the topic vectors. The final sentence representation $r_i$ for sentence $i$ is computed with trainable parameters $W_{topic}$ and $b_{topic}$, where $\beta_{it}$ represents the topic-attention weights over the topic vectors. The final sentence representation is denoted as $r_i$, and $[\cdot ,\cdot ]$ means concatenation. Here $s_i$ is the representation of the sentence which includes the topic information. The final sentence representation $r_i$ is the concatenation of $c_i$ and $s_i$, so we have both a context-aware sentence representation and a topic-related representation. Then we added a fully-connected layer, followed by a softmax layer to perform classification with a cross-entropy loss function. Topic Matrix To generate the topic matrix $T_{topic}$ as in Equation DISPLAY_FORM9, we propose a convolution-based method to generate topic vectors. We first pre-trained a topic model using the Latent Dirichlet Allocation (LDA) BIBREF25 model on MIMIC-III notes as used by the ELMo model. We set the number of topics to be 50 and ran it for 30 iterations. Then we were able to get a list of top $k$ words (we set $k=100$) for each topic. To get the topic vector $t$ for a specific topic, we apply a convolution followed by max pooling over the embeddings of its top words, where $e_j$ (column vector) is the pre-trained Doc2vec word embedding for the top word $j$ from the current topic, $Conv(\cdot )$ indicates a convolutional layer, and $maxpool(\cdot )$ is a max pooling layer. We finally reshaped the output $t$ into a one-dimensional vector. Eventually we collected all topic vectors as the topic matrix $T_{topic}$ in Figure SECREF5. Evaluation We did three groups of experiments including two baseline groups and our proposed model. The first group used the TF-IDF features from Section SECREF3 with the traditional models. The Naïve Bayesian classifier (NB) has the highest scores among all traditional methods. The second group of experiments used the neural models described in Section SECREF3; the LSTM with self-attention (LSTM-self) has competitive results in this group, and we choose it as our base model. Notably, this is the first study that compares and evaluates LSTM-based models on the medical term abbreviation disambiguation task. (Table: experimental results; macro F1 is reported in all experiments. Figure: the topic-attention model.) The last group contains the results of our proposed model with different settings. 
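The display equations of the topic-attention module are not reproduced above, so the snippet below is only one plausible instantiation of the textual description: a trainable projection (standing in for $W_{topic}$ and $b_{topic}$) scores each topic vector against the content vector $c_i$, a softmax gives the weights $\beta$, a weighted sum yields $s_i$, and the result is the concatenation $[c_i; s_i]$. It should not be read as the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAttention(nn.Module):
    """Hypothetical sketch of the topic-attention module described in the text."""
    def __init__(self, d_content: int, d_topic: int):
        super().__init__()
        self.proj = nn.Linear(d_topic, d_content)  # plays the role of W_topic, b_topic

    def forward(self, c, topics):
        # c: (batch, d_content) content vectors; topics: (n_topics, d_topic).
        scores = c @ self.proj(topics).t()    # (batch, n_topics)
        beta = F.softmax(scores, dim=-1)      # topic-attention weights
        s = beta @ topics                     # weighted sum of topic vectors
        return torch.cat([c, s], dim=-1)      # r_i = [c_i ; s_i]
```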
We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score. To further compare our proposed topic-attention model and the base model, we report an average of area under the curve(AUC) score among all 30 terms: the base model has an average AUC of 0.7189, and our topic-attention model (ELMo+Topic) achieves an average AUC of 0.8196. We provide a case study in Appendix SECREF9. The results show that the model can benefit from ELMo, which considers contextual features, and the topic-attention module, which brings in topic information. We can conclude that under the few-shot learning setting, our proposed model can better capture the sentence features when only limited training samples are explored in a small-scale dataset. Conclusion In this paper, we propose a neural topic-attention model with few-shot learning for medical abbreviation disambiguation task. We also manually cleaned and collected training and testing data for this task, which we release to promote related research in NLP with clinical notes. In addition, we evaluated and compared a comprehensive set of baseline models, some of which had never been applied to the medical term abbreviation disambiguation task. Future work would be to adapt other models like BioBERT or BERT to our proposed topic-attention model. We will also extend the method into other clinical notes tasks such as drug name recognition and ICD-9 code auto-assigning BIBREF26. In addition, other LDA-based approach can be investigated. Dataset Details Figure FIGREF11 shows the histogram for the distribution of term-sense pair sample numbers. The X-axis gives the pair sample numbers and the Y-axis shows the counts. For example, the first bar shows that there are 43 term-sense pairs that have the sample number in the range of 0-50. We also show histogram of class numbers for all terms in Figure FIGREF11. The Y-axis gives the counts while the X-axis gives the number of classes. For instance, the first bar means there are 12 terms contain 2 classes. Testing Dataset Since the training dataset is unbalanced and relatively small, it is hard to split it further into training and testing. Existing work BIBREF6, BIBREF5 conducted fold validation on the datasets and we found there are extreme rare senses for which only one or two samples exist. Besides, we believe that it is better to evaluate on a balanced dataset to determine whether it performs equally well on all classes. 
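The macro F1 and average-AUC comparisons reported above can be reproduced with standard scikit-learn utilities; the sketch below uses synthetic per-term predictions purely to show the aggregation (macro F1 per term, then AUC averaged over terms), not the paper's actual outputs.

```python
# Sketch of the reported metrics: macro F1 per term and AUC averaged over terms.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.RandomState(0)
per_term_auc = []
for _ in range(30):                              # 30 abbreviation terms
    y_true = rng.randint(0, 2, size=30)          # toy binary senses for one term
    y_prob = rng.rand(30)                        # toy predicted probabilities
    y_pred = (y_prob > 0.5).astype(int)
    print("macro F1:", round(f1_score(y_true, y_pred, average="macro"), 3))
    per_term_auc.append(roc_auc_score(y_true, y_prob))

print("average AUC over terms:", round(float(np.mean(per_term_auc)), 3))
```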
Most existing work deals with unbalanced training and testing sets, which may lead to very high accuracy when a dominating class appears in both the training and testing set; however, such a model may perform poorly on rare testing cases, because only a few samples have been seen during training. To be fair to all the classes, good performance on these rare cases is also required; otherwise mistakes on rare senses can have severe consequences. In this work, we are particularly interested in how the model performs across all senses, especially the rare ones. A balanced test set also prevents the model from trivially predicting the dominant class and achieving high accuracy. As a result, we decided to create a dataset with the same number of samples for each case in the test dataset. Our testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table) and Case Reports articles from PubMed, as this content is close to medical notes. To create the test set, we first followed the approach of BIBREF6, who applied an auto-generating method. Initially, we built a term-sense dictionary from the training dataset. Then we matched the sense words or phrases in the MIMIC-III notes dataset, and wherever there was a match, we replaced the words or phrases with the abbreviation term. We then asked two researchers with a medical background to check the matched samples manually with the following judgment: given this sentence, the abbreviation term and its sense, is the content enough for you to infer the sense, and is this the right sense? To estimate the agreement on the annotations, we randomly selected a subset of 120 samples and had the two annotators annotate it individually. We obtained a Kappa score BIBREF27 of 0.96, which is considered near-perfect agreement. We then distributed the work to the two annotators, and each of them labeled half of the dataset, which means each sample was labeled by a single annotator. For some rare term-sense pairs, we failed to find samples in MIMIC-III. The annotators then manually searched for these senses in the PubMed data source, aiming to find clinical-note-like sentences. They picked good sentences from these results as testing samples, requiring that the keywords exist and the content be informative. For those senses that are extremely rare, we let the annotators create sentences in clinical-note style as testing samples, based on their experience. Eventually, we have a balanced testing dataset, where each term-sense pair has around 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15), and a comparison with the training dataset is shown in Figure FIGREF11. Due to the difficulty of collecting the testing dataset, we decided to collect samples only for a random selection of 30 terms. On average, it took a few hours per researcher to generate testing samples for each abbreviation term. Case Study We now select two representative terms, AC and IM, and plot their receiver operating characteristic (ROC) curves. The first term has a relatively large number of classes and the second one has extremely unbalanced training samples. We show the details in Table TABREF16. We have 8 classes for the term AC. Figure FIGREF15(a) illustrates the results of our best-performing model and Figure FIGREF15(b) shows the results of the base model. The accuracy and F1 score improve from 0.3898 and 0.2830 to 0.4915 and 0.4059, respectively. 
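As an aside on the test-set construction described earlier in this appendix, the automatic candidate-generation step (dictionary lookup plus sense-to-abbreviation substitution, prior to manual review) could be sketched as follows; the dictionary entries and note sentences are invented examples, not entries from the released data.

```python
# Sketch of the automatic candidate-generation step: find sentences that spell out
# a known sense and replace the spelled-out form with the abbreviation term.
import re

# Term-sense dictionary built from the training data (entries here are illustrative).
term_senses = {
    "FISH": ["fluorescent in situ hybridization"],
    "AC": ["before meals", "abdominal circumference"],
}

def generate_candidates(sentences):
    """Yield (term, sense, rewritten_sentence) triples for later manual review."""
    for sent in sentences:
        for term, senses in term_senses.items():
            for sense in senses:
                pattern = re.compile(re.escape(sense), flags=re.IGNORECASE)
                if pattern.search(sent):
                    yield term, sense, pattern.sub(term, sent)

notes = [
    "Fluorescent in situ hybridization was positive for BCR-ABL rearrangement.",
    "Abdominal circumference measured 34 cm on today's ultrasound.",
]
for term, sense, candidate in generate_candidates(notes):
    print(f"[{term} -> {sense}] {candidate}")
```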
Regarding the rare senses (for example, class 0, 1, 4 and 6), we can observe an increase in the ROC areas. Class 6 has an obvious improvement from 0.75 to 1.00. Such improvements in the rare senses make a huge difference in the reported average accuracy and F1 score, since we have a nearly equal number of samples for each class in the testing data. Similarly, we show the plots for IM term in Figure FIGREF15(c) and FIGREF15(d). IM has only two classes, but they are very unbalanced in training set, as shown in Table TABREF16. The accuracy and F1 scores improved from 0.6667 and 0.6250 to 0.8667 and 0.8667 respectively. We observe improvements in the ROC areas for both classes. This observation further shows that our model is more sensitive to all the class samples compared to the base model, even for the terms that have only a few samples in the training set. Again, by plotting the ROC curves and comparing AUC areas, we show that our model, which applies ELMo and topic-attention, has a better representation ability under the setting of few-shot learning.
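The per-class ROC/AUC comparison used in this case study can be computed with scikit-learn along the following lines; the labels and predicted probabilities below are synthetic stand-ins for real model outputs, and a 3-sense term is used instead of the 8-sense AC term for brevity.

```python
# Sketch of the per-class ROC/AUC comparison: one-vs-rest AUC from predicted
# class probabilities, as used to contrast the base and topic-attention models.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])                       # toy sense labels
y_prob = np.random.RandomState(0).dirichlet(alpha=[1, 1, 1], size=len(y_true))

y_bin = label_binarize(y_true, classes=[0, 1, 2])
per_class_auc = [roc_auc_score(y_bin[:, k], y_prob[:, k]) for k in range(3)]
print("per-class AUC:", [f"{a:.2f}" for a in per_class_auc])
print("macro-average AUC:", f"{np.mean(per_class_auc):.2f}")
```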
[ "7.36% on accuracy and 9.69% on F1 score", "it has 0.024 improvement in accuracy comparing to ELMO Only and 0.006 improvement in F1 score comparing to ELMO Only too" ]
2,890
qasper_e
en
null
d3944ab2d7e6cb9d6f722987f0406aa0aa01c8ace8608a89
Do they use datasets with transcribed text or do they determine text from the audio?
Introduction Recently, deep learning algorithms have successfully addressed problems in various fields, such as image classification, machine translation, speech recognition, text-to-speech generation and other machine learning related areas BIBREF0 , BIBREF1 , BIBREF2 . Similarly, substantial improvements in performance have been obtained when deep learning algorithms have been applied to statistical speech processing BIBREF3 . These fundamental improvements have led researchers to investigate additional topics related to human nature, which have long been objects of study. One such topic involves understanding human emotions and reflecting it through machine intelligence, such as emotional dialogue models BIBREF4 , BIBREF5 . In developing emotionally aware intelligence, the very first step is building robust emotion classifiers that display good performance regardless of the application; this outcome is considered to be one of the fundamental research goals in affective computing BIBREF6 . In particular, the speech emotion recognition task is one of the most important problems in the field of paralinguistics. This field has recently broadened its applications, as it is a crucial factor in optimal human-computer interactions, including dialog systems. The goal of speech emotion recognition is to predict the emotional content of speech and to classify speech according to one of several labels (i.e., happy, sad, neutral, and angry). Various types of deep learning methods have been applied to increase the performance of emotion classifiers; however, this task is still considered to be challenging for several reasons. First, insufficient data for training complex neural network-based models are available, due to the costs associated with human involvement. Second, the characteristics of emotions must be learned from low-level speech signals. Feature-based models display limited skills when applied to this problem. To overcome these limitations, we propose a model that uses high-level text transcription, as well as low-level audio signals, to utilize the information contained within low-resource datasets to a greater degree. Given recent improvements in automatic speech recognition (ASR) technology BIBREF7 , BIBREF2 , BIBREF8 , BIBREF9 , speech transcription can be carried out using audio signals with considerable skill. The emotional content of speech is clearly indicated by the emotion words contained in a sentence BIBREF10 , such as “lovely” and “awesome,” which carry strong emotions compared to generic (non-emotion) words, such as “person” and “day.” Thus, we hypothesize that the speech emotion recognition model will be benefit from the incorporation of high-level textual input. In this paper, we propose a novel deep dual recurrent encoder model that simultaneously utilizes audio and text data in recognizing emotions from speech. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods by 68.8% to 71.8% when applied to the IEMOCAP dataset, which is one of the most well-studied datasets. Based on an error analysis of the models, we show that our proposed model accurately identifies emotion classes. Moreover, the neutral class misclassification bias frequently exhibited by previous models, which focus on audio features, is less pronounced in our model. 
Related work Classical machine learning algorithms, such as hidden Markov models (HMMs), support vector machines (SVMs), and decision tree-based methods, have been employed in speech emotion recognition problems BIBREF11 , BIBREF12 , BIBREF13 . Recently, researchers have proposed various neural network-based architectures to improve the performance of speech emotion recognition. An initial study utilized deep neural networks (DNNs) to extract high-level features from raw audio data and demonstrated its effectiveness in speech emotion recognition BIBREF14 . With the advancement of deep learning methods, more complex neural-based architectures have been proposed. Convolutional neural network (CNN)-based models have been trained on information derived from raw audio signals using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) BIBREF15 , BIBREF16 , BIBREF17 . These neural network-based models are combined to produce higher-complexity models BIBREF18 , BIBREF19 , and these models achieved the best-recorded performance when applied to the IEMOCAP dataset. Another line of research has focused on adopting variant machine learning techniques combined with neural network-based models. One researcher utilized the multiobject learning approach and used gender and naturalness as auxiliary tasks so that the neural network-based model learned more features from a given dataset BIBREF20 . Another researcher investigated transfer learning methods, leveraging external data from related domains BIBREF21 . As emotional dialogue is composed of sound and spoken content, researchers have also investigated the combination of acoustic features and language information, built belief network-based methods of identifying emotional key phrases, and assessed the emotional salience of verbal cues from both phoneme sequences and words BIBREF22 , BIBREF23 . However, none of these studies have utilized information from speech signals and text sequences simultaneously in an end-to-end learning neural network-based model to classify emotions. Model This section describes the methodologies that are applied to the speech emotion recognition task. We start by introducing the recurrent encoder model for the audio and text modalities individually. We then propose a multimodal approach that encodes both audio and textual information simultaneously via a dual recurrent encoder. Audio Recurrent Encoder (ARE) Motivated by the architecture used in BIBREF24 , BIBREF25 , we build an audio recurrent encoder (ARE) to predict the class of a given audio signal. Once MFCC features have been extracted from an audio signal, a subset of the sequential features is fed into the RNN (i.e., gated recurrent units (GRUs)), which leads to the formation of the network's internal hidden state INLINEFORM0 to model the time series patterns. This internal hidden state is updated at each time step with the input data INLINEFORM1 and the hidden state of the previous time step INLINEFORM2 as follows: DISPLAYFORM0 where INLINEFORM0 is the RNN function with weight parameter INLINEFORM1 , INLINEFORM2 represents the hidden state at t- INLINEFORM3 time step, and INLINEFORM4 represents the t- INLINEFORM5 MFCC features in INLINEFORM6 . After encoding the audio signal INLINEFORM7 with the RNN, the last hidden state of the RNN, INLINEFORM8 , is considered to be the representative vector that contains all of the sequential audio data. 
This vector is then concatenated with another prosodic feature vector, INLINEFORM9 , to generate a more informative vector representation of the signal, INLINEFORM10 . The MFCC and the prosodic features are extracted from the audio signal using the openSMILE toolkit BIBREF26 , INLINEFORM11 , respectively. Finally, the emotion class is predicted by applying the softmax function to the vector INLINEFORM12 . For a given audio sample INLINEFORM13 , we assume that INLINEFORM14 is the true label vector, which contains all zeros but contains a one at the correct class, and INLINEFORM15 is the predicted probability distribution from the softmax layer. The training objective then takes the following form: DISPLAYFORM0 where INLINEFORM0 is the calculated representative vector of the audio signal with dimensionality INLINEFORM1 . The INLINEFORM2 and the bias INLINEFORM3 are learned model parameters. C is the total number of classes, and N is the total number of samples used in training. The upper part of Figure shows the architecture of the ARE model. Text Recurrent Encoder (TRE) We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in such a way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 is last hidden state of the text-RNN, INLINEFORM1 , and the INLINEFORM2 and bias INLINEFORM3 are learned model parameters. The lower part of Figure indicates the architecture of the TRE model. Multimodal Dual Recurrent Encoder (MDRE) We present a novel architecture called the multimodal dual recurrent encoder (MDRE) to overcome the limitations of existing approaches. In this study, we consider multiple modalities, such as MFCC features, prosodic features and transcripts, which contain sequential audio information, statistical audio information and textual information, respectively. These types of data are the same as those used in the ARE and TRE cases. The MDRE model employs two RNNs to encode data from the audio signal and textual inputs independently. The audio-RNN encodes MFCC features from the audio signal using equation EQREF2 . The last hidden state of the audio-RNN is concatenated with the prosodic features to form the final vector representation INLINEFORM0 , and this vector is then passed through a fully connected neural network layer to form the audio encoding vector A. On the other hand, the text-RNN encodes the word sequence of the transcript using equation EQREF2 . 
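Anticipating the fusion step completed in the next paragraph (concatenating the audio and text encodings before the softmax), a compact PyTorch sketch of the dual-encoder forward pass might look as follows; hidden sizes and the single-layer GRUs are illustrative choices, since the paper tunes these by hyperparameter search.

```python
# Condensed PyTorch sketch of the multimodal dual recurrent encoder (MDRE):
# one GRU over MFCC frames, one GRU over transcript tokens, fused before softmax.
import torch
import torch.nn as nn

class MDRE(nn.Module):
    def __init__(self, mfcc_dim=39, prosody_dim=35, vocab_size=3747,
                 emb_dim=300, hidden=128, num_classes=4):
        super().__init__()
        self.audio_rnn = nn.GRU(mfcc_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.text_rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.audio_fc = nn.Linear(hidden + prosody_dim, hidden)  # -> audio encoding A
        self.text_fc = nn.Linear(hidden, hidden)                 # -> text encoding T
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, mfcc, prosody, tokens):
        # mfcc: (batch, audio_steps, 39), prosody: (batch, 35), tokens: (batch, text_steps)
        _, h_audio = self.audio_rnn(mfcc)                        # last hidden state
        a = self.audio_fc(torch.cat([h_audio[-1], prosody], dim=-1))
        _, h_text = self.text_rnn(self.embed(tokens))
        t = self.text_fc(h_text[-1])
        return self.out(torch.cat([a, t], dim=-1))               # logits -> softmax / CE loss

model = MDRE()
logits = model(torch.randn(2, 750, 39), torch.randn(2, 35),
               torch.randint(1, 3747, (2, 128)))
print(logits.shape)  # torch.Size([2, 4])
```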
The final hidden states of the text-RNN are also passed through another fully connected neural network layer to form a textual encoding vector T. Finally, the emotion class is predicted by applying the softmax function to the concatenation of the vectors A and T. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 is the feed-forward neural network with weight parameter INLINEFORM1 , and INLINEFORM2 , INLINEFORM3 are the final encoding vectors from the audio-RNN and text-RNN, respectively. INLINEFORM4 and the bias INLINEFORM5 are learned model parameters. Multimodal Dual Recurrent Encoder with Attention (MDREA) Inspired by the concept of the attention mechanism used in neural machine translation BIBREF28 , we propose a novel multimodal attention method to focus on the specific parts of a transcript that contain strong emotional information, conditioning on the audio information. Figure shows the architecture of the MDREA model. First, the audio data and text data are encoded with the audio-RNN and text-RNN using equation EQREF2 . We then consider the final audio encoding vector INLINEFORM0 as a context vector. As seen in equation EQREF9 , during each time step t, the dot product between the context vector e and the hidden state of the text-RNN at the t-th step INLINEFORM1 is evaluated to calculate a similarity score INLINEFORM2 . Using this score INLINEFORM3 as a weight parameter, the weighted sum of the sequence of hidden states of the text-RNN, INLINEFORM4 , is calculated to generate an attention-application vector Z. This attention-application vector is concatenated with the final encoding vector of the audio-RNN INLINEFORM5 (equation EQREF7 ), which is then passed through the softmax function to predict the emotion class. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 and the bias INLINEFORM1 are learned model parameters. Dataset We evaluate our model using the Interactive Emotional Dyadic Motion Capture (IEMOCAP) BIBREF18 dataset. This dataset was collected following theatrical theory in order to simulate natural dyadic interactions between actors. We use categorical evaluations with majority agreement. We use only four emotional categories (happy, sad, angry, and neutral) to compare the performance of our model with other research using the same categories. The IEMOCAP dataset includes five sessions, and each session contains utterances from two speakers (one male and one female). This data collection process resulted in 10 unique speakers. For consistent comparison with previous work, we merge the excitement dataset with the happiness dataset. The final dataset contains a total of 5531 utterances (1636 happy, 1084 sad, 1103 angry, 1708 neutral). Feature extraction To extract speech information from audio signals, we use MFCC values, which are widely used in analyzing audio signals. The MFCC feature set contains a total of 39 features, which include 12 MFCC parameters (1-12) from the 26 Mel-frequency bands, a log-energy parameter, 13 delta and 13 acceleration coefficients. The frame size is set to 25 ms at a rate of 10 ms with the Hamming function. According to the length of each wave file, the sequential step of the MFCC features varies. 
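The 39-dimensional MFCC configuration just described can be approximated with librosa as sketched below; the paper itself extracts these features with the openSMILE toolkit, so exact values will differ, the 0th cepstral coefficient is used here as a stand-in for the log-energy term, and the audio path is hypothetical.

```python
# Rough librosa approximation of the 39-dimensional MFCC feature set described above.
import numpy as np
import librosa

def extract_mfcc_39(path):
    y, sr = librosa.load(path, sr=16000)
    n_fft = int(0.025 * sr)        # 25 ms frame
    hop = int(0.010 * sr)          # 10 ms shift
    # 13 static coefficients over 26 Mel bands; the 0th acts as a log-energy-like term.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_mels=26,
                                n_fft=n_fft, hop_length=hop, window="hamming")
    delta = librosa.feature.delta(mfcc)            # 13 delta coefficients
    accel = librosa.feature.delta(mfcc, order=2)   # 13 acceleration coefficients
    feats = np.vstack([mfcc, delta, accel])        # (39, num_frames)
    return feats.T                                 # one 39-dim vector per frame

# features = extract_mfcc_39("session1_utterance.wav")  # path is hypothetical
```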
To extract additional information from the data, we also use prosodic features, which show effectiveness in affective computing. The prosodic features are composed of 35 features, which include the F0 frequency, the voicing probability, and the loudness contours. All of these MFCC and prosodic features are extracted from the data using the OpenSMILE toolkit BIBREF26 . Implementation details Among the variants of the RNN function, we use GRUs as they yield comparable performance to that of the LSTM and include a smaller number of weight parameters BIBREF29 . We use a max encoder step of 750 for the audio input, based on the implementation choices presented in BIBREF30 and 128 for the text input because it covers the maximum length of the transcripts. The vocabulary size of the dataset is 3,747, including the “_UNK_" token, which represents unknown words, and the “_PAD_" token, which is used to indicate padding information added while preparing mini-batch data. The number of hidden units and the number of layers in the RNN for each model (ARE, TRE, MDRE and MDREA) are selected based on extensive hyperparameter search experiments. The weights of the hidden units are initialized using orthogonal weights BIBREF31 ], and the text embedding layer is initialized from pretrained word-embedding vectors BIBREF32 . In preparing the textual dataset, we first use the released transcripts of the IEMOCAP dataset for simplicity. To investigate the practical performance, we then process all of the IEMOCAP audio data using an ASR system (the Google Cloud Speech API) and retrieve the transcripts. The performance of the Google ASR system is reflected by its word error rate (WER) of 5.53%. Performance evaluation As the dataset is not explicitly split beforehand into training, development, and testing sets, we perform 5-fold cross validation to determine the overall performance of the model. The data in each fold are split into training, development, and testing datasets (8:0.5:1.5, respectively). After training the model, we measure the weighted average precision (WAP) over the 5-fold dataset. We train and evaluate the model 10 times per fold, and the model performance is assessed in terms of the mean score and standard deviation. We examine the WAP values, which are shown in Table 1. First, our ARE model shows the baseline performance because we use minimal audio features, such as the MFCC and prosodic features with simple architectures. On the other hand, the TRE model shows higher performance gain compared to the ARE. From this result, we note that textual data are informative in emotion prediction tasks, and the recurrent encoder model is effective in understanding these types of sequential data. Second, the newly proposed model, MDRE, shows a substantial performance gain. It thus achieves the state-of-the-art performance with a WAP value of 0.718. This result shows that multimodal information is a key factor in affective computing. Lastly, the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688) BIBREF19 . However, the MDREA model does not match the performance of the MDRE model, even though it utilizes a more complex architecture. We believe that this result arises because insufficient data are available to properly determine the complex model parameters in the MDREA model. Moreover, we presume that this model will show better performance when the audio signals are aligned with the textual sequence while applying the attention mechanism. 
We leave the implementation of this point as a future research direction. To investigate the practical performance of the proposed models, we conduct further experiments with the ASR-processed transcript data (see “-ASR” models in Table ). The label accuracy of the processed transcripts is 5.53% WER. The TRE-ASR, MDRE-ASR and MDREA-ASR models reflect degraded performance compared to that of the TRE, MDRE and MDREA models. However, the performance of these models is still competitive; in particular, the MDRE-ASR model outperforms the previous best-performing model, 3CNN-LSTM10H (WAP 0.691 to 0.688). Error analysis We analyze the predictions of the ARE, TRE, and MDRE models. Figure shows the confusion matrix of each model. The ARE model (Fig. ) incorrectly classifies most instances of happy as neutral (43.51%); thus, it shows reduced accuracy (35.15%) in predicting the the happy class. Overall, most of the emotion classes are frequently confused with the neutral class. This observation is in line with the findings of BIBREF30 , who noted that the neutral class is located in the center of the activation-valence space, complicating its discrimination from the other classes. Interestingly, the TRE model (Fig. ) shows greater prediction gains in predicting the happy class when compared to the ARE model (35.15% to 75.73%). This result seems plausible because the model can benefit from the differences among the distributions of words in happy and neutral expressions, which gives more emotional information to the model than that of the audio signal data. On the other hand, it is striking that the TRE model incorrectly predicts instances of the sad class as the happy class 16.20% of the time, even though these emotional states are opposites of one another. The MDRE model (Fig. ) compensates for the weaknesses of the previous two models (ARE and TRE) and benefits from their strengths to a surprising degree. The values arranged along the diagonal axis show that all of the accuracies of the correctly predicted class have increased. Furthermore, the occurrence of the incorrect “sad-to-happy" cases in the TRE model is reduced from 16.20% to 9.15%. Conclusions In this paper, we propose a novel multimodal dual recurrent encoder model that simultaneously utilizes text data, as well as audio signals, to permit the better understanding of speech data. Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Extensive experiments show that our proposed model outperforms other state-of-the-art methods in classifying the four emotion categories, and accuracies ranging from 68.8% to 71.8% are obtained when the model is applied to the IEMOCAP dataset. In particular, it resolves the issue in which predictions frequently incorrectly yield the neutral class, as occurs in previous models that focus on audio features. In the future work, we aim to extend the modalities to audio, text and video inputs. Furthermore, we plan to investigate the application of the attention mechanism to data derived from multiple modalities. This approach seems likely to uncover enhanced learning schemes that will increase performance in both speech emotion recognition and other multimodal classification tasks. Acknowledgments K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. 
This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under Industrial Technology Innovation Program (No.10073144).
[ "They use text transcription.", "both" ]
3,198
qasper_e
en
null
13e1dbbe70ad33f26ac4d63352332ab577dbdf318691ace0
What clustering algorithms were used?
Introduction Performance appraisal (PA) is an important HR process, particularly for modern organizations that crucially depend on the skills and expertise of their workforce. The PA process enables an organization to periodically measure and evaluate every employee's performance. It also provides a mechanism to link the goals established by the organization to its each employee's day-to-day activities and performance. Design and analysis of PA processes is a lively area of research within the HR community BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The PA process in any modern organization is nowadays implemented and tracked through an IT system (the PA system) that records the interactions that happen in various steps. Availability of this data in a computer-readable database opens up opportunities to analyze it using automated statistical, data-mining and text-mining techniques, to generate novel and actionable insights / patterns and to help in improving the quality and effectiveness of the PA process BIBREF4 , BIBREF5 , BIBREF6 . Automated analysis of large-scale PA data is now facilitated by technological and algorithmic advances, and is becoming essential for large organizations containing thousands of geographically distributed employees handling a wide variety of roles and tasks. A typical PA process involves purposeful multi-step multi-modal communication between employees, their supervisors and their peers. In most PA processes, the communication includes the following steps: (i) in self-appraisal, an employee records his/her achievements, activities, tasks handled etc.; (ii) in supervisor assessment, the supervisor provides the criticism, evaluation and suggestions for improvement of performance etc.; and (iii) in peer feedback (aka INLINEFORM0 view), the peers of the employee provide their feedback. There are several business questions that managers are interested in. Examples: In this paper, we develop text mining techniques that can automatically produce answers to these questions. Since the intended users are HR executives, ideally, the techniques should work with minimum training data and experimentation with parameter setting. These techniques have been implemented and are being used in a PA system in a large multi-national IT company. The rest of the paper is organized as follows. Section SECREF2 summarizes related work. Section SECREF3 summarizes the PA dataset used in this paper. Section SECREF4 applies sentence classification algorithms to automatically discover three important classes of sentences in the PA corpus viz., sentences that discuss strengths, weaknesses of employees and contain suggestions for improving her performance. Section SECREF5 considers the problem of mapping the actual targets mentioned in strengths, weaknesses and suggestions to a fixed set of attributes. In Section SECREF6 , we discuss how the feedback from peers for a particular employee can be summarized. In Section SECREF7 we draw conclusions and identify some further work. Related Work We first review some work related to sentence classification. Semantically classifying sentences (based on the sentence's purpose) is a much harder task, and is gaining increasing attention from linguists and NLP researchers. McKnight and Srinivasan BIBREF7 and Yamamoto and Takagi BIBREF8 used SVM to classify sentences in biomedical abstracts into classes such as INTRODUCTION, BACKGROUND, PURPOSE, METHOD, RESULT, CONCLUSION. Cohen et al. 
BIBREF9 applied SVM and other techniques to learn classifiers for sentences in emails into classes, which are speech acts defined by a verb-noun pair, with verbs such as request, propose, amend, commit, deliver and nouns such as meeting, document, committee; see also BIBREF10 . Khoo et al. BIBREF11 uses various classifiers to classify sentences in emails into classes such as APOLOGY, INSTRUCTION, QUESTION, REQUEST, SALUTATION, STATEMENT, SUGGESTION, THANKING etc. Qadir and Riloff BIBREF12 proposes several filters and classifiers to classify sentences on message boards (community QA systems) into 4 speech acts: COMMISSIVE (speaker commits to a future action), DIRECTIVE (speaker expects listener to take some action), EXPRESSIVE (speaker expresses his or her psychological state to the listener), REPRESENTATIVE (represents the speaker's belief of something). Hachey and Grover BIBREF13 used SVM and maximum entropy classifiers to classify sentences in legal documents into classes such as FACT, PROCEEDINGS, BACKGROUND, FRAMING, DISPOSAL; see also BIBREF14 . Deshpande et al. BIBREF15 proposes unsupervised linguistic patterns to classify sentences into classes SUGGESTION, COMPLAINT. There is much work on a closely related problem viz., classifying sentences in dialogues through dialogue-specific categories called dialogue acts BIBREF16 , which we will not review here. Just as one example, Cotterill BIBREF17 classifies questions in emails into the dialogue acts of YES_NO_QUESTION, WH_QUESTION, ACTION_REQUEST, RHETORICAL, MULTIPLE_CHOICE etc. We could not find much work related to mining of performance appraisals data. Pawar et al. BIBREF18 uses kernel-based classification to classify sentences in both performance appraisal text and product reviews into classes SUGGESTION, APPRECIATION, COMPLAINT. Apte et al. BIBREF6 provides two algorithms for matching the descriptions of goals or tasks assigned to employees to a standard template of model goals. One algorithm is based on the co-training framework and uses goal descriptions and self-appraisal comments as two separate perspectives. The second approach uses semantic similarity under a weak supervision framework. Ramrakhiyani et al. BIBREF5 proposes label propagation algorithms to discover aspects in supervisor assessments in performance appraisals, where an aspect is modelled as a verb-noun pair (e.g. conduct training, improve coding). Dataset In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19. Sentence Classification The PA corpus contains several classes of sentences that are of interest. In this paper, we focus on three important classes of sentences viz., sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION). The strengths or weaknesses are mostly about the performance in work carried out, but sometimes they can be about the working style or other personal qualities. The classes WEAKNESS and SUGGESTION are somewhat overlapping; e.g., a suggestion may address a perceived weakness. Following are two example sentences in each class. STRENGTH: WEAKNESS: SUGGESTION: Several linguistic aspects of these classes of sentences are apparent. 
The subject is implicit in many sentences. The strengths are often mentioned as either noun phrases (NP) with positive adjectives (Excellent technology leadership) or positive nouns (engineering strength) or through verbs with positive polarity (dedicated) or as verb phrases containing positive adjectives (delivers innovative solutions). Similarly for weaknesses, where negation is more frequently used (presentations are not his forte), or alternatively, the polarities of verbs (avoid) or adjectives (poor) tend to be negative. However, sometimes the form of both the strengths and weaknesses is the same, typically a stand-alone sentiment-neutral NP, making it difficult to distinguish between them; e.g., adherence to timing or timely closure. Suggestions often have an imperative mood and contain secondary verbs such as need to, should, has to. Suggestions are sometimes expressed using comparatives (better process compliance). We built a simple set of patterns for each of the 3 classes on the POS-tagged form of the sentences. We use each set of these patterns as an unsupervised sentence classifier for that class. If a particular sentence matched with patterns for multiple classes, then we have simple tie-breaking rules for picking the final class. The pattern for the STRENGTH class looks for the presence of positive words / phrases like takes ownership, excellent, hard working, commitment, etc. Similarly, the pattern for the WEAKNESS class looks for the presence of negative words / phrases like lacking, diffident, slow learner, less focused, etc. The SUGGESTION pattern not only looks for keywords like should, needs to but also for POS based pattern like “a verb in the base form (VB) in the beginning of a sentence”. We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation. Comparison with Sentiment Analyzer We also explored whether a sentiment analyzer can be used as a baseline for identifying the class labels STRENGTH and WEAKNESS. We used an implementation of sentiment analyzer from TextBlob to get a polarity score for each sentence. Table TABREF13 shows the distribution of positive, negative and neutral sentiments across the 3 class labels STRENGTH, WEAKNESS and SUGGESTION. It can be observed that distribution of positive and negative sentiments is almost similar in STRENGTH as well as SUGGESTION sentences, hence we can conclude that the information about sentiments is not much useful for our classification problem. Discovering Clusters within Sentence Classes After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . 
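Before moving on to clustering, the unsupervised pattern-based classifier described above can be illustrated with a keyword-only simplification; the cue lists are abbreviated, the tie-breaking order is arbitrary (the paper does not spell out its tie-breaking rules), and the POS-based pattern for suggestions (a base-form verb at the start of the sentence) is omitted.

```python
# Simplified, keyword-only version of the unsupervised pattern classifier for
# STRENGTH / WEAKNESS / SUGGESTION / OTHER sentences.
STRENGTH_CUES = ["takes ownership", "excellent", "hard working", "commitment", "dedicated"]
WEAKNESS_CUES = ["lacking", "diffident", "slow learner", "less focused", "not "]  # "not " is a crude negation cue
SUGGESTION_CUES = ["should", "needs to", "need to", "has to"]

def classify_sentence(sentence):
    s = sentence.lower()
    # Tie-breaking order here is arbitrary: suggestions, then weaknesses, then strengths.
    if any(cue in s for cue in SUGGESTION_CUES):
        return "SUGGESTION"
    if any(cue in s for cue in WEAKNESS_CUES):
        return "WEAKNESS"
    if any(cue in s for cue in STRENGTH_CUES):
        return "STRENGTH"
    return "OTHER"

for s in ["Excellent technology leadership and commitment.",
          "Presentations are not his forte.",
          "Needs to improve process compliance."]:
    print(classify_sentence(s), "|", s)
```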
From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters. PA along Attributes In many organizations, PA is done from a predefined set of perspectives, which we call attributes. Each attribute covers one specific aspect of the work done by the employees. This has the advantage that we can easily compare the performance of any two employees (or groups of employees) along any given attribute. We can correlate various performance attributes and find dependencies among them. We can also cluster employees in the workforce using their supervisor ratings for each attribute to discover interesting insights into the workforce. The HR managers in the organization considered in this paper have defined 15 attributes (Table TABREF20 ). Each attribute is essentially a work item or work category described at an abstract level. For example, FUNCTIONAL_EXCELLENCE covers any tasks, goals or activities related to the software engineering life-cycle (e.g., requirements analysis, design, coding, testing etc.) as well as technologies such as databases, web services and GUI. In the example in Section SECREF4 , the first sentence (which has class STRENGTH) can be mapped to two attributes: FUNCTIONAL_EXCELLENCE and BUILDING_EFFECTIVE_TEAMS. Similarly, the third sentence (which has class WEAKNESS) can be mapped to the attribute INTERPERSONAL_EFFECTIVENESS and so forth. Thus, in order to answer the second question in Section SECREF1 , we need to map each sentence in each of the 3 classes to zero, one, two or more attributes, which is a multi-class multi-label classification problem. We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The number of sentences having 0, 1, 2, or more than 2 attributes are: 321, 1070, 470 and 139 respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2. 
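Returning briefly to the noun-clustering step above, one simple way to cluster nouns by cosine similarity of their word embeddings is a greedy threshold scheme like the following; the exact algorithm used in the paper is not specified, and the 3-dimensional toy embeddings stand in for real pretrained vectors.

```python
# Sketch of threshold-based clustering of strength/weakness nouns by cosine
# similarity of their word embeddings.
import numpy as np

embeddings = {
    "motivation":  np.array([0.9, 0.1, 0.0]),
    "expertise":   np.array([0.8, 0.2, 0.1]),
    "skill":       np.array([0.85, 0.15, 0.05]),
    "timeliness":  np.array([0.1, 0.9, 0.2]),
    "punctuality": np.array([0.15, 0.85, 0.25]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cluster_nouns(nouns, threshold=0.95):
    clusters = []
    for noun in nouns:
        for cluster in clusters:
            # Join the first cluster whose centroid is similar enough.
            centroid = np.mean([embeddings[w] for w in cluster], axis=0)
            if cosine(embeddings[noun], centroid) >= threshold:
                cluster.append(noun)
                break
        else:
            clusters.append([noun])
    return clusters

print(cluster_nouns(list(embeddings)))
# e.g. [['motivation', 'expertise', 'skill'], ['timeliness', 'punctuality']]
```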
Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3 It can be observed that INLINEFORM0 would be undefined if INLINEFORM1 is empty and similarly INLINEFORM2 would be undefined when INLINEFORM3 is empty. Hence, overall precision and recall are computed by averaging over all the instances except where they are undefined. Instance-level F-measure can not be computed for instances where either precision or recall are undefined. Therefore, overall F-measure is computed using the overall precision and recall. Summarization of Peer Feedback using ILP The PA system includes a set of peer feedback comments for each employee. To answer the third question in Section SECREF1 , we need to create a summary of all the peer feedback comments about a given employee. As an example, following are the feedback comments from 5 peers of an employee. The individual sentences in the comments written by each peer are first identified and then POS tags are assigned to each sentence. We hypothesize that a good summary of these multiple comments can be constructed by identifying a set of important text fragments or phrases. Initially, a set of candidate phrases is extracted from these comments and a subset of these candidate phrases is chosen as the final summary, using Integer Linear Programming (ILP). The details of the ILP formulation are shown in Table TABREF36 . As an example, following is the summary generated for the above 5 peer comments. humble nature, effective communication, technical expertise, always supportive, vast knowledge Following rules are used to identify candidate phrases: Various parameters are used to evaluate a candidate phrase for its importance. A candidate phrase is more important: A complete list of parameters is described in detail in Table TABREF36 . There is a trivial constraint INLINEFORM0 which makes sure that only INLINEFORM1 out of INLINEFORM2 candidate phrases are chosen. A suitable value of INLINEFORM3 is used for each employee depending on number of candidate phrases identified across all peers (see Algorithm SECREF6 ). Another set of constraints ( INLINEFORM4 to INLINEFORM5 ) make sure that at least one phrase is selected for each of the leadership attributes. The constraint INLINEFORM6 makes sure that multiple phrases sharing the same headword are not chosen at a time. Also, single word candidate phrases are chosen only if they are adjectives or nouns with lexical category noun.attribute. This is imposed by the constraint INLINEFORM7 . It is important to note that all the constraints except INLINEFORM8 are soft constraints, i.e. there may be feasible solutions which do not satisfy some of these constraints. But each constraint which is not satisfied, results in a penalty through the use of slack variables. These constraints are described in detail in Table TABREF36 . The objective function maximizes the total importance score of the selected candidate phrases. At the same time, it also minimizes the sum of all slack variables so that the minimum number of constraints are broken. INLINEFORM0 : No. of candidate phrases INLINEFORM1 : No. 
of phrases to select as part of summary INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM0 and INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM0 (For determining number of phrases to select to include in summary) Evaluation of auto-generated summaries We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries. Conclusions and Further Work In this paper, we presented an analysis of the text generated in Performance Appraisal (PA) process in a large multi-national IT company. We performed sentence classification to identify strengths, weaknesses and suggestions for improvements found in the supervisor assessments and then used clustering to discover broad categories among them. As this is non-topical classification, we found that SVM with ADWS kernel BIBREF18 produced the best results. We also used multi-class multi-label classification techniques to match supervisor assessments to predefined broad perspectives on performance. Logistic Regression classifier was observed to produce the best results for this topical classification. Finally, we proposed an ILP-based summarization technique to produce a summary of peer feedback comments for a given employee and compared it with manual summaries. The PA process also generates much structured data, such as supervisor ratings. It is an interesting problem to compare and combine the insights from discovered from structured data and unstructured text. Also, we are planning to automatically discover any additional performance attributes to the list of 15 attributes currently used by HR.
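As a heavily stripped-down illustration of the ILP-based phrase selection described above, the following PuLP sketch selects K candidate phrases that maximize a total importance score subject to a one-phrase-per-headword constraint; the candidate phrases, scores and headwords are invented, and the paper's additional soft constraints (per-attribute coverage, slack variables, single-word filters) are omitted.

```python
# Minimal PuLP sketch of ILP-based phrase selection for peer-feedback summaries.
import pulp

candidates = {                        # phrase: (importance score, headword)
    "humble nature":           (0.90, "nature"),
    "effective communication": (0.80, "communication"),
    "communication skills":    (0.70, "communication"),
    "technical expertise":     (0.85, "expertise"),
    "always supportive":       (0.60, "supportive"),
}
K = 3

prob = pulp.LpProblem("peer_feedback_summary", pulp.LpMaximize)
x = {p: pulp.LpVariable(f"x_{i}", cat="Binary") for i, p in enumerate(candidates)}

prob += pulp.lpSum(candidates[p][0] * x[p] for p in candidates)       # maximize importance
prob += pulp.lpSum(x.values()) == K                                   # select exactly K phrases
for head in {h for _, h in candidates.values()}:                      # one phrase per headword
    prob += pulp.lpSum(x[p] for p in candidates if candidates[p][1] == head) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in candidates if x[p].value() == 1])
```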
[ "CLUTO, Carrot2 Lingo", "simple clustering algorithm which uses the cosine similarity between word embeddings" ]
3,039
qasper_e
en
null
14ee47ba1692905a1a19d0ea0d07143d75ba33030668f927
What type of neural model was used?
Introduction Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents. With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. [1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy. Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain. Related Work Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. 
Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions. Our work is also related to reading comprehension in the open domain, which is frequently based upon Wikipedia passages BIBREF16, BIBREF17, BIBREF15, BIBREF30 and news articles BIBREF20, BIBREF31, BIBREF32. Table.TABREF4 presents the desirable attributes our dataset shares with past approaches. This work is also tied into research in applying NLP approaches to legal documents BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39. While privacy policies have legal implications, their intended audience consists of the general public rather than individuals with legal expertise. This arrangement is problematic because the entities that write privacy policies often have different goals than the audience. feng2015applying, tan-EtAl:2016:P16-1 examine question answering in the insurance domain, another specialized domain similar to privacy, where the intended audience is the general public. Data Collection We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users). Data Collection ::: Crowdsourced Question Elicitation The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document. Instead, crowdworkers are presented with public information about a mobile application available on the Google Play Store including its name, description and navigable screenshots. Figure FIGREF9 shows an example of our user interface. Crowdworkers are asked to imagine they have access to a trusted third-party privacy assistant, to whom they can ask any privacy question about a given mobile application. We use the Amazon Mechanical Turk platform and recruit crowdworkers who have been conferred “master” status and are located within the United States of America. 
Turkers are asked to provide five questions per mobile application, and are paid $2 per assignment, taking ~eight minutes to complete the task. Data Collection ::: Answer Selection To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked. Data Collection ::: Analysis Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts. Table TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document. Data Collection ::: Analysis ::: Categories of Questions Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49: First Party Collection/Use: What, why and how information is collected by the service provider Third Party Sharing/Collection: What, why and how information shared with or collected by third parties Data Security: Protection measures for user information Data Retention: How long user information will be stored User Choice/Control: Control options available to users User Access, Edit and Deletion: If/how users can access, edit or delete information Policy Change: Informing users if policy information has been changed International and Specific Audiences: Practices pertaining to a specific group of users Other: General text, contact information or practices not covered by other categories. For each question, domain experts indicate one or more relevant OPP-115 categories. We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. 
First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant. Data Collection ::: Analysis ::: Answer Validation When do experts disagree? We would like to analyze the reasons for potential disagreement on the annotation task, to ensure disagreements arise due to valid differences in opinion rather than lack of adequate specification in annotation guidelines. It is important to note that the annotators are experts rather than crowdworkers. Accordingly, their judgements can be considered valid, legally-informed opinions even when their perspectives differ. For the sake of this question we randomly sample 100 instances in the test data and analyze them for likely reasons for disagreements. We consider a disagreement to have occurred when more than one expert does not agree with the majority consensus. By disagreement we mean there is no overlap between the text identified as relevant by one expert and another. We find that the annotators agree on the answer for 74% of the questions, even if the supporting evidence they identify is not identical i.e full overlap. They disagree on the remaining 26%. Sources of apparent disagreement correspond to situations when different experts: have differing interpretations of question intent (11%) (for example, when a user asks 'who can contact me through the app', the questions admits multiple interpretations, including seeking information about the features of the app, asking about first party collection/use of data or asking about third party collection/use of data), identify different sources of evidence for questions that ask if a practice is performed or not (4%), have differing interpretations of policy content (3%), identify a partial answer to a question in the privacy policy (2%) (for example, when the user asks `who is allowed to use the app' a majority of our annotators decline to answer, but the remaining annotators highlight partial evidence in the privacy policy which states that children under the age of 13 are not allowed to use the app), and other legitimate sources of disagreement (6%) which include personal subjective views of the annotators (for example, when the user asks `is my DNA information used in any way other than what is specified', some experts consider the boilerplate text of the privacy policy which states that it abides to practices described in the policy document as sufficient evidence to answer this question, whereas others do not). Experimental Setup We evaluate the ability of machine learning methods to identify relevant evidence for questions in the privacy domain. We establish baselines for the subtask of deciding on the answerability (§SECREF33) of a question, as well as the overall task of identifying evidence for questions from policies (§SECREF37). We describe aspects of the question that can render it unanswerable within the privacy domain (§SECREF41). Experimental Setup ::: Answerability Identification Baselines We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline. SVM: We define 3 sets of features to characterize each question. 
The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions. BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128. Experimental Setup ::: Privacy Question Answering Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy. Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference. Experimental Setup ::: Privacy Question Answering ::: Baselines We describe baselines on this task, including a human performance baseline. No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable). Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline. Results and Discussion The results of the answerability baselines are presented in Table TABREF31, and on answer sentence selection in Table TABREF32. We observe that bert exhibits the best performance on a binary answerability identification task. 
However, most baselines considerably exceed the performance of a majority-class baseline. This suggests considerable information in the question, indicating it's possible answerability within this domain. Table.TABREF32 describes the performance of our baselines on the answer sentence selection task. The No-answer (NA) baseline performs at 28 F1, providing a lower bound on performance at this task. We observe that our best-performing baseline, Bert + Unanswerable achieves an F1 of 39.8. This suggest that bert is capable of making some progress towards answering questions in this difficult domain, while still leaving considerable headroom for improvement to reach human performance. Bert + Unanswerable performance suggests that incorporating information about answerability can help in this difficult domain. We examine this challenging phenomena of unanswerability further in Section . Results and Discussion ::: Error Analysis Disagreements are analyzed based on the OPP-115 categories of each question (Table.TABREF34). We compare our best performing BERT variant against the NA model and human performance. We observe significant room for improvement across all categories of questions but especially for first party, third party and data retention categories. We analyze the performance of our strongest BERT variant, to identify classes of errors and directions for future improvement (Table.8). We observe that a majority of answerability mistakes made by the BERT model are questions which are in fact answerable, but are identified as unanswerable by BERT. We observe that BERT makes 124 such mistakes on the test set. We collect expert judgments on relevance, subjectivity , silence and information about how likely the question is to be answered from the privacy policy from our experts. We find that most of these mistakes are relevant questions. However many of them were identified as subjective by the annotators, and at least one annotator marked 19 of these questions as having no answer within the privacy policy. However, only 6 of these questions were unexpected or do not usually have an answer in privacy policies. These findings suggest that a more nuanced understanding of answerability might help improve model performance in his challenging domain. Results and Discussion ::: What makes Questions Unanswerable? We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability: Incomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible. Relevance: Is this question in the scope of what could be answered by reading the privacy policy. Ill-formedness: Is this question ambiguous or vague. An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition. Silence: Other policies answer this type of question but this one does not. Atypicality: The question is of a nature such that it is unlikely for any policy policy to have an answer to the question. Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 
4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehendable questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., 'has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `'will the app consume much space?'). In the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving user's information seeking intent. We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to a criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions. Conclusion We present PrivacyQA, the first significant corpus of privacy policy questions and more than 3500 expert annotations of relevant answers. The goal of this work is to promote question-answering research in the specialized privacy domain, where it can have large real-world impact. Strong neural baselines on PrivacyQA achieve a performance of only 39.8 F1 on this corpus, indicating considerable room for future research. Further, we shed light on several important considerations that affect the answerability of questions. 
We hope this contribution leads to multidisciplinary efforts to precisely understand user intent and reconcile it with information in policy documents, from both the privacy and NLP communities. Acknowledgements This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study.
[ "Bert + Unanswerable", "CNN, BERT" ]
3,845
qasper_e
en
null
d317e4799402ebdb0a7b6e80fa58ca1074d86a15403492af
what are the pivot-based baselines?
Introduction Although Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\rightarrow $Spanish) is usually accomplished with pivoting through a rich-resource language (such as English), i.e., Arabic (source) sentence is translated to English (pivot) first which is later translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors. One common alternative to avoid pivoting in NMT is transfer learning BIBREF6, BIBREF7, BIBREF8, BIBREF9 which leverages a high-resource pivot$\rightarrow $target model (parent) to initialize a low-resource source$\rightarrow $target model (child) that is further optimized with a small amount of available parallel data. Although this approach has achieved success in some low-resource language pairs, it still performs very poorly in extremely low-resource or zero-resource translation scenario. Specifically, BIBREF8 reports that without any child model training data, the performance of the parent model on the child test set is miserable. In this work, we argue that the language space mismatch problem, also named domain shift problem BIBREF10, brings about the zero-shot translation failure in transfer learning. It is because transfer learning has no explicit training process to guarantee that the source and pivot languages share the same feature distributions, causing that the child model inherited from the parent model fails in such a situation. For instance, as illustrated in the left of Figure FIGREF1, the points of the sentence pair with the same semantics are not overlapping in source space, resulting in that the shared decoder will generate different translations denoted by different points in target space. Actually, transfer learning for NMT can be viewed as a multi-domain problem where each source language forms a new domain. Minimizing the discrepancy between the feature distributions of different source languages, i.e., different domains, will ensure the smooth transition between the parent and child models, as shown in the right of Figure FIGREF1. One way to achieve this goal is the fine-tuning technique, which forces the model to forget the specific knowledge from parent data and learn new features from child data. However, the domain shift problem still exists, and the demand of parallel child data for fine-tuning heavily hinders transfer learning for NMT towards the zero-resource setting. In this paper, we explore the transfer learning in a common zero-shot scenario where there are a lot of source$\leftrightarrow $pivot and pivot$\leftrightarrow $target parallel data but no source$\leftrightarrow $target parallel data. In this scenario, we propose a simple but effective transfer approach, the key idea of which is to relieve the burden of the domain shift problem by means of cross-lingual pre-training. To this end, we firstly investigate the performance of two existing cross-lingual pre-training methods proposed by BIBREF11 in zero-shot translation scenario. Besides, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\leftrightarrow $pivot bilingual data to obtain a universal encoder for different languages. 
Once the universal encoder is constructed, we only need to train the pivot$\rightarrow $target model and then test this model in source$\rightarrow $target direction directly. The main contributions of this paper are as follows: We propose a new transfer learning approach for NMT which uses the cross-lingual language model pre-training to enable a high performance on zero-shot translation. We propose a novel pre-training method called BRLM, which can effectively alleviates the distance between different source language spaces. Our proposed approach significantly improves zero-shot translation performance, consistently surpassing pivoting and multilingual approaches. Meanwhile, the performance on supervised translation direction remains the same level or even better when using our method. Related Work In recent years, zero-shot translation in NMT has attracted widespread attention in academic research. Existing methods are mainly divided into four categories: pivot-based method, transfer learning, multilingual NMT, and unsupervised NMT. Pivot-based Method is a common strategy to obtain a source$\rightarrow $target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former firstly translates a source language into the pivot language which is later translated to the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14. Although the pivot-based methods can achieve not bad performance, it always falls into a computation-expensive and parameter-vast dilemma of quadratic growth in the number of source languages, and suffers from the error propagation problem BIBREF15. Transfer Learning is firstly introduced for NMT by BIBREF6, which leverages a high-resource parent model to initialize the low-resource child model. On this basis, BIBREF7 and BIBREF8 use shared vocabularies for source/target language to improve transfer learning, while BIBREF16 relieve the vocabulary mismatch by mainly using cross-lingual word embedding. Although these methods are successful in the low-resource scene, they have limited effects in zero-shot translation. Multilingual NMT (MNMT) enables training a single model that supports translation from multiple source languages into multiple target languages, even those unseen language pairs BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Aside from simpler deployment, MNMT benefits from transfer learning where low-resource language pairs are trained together with high-resource ones. However, BIBREF22 point out that MNMT for zero-shot translation easily fails, and is sensitive to the hyper-parameter setting. Also, MNMT usually performs worse than the pivot-based method in zero-shot translation setting BIBREF23. Unsupervised NMT (UNMT) considers a harder setting, in which only large-scale monolingual corpora are available for training. Recently, many methods have been proposed to improve the performance of UNMT, including using denoising auto-encoder, statistic machine translation (SMT) and unsupervised pre-training BIBREF24, BIBREF25, BIBREF26, BIBREF11. Since UNMT performs well between similar languages (e.g., English-German translation), its performance between distant languages is still far from expectation. Our proposed method belongs to the transfer learning, but it is different from traditional transfer methods which train a parent model as starting point. 
Before training a parent model, our approach fully leverages cross-lingual pre-training methods to make all source languages share the same feature space and thus enables a smooth transition for zero-shot translation. Approach In this section, we will present a cross-lingual pre-training based transfer approach. This method is designed for a common zero-shot scenario where there are a lot of source$\leftrightarrow $pivot and pivot$\leftrightarrow $target bilingual data but no source$\leftrightarrow $target parallel data, and the whole training process can be summarized as follows step by step: Pre-train a universal encoder with source/pivot monolingual or source$\leftrightarrow $pivot bilingual data. Train a pivot$\rightarrow $target parent model built on the pre-trained universal encoder with the available parallel data. During the training process, we freeze several layers of the pre-trained universal encoder to avoid the degeneracy issue BIBREF27. Directly translate source sentences into target sentences with the parent model, which benefits from the availability of the universal encoder. The key difficulty of this method is to ensure the intermediate representations of the universal encoder are language invariant. In the rest of this section, we first present two existing methods yet to be explored in zero-shot translation, and then propose a straightforward but effective cross-lingual pre-training method. In the end, we present the whole training and inference protocol for transfer. Approach ::: Masked and Translation Language Model Pretraining Two existing cross-lingual pre-training methods, Masked Language Modeling (MLM) and Translation Language Modeling (TLM), have shown their effectiveness on XNLI cross-lingual classification task BIBREF11, BIBREF28, but these methods have not been well studied on cross-lingual generation tasks in zero-shot condition. We attempt to take advantage of the cross-lingual ability of the two methods for zero-shot translation. Specifically, MLM adopts the Cloze objective of BERT BIBREF29 and predicts the masked words that are randomly selected and replaced with [MASK] token on monolingual corpus. In practice, MLM takes different language monolingual corpora as input to find features shared across different languages. With this method, word pieces shared in all languages have been mapped into a shared space, which makes the sentence representations across different languages close BIBREF30. Since MLM objective is unsupervised and only requires monolingual data, TLM is designed to leverage parallel data when it is available. Actually, TLM is a simple extension of MLM, with the difference that TLM concatenates sentence pair into a whole sentence, and then randomly masks words in both the source and target sentences. In this way, the model can either attend to surrounding words or to the translation sentence, implicitly encouraging the model to align the source and target language representations. Note that although each sentence pair is formed into one sentence, the positions of the target sentence are reset to count form zero. Approach ::: Bridge Language Model Pretraining Aside from MLM and TLM, we propose BRidge Language Modeling (BRLM) to further obtain word-level representation alignment between different languages. 
This method is inspired by the assumption that if the feature spaces of different languages are aligned very well, the masked words in the corrupted sentence can also be guessed by the context of the correspondingly aligned words on the other side. To achieve this goal, BRLM is designed to strengthen the ability to infer words across languages based on alignment information, instead of inferring words within monolingual sentence as in MLM or within the pseudo sentence formed by concatenating sentence pair as in TLM. As illustrated in Figure FIGREF9, BRLM stacks shared encoder over both side sentences separately. In particular, we design two network structures for BRLM, which are divided into Hard Alignment (BRLM-HA) and Soft Alignment (BRLM-SA) according to the way of generating the alignment information. These two structures actually extend MLM into a bilingual scenario, with the difference that BRLM leverages external aligner tool or additional attention layer to explicitly introduce alignment information during model training. Hard Alignment (BRLM-HA). We first use external aligner tool on source$\leftrightarrow $pivot parallel data to extract the alignment information of sentence pair. During model training, given source$\leftrightarrow $pivot sentence pair, BRLM-HA randomly masks some words in source sentence and leverages alignment information to obtain the aligned words in pivot sentence for masked words. Based on the processed input, BRLM-HA adopts the Transformer BIBREF1 encoder to gain the hidden states for source and pivot sentences respectively. Then the training objective of BRLM-HA is to predict the masked words by not only the surrounding words in source sentence but also the encoder outputs of the aligned words. Note that this training process is also carried out in a symmetric situation, in which we mask some words in pivot sentence and obtain the aligned words in the source sentence. Soft Alignment (BRLM-SA). Instead of using external aligner tool, BRLM-SA introduces an additional attention layer to learn the alignment information together with model training. In this way, BRLM-SA avoids the effect caused by external wrong alignment information and enables many-to-one soft alignment during model training. Similar with BRLM-HA, the training objective of BRLM-SA is to predict the masked words by not only the surrounding words in source sentence but also the outputs of attention layer. In our implementation, the attention layer is a multi-head attention layer adopted in Transformer, where the queries come from the masked source sentence, the keys and values come from the pivot sentence. In principle, MLM and TLM can learn some implicit alignment information during model training. However, the alignment process in MLM is inefficient since the shared word pieces only account for a small proportion of the whole corpus, resulting in the difficulty of expanding the shared information to align the whole corpus. TLM also lacks effort in alignment between the source and target sentences since TLM concatenates the sentence pair into one sequence, making the explicit alignment between the source and target infeasible. BRLM fully utilizes the alignment information to obtain better word-level representation alignment between different languages, which better relieves the burden of the domain shift problem. 
Approach ::: Transfer Protocol We consider the typical zero-shot translation scenario in which a high resource pivot language has parallel data with both source and target languages, while source and target languages has no parallel data between themselves. Our proposed cross-lingual pretraining based transfer approach for source$\rightarrow $target zero-shot translation is mainly divided into two phrases: the pretraining phase and the transfer phase. In the pretraining phase, we first pretrain MLM on monolingual corpora of both source and pivot languages, and continue to pretrain TLM or the proposed BRLM on the available parallel data between source and pivot languages, in order to build a cross-lingual encoder shared by the source and pivot languages. In the transfer phase, we train pivot$\rightarrow $target NMT model initialized by the cross-lingually pre-trained encoder, and finally transfer the trained NMT model to source$\rightarrow $target translation thanks to the shared encoder. Note that during training pivot$\rightarrow $target NMT model, we freeze several layers of the cross-lingually pre-trained encoder to avoid the degeneracy issue. For the more complicated scenario that either the source side or the target side has multiple languages, the encoder and the decoder are also shared across each side languages for efficient deployment of translation between multiple languages. Experiments ::: Setup We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance. In all experiments, we use BLEU as the automatic metric for translation evaluation. Experiments ::: Setup ::: Datasets. The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\rightarrow $Es and De$\rightarrow $Fr. For distant language pair Ro$\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33. For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus. Experiments ::: Setup ::: Experimental Details. We use traditional transfer learning, pivot-based method and multilingual NMT as our baselines. 
For the fair comparison, the Transformer-big model with 1024 embedding/hidden units, 4096 feed-forward filter size, 6 layers and 8 heads per layer is adopted for all translation models in our experiments. We set the batch size to 2400 per batch and limit sentence length to 100 BPE tokens. We set the $\text{attn}\_\text{drop}=0$ (a dropout rate on each attention head), which is favorable to the zero-shot translation and has no effect on supervised translation directions BIBREF22. For the model initialization, we use Facebook's cross-lingual pretrained models released by XLM to initialize the encoder part, and the rest parameters are initialized with xavier uniform. We employ the Adam optimizer with $\text{lr}=0.0001$, $t_{\text{warm}\_\text{up}}=4000$ and $\text{dropout}=0.1$. At decoding time, we generate greedily with length penalty $\alpha =1.0$. Regarding MLM, TLM and BRLM, as mentioned in the pre-training phase of transfer protocol, we first pre-train MLM on monolingual data of both source and pivot languages, then leverage the parameters of MLM to initialize TLM and the proposed BRLM, which are continued to be optimized with source-pivot bilingual data. In our experiments, we use MLM+TLM, MLM+BRLM to represent this training process. For the masking strategy during training, following BIBREF29, $15\%$ of BPE tokens are selected to be masked. Among the selected tokens, $80\%$ of them are replaced with [MASK] token, $10\%$ are replaced with a random BPE token, and $10\%$ unchanged. The prediction accuracy of masked words is used as a stopping criterion in the pre-training stage. Besides, we use fastalign tool BIBREF34 to extract word alignments for BRLM-HA. Experiments ::: Main Results Table TABREF19 and TABREF26 report zero-shot results on Europarl and Multi-UN evaluation sets, respectively. We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot then to target in two steps, causing inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pretraining based transfer outperforms transfer method that does not use pretraining by a large margin. Experiments ::: Main Results ::: Results on Europarl Dataset. Regarding comparison between the baselines in table TABREF19, we find that pivoting is the strongest baseline that has significant advantage over other two baselines. Cross-lingual transfer for languages without shared vocabularies BIBREF16 manifests the worst performance because of not using source$\leftrightarrow $pivot parallel data, which is utilized as beneficial supervised signal for the other two baselines. Our best approach of MLM+BRLM-SA achieves the significant superior performance to all baselines in the zero-shot directions, improving by 0.9-4.8 BLEU points over the strong pivoting. 
Meanwhile, in the supervised direction of pivot$\rightarrow $target, our approaches performs even better than the original supervised Transformer thanks to the shared encoder trained on both large-scale monolingual data and parallel data between multiple languages. MLM alone that does not use source$\leftrightarrow $pivot parallel data performs much better than the cross-lingual transfer, and achieves comparable results to pivoting. When MLM is combined with TLM or the proposed BRLM, the performance is further improved. MLM+BRLM-SA performs the best, and is better than MLM+BRLM-HA indicating that soft alignment is helpful than hard alignment for the cross-lingual pretraining. Experiments ::: Main Results ::: Results on MultiUN Dataset. Like experimental results on Europarl, MLM+BRLM-SA performs the best among all proposed cross-lingual pretraining based transfer approaches as shown in Table TABREF26. When comparing systems consisting of one encoder-decoder model for all zero-shot translation, our approaches performs significantly better than MNMT BIBREF19. Although it is challenging for one model to translate all zero-shot directions between multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performances on Es $\rightarrow $ Ar and Es $\rightarrow $ Ru than strong pivoting$_{\rm m}$, which uses MNMT to translate source to pivot then to target in two separate steps with each step receiving supervised signal of parallel corpora. Our approaches surpass pivoting$_{\rm m}$ in all zero-shot directions by adding back translation BIBREF33 to generate pseudo parallel sentences for all zero-shot directions based on our pretrained models such as MLM+BRLM-SA, and further training our universal encoder-decoder model with these pseudo data. BIBREF22 gu2019improved introduces back translation into MNMT, while we adopt it in our transfer approaches. Finally, our best MLM+BRLM-SA with back translation outperforms pivoting$_{\rm m}$ by 2.4 BLEU points averagely, and outperforms MNMT BIBREF22 by 4.6 BLEU points averagely. Again, in supervised translation directions, MLM+BRLM-SA with back translation also achieves better performance than the original supervised Transformer. Experiments ::: Analysis ::: Sentence Representation. We first evaluate the representational invariance across languages for all cross-lingual pre-training methods. Following BIBREF23, we adopt max-pooling operation to collect the sentence representation of each encoder layer for all source-pivot sentence pairs in the Europarl validation sets. Then we calculate the cosine similarity for each sentence pair and average all cosine scores. As shown in Figure FIGREF27, we can observe that, MLM+BRLM-SA has the most stable and similar cross-lingual representations of sentence pairs on all layers, while it achieves the best performance in zero-shot translation. This demonstrates that better cross-lingual representations can benefit for the process of transfer learning. Besides, MLM+BRLM-HA is not as superior as MLM+BRLM-SA and even worse than MLM+TLM on Fr-En, since MLM+BRLM-HA may suffer from the wrong alignment knowledge from an external aligner tool. We also find an interesting phenomenon that as the number of layers increases, the cosine similarity decreases. Experiments ::: Analysis ::: Contextualized Word Representation. 
We further sample an English-Russian sentence pair from the MultiUN validation sets and visualize the cosine similarity between hidden states of the top encoder layer to further investigate the difference of all cross-lingual pre-training methods. As shown in Figure FIGREF38, the hidden states generated by MLM+BRLM-SA have higher similarity for two aligned words. It indicates that MLM+BRLM-SA can gain better word-level representation alignment between source and pivot languages, which better relieves the burden of the domain shift problem. Experiments ::: Analysis ::: The Effect of Freezing Parameters. To freeze parameters is a common strategy to avoid catastrophic forgetting in transfer learning BIBREF27. Table TABREF43 shows the performance of transfer learning with freezing different layers on MultiUN test set, in which En$\rightarrow $Ru denotes the parent model, Ar$\rightarrow $Ru and Es$\rightarrow $Ru are two child models, and all models are based on MLM+BRLM-SA. We can find that updating all parameters during training will cause a notable drop on the zero-shot direction due to the catastrophic forgetting. On the contrary, freezing all the parameters leads to the decline on supervised direction because the language features extracted during pre-training is not sufficient for MT task. Freezing the first four layers of the transformer shows the best performance and keeps the balance between pre-training and fine-tuning. Conclusion In this paper, we propose a cross-lingual pretraining based transfer approach for the challenging zero-shot translation task, in which source and target languages have no parallel data, while they both have parallel data with a high resource pivot language. With the aim of building the language invariant representation between source and pivot languages for smooth transfer of the parent model of pivot$\rightarrow $target direction to the child model of source$\rightarrow $target direction, we introduce one monolingual pretraining method and two bilingual pretraining methods to construct an universal encoder for the source and pivot languages. Experiments on public datasets show that our approaches significantly outperforms several strong baseline systems, and manifest the language invariance characteristics in both sentence level and word level neural representations. Acknowledgments We would like to thank the anonymous reviewers for the helpful comments. This work was supported by National Key R&D Program of China (Grant No. 2016YFE0132100), National Natural Science Foundation of China (Grant No. 61525205, 61673289). This work was also partially supported by Alibaba Group through Alibaba Innovative Research Program and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions.
[ "pivoting, pivoting$_{\\rm m}$", "firstly translates a source language into the pivot language which is later translated to the target language" ]
3,815
qasper_e
en
null
a3db996005cb6103e9e3a8d19712c6866285e0df89a1b264
what datasets were used?
Introduction Emotion detection has long been a topic of interest to scholars in natural language processing (NLP) domain. Researchers aim to recognize the emotion behind the text and distribute similar ones into the same group. Establishing an emotion classifier can not only understand each user's feeling but also be extended to various application, for example, the motivation behind a user's interests BIBREF0. Based on releasing of large text corpus on social media and the emotion categories proposed by BIBREF1, BIBREF2, numerous models have provided and achieved fabulous precision so far. For example, DeepMoji BIBREF3 which utilized transfer learning concept to enhance emotions and sarcasm understanding behind the target sentence. CARER BIBREF4 learned contextualized affect representations to make itself more sensitive to rare words and the scenario behind the texts. As methods become mature, text-based emotion detecting applications can be extended from a single utterance to a dialogue contributed by a series of utterances. Table TABREF2 illustrates the difference between single utterance and dialogue emotion recognition. The same utterances in Table TABREF2, even the same person said the same sentence, the emotion it convey may be various, which may depend on different background of the conversation, tone of speaking or personality. Therefore, for emotion detection, the information from preceding utterances in a conversation is relatively critical. In SocialNLP 2019 EmotionX, the challenge is to recognize emotions for all utterances in EmotionLines dataset, a dataset consists of dialogues. According to the needs for considering context at the same time, we develop two classification models, inspired by bidirectional encoder representations from transformers (BERT) BIBREF5, FriendsBERT and ChatBERT. In this paper, we introduce our approaches including causal utterance modeling, model pre-training, and fine-turning. Dataset EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral. If none of the emotion gets more than three votes, the utterance will be marked as “non-neutral”. For the datasets, there are properties worth additional mentioning. Although Friends and EmotionPush share the same data format, they are quite different in nature. Friends is a speech-based dataset which is annotated dialogues from the TV sitcom. It means most of the utterances are generated by the a few main characters. The personality of a character often affects the way of speaking, and therefore “who is the speaker" might provide extra clues for emotion prediction. In contrast, EmotionPush does not have this trait due to the anonymous mechanism. In addition, features such as typo, hyperlink, and emoji that only appear in chat-based data will need some domain-specific techniques to process. Incidentally, the objective of the challenge is to predict the emotion for each utterance. 
Just, according to EmotionX 2019 specification, there are only four emotions be selected as our label candidates, which are Joy, Sadness, Anger, and Neutral. These emotions will be considered during performance evaluation. The technical detail will also be introduced and discussed in following Section SECREF13 and Section SECREF26. Model Description For this challenge, we adapt BERT which is proposed by BIBREF5 to help understand the context at the same time. Technically, BERT, designed on end-to-end architecture, is a deep pre-trained transformer encoder that dynamically provides language representation and BERT already achieved multiple state-of-the-art results on GLUE benchmark BIBREF7 and many tasks. A quick recap for BERT's architecture and its pre-training tasks will be illustrated in the following subsections. Model Description ::: Model Architecture BERT, the Bidirectional Encoder Representations from Transformers, consists of several transformer encoder layers that enable the model to extract very deep language features on both token-level and sentence-level. Each transformer encoder contains multi-head self-attention layers that provide ability to learn multiple attention feature of each word from their bidirectional context. The transformer and its self-attention mechanism are proposed by BIBREF8. This self-attention mechanism can be interpreted as a key-value mapping given query. By given the embedding vector for token input, the query ($Q$), key ($K$) and value ($V$) are produced by the projection from each three parameter matrices where $W^Q \in \mathbb {R}^{d_{{\rm model}} \times d_{k}}, W^K \in \mathbb {R}^{d_{\rm model} \times d_{k}}$ and $W^V \in \mathbb {R}^{d_{\rm model} \times d_{v}}$. The self-attention BIBREF8 is formally represented as: The $ d_k = d_v = d_{\rm model} = 1024$ in BERT large version and 768 in BERT base version. Once model can extract attention feature, we can extend one self-attention into multi-head self-attention, this extension makes sub-space features can be extracted in same time by this multi-head configuration. Overall, the multi-attention mechanism is adopt for each transformer encoder, and several of encoder layer will be stacked together to form a deep transformer encoder. For the model input, BERT allow us take one sentence as input sequence or two sentences together as one input sequence, and the maximum length of input sequence is 512. The way that BERT was designed is for giving model the sentence-level and token-level understanding. In two sentences case, a special token ([SEP]) will be inserted between two sentences. In addition, the first input token is also a special token ([CLS]), and its corresponding ouput will be vector place for classification during fine-tuning. The outputs of the last encoder layer corresponding to each input token can be treated as word representations for each token, and the word representation of the first token ([CLS]) will be consider as classification (output) representation for further fine-tuning tasks. In BERT, this vector is denoted as $C \in \mathbb {R}^{d_{\rm model}} $, and a classification layer is denoted as $ W \in \mathbb {R}^{K \times d_{\rm model}}$, where $K$ is number of classification labels. Finally, the prediction $P$ of BERT is represented as $P = {\rm softmax}(CW^T)$. Model Description ::: Pre-training Tasks In pre-training, intead of using unidirectional language models, BERT developed two pre-training tasks: (1) Masked LM (cloze test) and (2) Next Sentence Prediction. 
At the first pre-training task, bidirectional language modeling can be done at this cloze-like pre-training. In detail, 15% tokens of input sequence will be masked at random and model need to predict those masked tokens. The encoder will try to learn contextual representations from every given tokens due to masking tokens at random. Model will not know which part of the input is going to be masked, so that the information of each masked tokens should be inferred by remaining tokens. At Next Sentence Prediction, two sentences concatenated together will be considered as model input. In order to give model a good nature language understanding, knowing relationship between sentence is one of important abilities. When generating input sequences, 50% of time the sentence B is actually followed by sentence A, and rest 50% of the time the sentence B will be picked randomly from dataset, and model need to predict if the sentence B is next sentence of sentence A. That is, the attention information will be shared between sentences. Such sentence-level understanding may have difficulties to be learned at first pre-training task (Masked LM), therefore, the pre-training task (NSP) is developed as second training goal to capture the cross sentence relationship. In this competition, limited by the size of dataset and the challenge in contextual emotion recognition, we consider BERT with both two pre-training tasks can give a good starting point to extract emotion changing during dialogue-like conversation. Especially the second pre-training task, it might be more important for dialogue-like conversation where the emotion may various by the context of continuous utterances. That is, given a set of continues conversations, the emotion of current utterance might be influenced by previous utterance. By this assumption and with supporting from the experiment results of BERT, we can take sentence A as one-sentence context and consider sentence B as the target sentence for emotion prediction. The detail will be described in Section SECREF4. Methodology The main goal of the present work is to predict the emotion of utterance within the dialogue. Following are four major difficulties we concern about: The emotion of the utterances depends not only on the text but also on the interaction happened earlier. The source of the two datasets are different. Friends is speech-based dialogues and EmotionPush is chat-based dialogues. It makes datasets possess different characteristics. There are only $1,000$ dialogues in both training datasets which are not large enough for the stability of training a complex neural-based model. The prediction targets (emotion labels) are highly unbalanced. The proposed approach is summarized in Figure FIGREF3, which aims to overcome these challenges. The framework could be separated into three steps and described as follow: Methodology ::: Causal Utterance Modeling Given a dialogue $D^{(i)}$ which includes sequence of utterances denoted as $D^{(i)}=(u^{(i)}_{1}, u^{(i)}_{2}, ..., u^{(i)}_{n})$, where $i$ is the index in dataset and $n$ is the number of utterances in the given dialogue. In order to conserve the emotional information of both utterance and conversation, we rearrange each two consecutive utterances $u_{t}, u_{t-1}$ into a single sentence representation $x_{t}$ as The corresponding sentence representation corpus $X^{(i)}$ are denoted as $X^{(i)}=(x^{(i)}_{1}, x^{(i)}_{2}, ..., x^{(i)}_{n})$. 
Note that the first utterance within a conversation does not have its causal utterance (previous sentence), therefore, the causal utterance will be set as [None]. A practical example of sentence representation is shown in Table TABREF11. Since the characteristics of two datasets are not identical, we customize different causal utterance modeling strategies to refine the information in text. For Friends, there are two specific properties. The first one is that most dialogues are surrounding with the six main characters, including Rachel, Monica, Phoebe, Joey, Chandler, and Ross. The utterance ratio of given by the six roles is up to $83.4\%$. Second, the personal characteristics of the six characters are very clear. Each leading role has its own emotion undulated rule. To make use of these features, we introduce the personality tokenization which help learning the personality of the six characters. Personality tokenization concatenate the speaker and says tokens before the input utterance if the speaker is one of the six characters. The example is shown in Table TABREF12. For EmotionPush, the text are informal chats which including like slang, acronym, typo, hyperlink, and emoji. Another characteristic is that the specific name entities are tokenized with random index. (e.g. “organization_80”, “person_01”, and “time_12”). We consider some of these informal text are related to expressing emotion such as repeated typing, purposed capitalization, and emoji (e.g. “:D”, “:(”, and “<3”)). Therefore, we keep most informal expressions but only process hyperlinks, empty utterance, and name entities by unifying the tokens. Methodology ::: Model Pre-training Since the size of both datasets are not large enough for complex neural-based model training as well as BERT model is only pre-train on formal text datasets, the issues of overfitting and domain bias are important considerations for design the pre-training process. To avoid our model overfitting on the training data and increase the understanding of informal text, we adapted BERT and derived two models, namely FriendsBERT and ChatBERT, with different pre-training tasks before the formal training process for Friends and EmotionPush dataset, respectively. The pre-training strategies are described below. For pre-training FriendsBERT, we collect the completed scripts of all ten seasons of Friends TV shows from emorynlp which includes 3,107 scenes within 61,309 utterances. All the utterances are followed the preprocessing methods mentions above to compose the corpus for Masked language model pre-training task. The consequent utterances in the same scenes are considered as the consequent sentences to pre-train the Next Sentence Prediction task. In the pre-training process, the training loss is the sum of the mean likelihood of two pre-train tasks. For pre-training ChatBERT, we pre-train our model on the Twitter dataset, since the text and writing style on Twitter are close to the chat text where both may involved with many informal words or emoticons as well. The Twitter emotion dataset, 8 basic emotions from emotion wheel BIBREF1, was collected by twitter streaming API with specific emotion-related hashtags, such as #anger, #joy, #cry, #sad and etc. The hashtags in tweets are treated as emotion label for model fine-tuning. The tweets were fine-grined processing followed the rules in BIBREF9, BIBREF4, including duplicate tweets removing, the emotion hashtags must appearing in the last position of a tweet, and etc. 
The statis of tweets were summarized in Table TABREF17. Each tweet and corresponding emotion label composes an emotion classification dataset for pre-training. Methodology ::: Fine-tuning Since our emotion recognition task is treated as a sequence-level classification task, the model would be fine-tuned on the processed training data. Following the BERT construction, we take the first embedding vector which corresponds to the special token [CLS] from the final hidden state of the Transformer encoder. This vector represents the embedding vector of the corresponding conversation utterances which is denoted as $\mathbf {C} \in \mathbb {R}^{H}$, where $H$ is the embedding size. A dense neural layer is treated as a classification layer which consists of parameters $\mathbf {W} \in \mathbb {R}^{K\times H}$ and $\mathbf {b} \in \mathbb {R}^{K}$, where $K$ is the number of emotion class. The emotion prediction probabilities $\mathbf {P} \in \mathbb {R}^{K}$ are computed by a softmax activation function as All the parameters in BERT and the classification layer would be fine-turned together to minimize the Negative Log Likelihood (NLL) loss function, as Equation (DISPLAY_FORM22), based on the ground truth emotion label $c$. In order to tackle the problem of highly unbalanced emotion labels, we apply weighted balanced warming on NLL loss function, as Equation (DISPLAY_FORM23), in the first epoch of fine-tuning procedure. where $\mathbf {w}$ are the weights of corresponding emotion label $c$ which are computed and normalize by the frequency as By adding the weighted balanced warming on NLL loss, the model could learn to predict the minor emotions (e.g. anger and sadness) earlier and make the training process more stable. Since the major evaluation metrics micro F1-score is effect by the number of each label, we only apply the weighted balanced warming in first epoch to optimize the performance. Experiments Since the EmotionX challenge only provided the gold labels in training data, we pick the best performance model (weights) to predict the testing data. In this section, we present the experiment and evaluation results. Experiments ::: Experimental Setup The EmotionX challenge consists of $1,000$ dialogues for both Friends and EmotionPush. In all of our experiments, each dataset is separated into top 800 dialogues for training and last 200 dialogues for validation. Since the EmotionX challenge considers only the four emotions (anger, joy, neutral, and sadness) in the evaluation stage, we ignore all the data point corresponding to other emotions directly. The details of emotions distribution are shown in Table TABREF18. The hyperparameters and training setup of our models (FriendsBERT and ChatBERT) are shown in Table TABREF25. Some common and easily implemented methods are selected as the baselines embedding methods and classification models. The baseline embedding methods are including bag-of-words (BOW), term frequency–inverse document frequency (TFIDF), and neural-based word embedding. The classification models are including Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe BIBREF11, and our proposed model. All the experiment results are based on the best performances of validation results. Experiments ::: Performance The experiment results of validation on Friends are shown in Table TABREF19. The proposed model and baselines are evaluated based on the Precision (P.), Recall (R.), and F1-measure (F1). 
For the traditional baselines, namely BOW and TFIDF, we observe that they achieve surprisingly high overall F1 scores of around $0.81$; however, their scores for Anger and Sadness are lower. This indicates that traditional approaches tend to predict the labels with large sample sizes, such as Joy and Neutral, but fail to handle scarce samples even when an ensemble random forest classifier is adopted. To mitigate this unbalanced learning, we apply the weighted loss mechanism to both TextCNN and the causal-modeling TextCNN (C-TextCNN); these models suffer less than the traditional baselines and achieve a somewhat more balanced performance, with improvements of around 15% and 7% on Anger and Sadness, respectively. We then apply causal utterance modeling to the original TextCNN, feeding the previous utterance as well as the target utterance into the model. Causal utterance modeling improves C-TextCNN over TextCNN by 6%, 2%, and 1% on Anger, Joy, and overall F1, respectively. Motivated by these preliminary experiments, the proposed FriendsBERT also adopts both the weighted loss and causal utterance modeling. Compared to the original BERT with single-sentence input (FriendsBERT-base-s), the proposed FriendsBERT-base improves by 1% on Joy and overall F1, and by 2% on Sadness. For the final validation performance, our proposed approach achieves the highest scores, which are $0.85$ and $0.86$ for FriendsBERT-base and FriendsBERT-large, respectively. Overall, the proposed FriendsBERT successfully captures sentence-level context-aware information and outperforms all the baselines, achieving high performance not only on labels with many samples but also on labels with few samples. Similar settings are also applied to the EmotionPush dataset for the final evaluation. Experiments ::: Evaluation Results The testing dataset consists of 240 dialogues, comprising $3,296$ and $3,536$ utterances for Friends and EmotionPush, respectively. We re-train our FriendsBERT and ChatBERT on the first 920 training dialogues and predict the evaluation results using the model with the best validation performance. The results are shown in Table TABREF29 and Table TABREF30. The presented method achieves $81.5\%$ and $88.5\%$ micro F1-score on the testing data of Friends and EmotionPush, respectively. Conclusion and Future work In the present work, we propose FriendsBERT and ChatBERT for the multi-utterance emotion recognition task on the EmotionLines dataset. The proposed models are adapted from BERT BIBREF5 with three main improvements to the model training procedure: the causal utterance modeling mechanism, task-specific model pre-training, and an adapted weighted loss. Causal utterance modeling takes advantage of sentence-level context information during model inference. The specific model pre-training helps counteract the bias between different text domains. The weighted loss keeps our model from predicting only the labels with large sample sizes. The effectiveness and generalizability of the proposed methods are demonstrated by the experiments. In future work, we plan to include the conditional probabilistic constraint $P ({\rm Emo}_{B} | \hat{\rm Emo}_{A})$: the model should predict the emotion based on an understanding of the context emotions, which might guide the model better than predicting the emotion of ${\rm Sentence}_B$ directly. In addition, due to the limitation of the BERT input format, handling a variable number of input sentences becomes an important design requirement for our future work.
Also, developing personality embeddings is another direction of future work for emotion recognition. Such personality embeddings would act as sentence-level embeddings injected into the word embeddings, and this additional information could potentially contribute further improvement.
[ "Friends, EmotionPush", "EmotionLines BIBREF6" ]
3,178
qasper_e
en
null
2dd2de242f407468d7b403c1835dc62579f875dd09a00801
What evaluation protocols are provided?
Introduction Nowadays deep learning techniques outperform the other conventional methods in most of the speech-related tasks. Training robust deep neural networks for each task depends on the availability of powerful processing GPUs, as well as standard and large scale datasets. In text-independent speaker verification, large-scale datasets are available, thanks to the NIST SRE evaluations and other data collection projects such as VoxCeleb BIBREF0. In text-dependent speaker recognition, experiments with end-to-end architectures conducted on large proprietary databases have demonstrated their superiority over traditional approaches BIBREF1. Yet, contrary to text-independent speaker recognition, text-dependent speaker recognition lacks large-scale publicly available databases. The two most well-known datasets are probably RSR2015 BIBREF2 and RedDots BIBREF3. The former contains speech data collected from 300 individuals in a controlled manner, while the latter is used primarily for evaluation rather than training, due to its small number of speakers (only 64). Motivated by this lack of large-scale dataset for text-dependent speaker verification, we chose to proceed with the collection of the DeepMine dataset, which we expect to become a standard benchmark for the task. Apart from speaker recognition, large amounts of training data are required also for training automatic speech recognition (ASR) systems. Such datasets should not only be large in size, they should also be characterized by high variability with respect to speakers, age and dialects. While several datasets with these properties are available for languages like English, Mandarin, French, this is not the case for several other languages, such as Persian. To this end, we proceeded with collecting a large-scale dataset, suitable for building robust ASR models in Persian. The main goal of the DeepMine project was to collect speech from at least a few thousand speakers, enabling research and development of deep learning methods. The project started at the beginning of 2017, and after designing the database and the developing Android and server applications, the data collection began in the middle of 2017. The project finished at the end of 2018 and the cleaned-up and final version of the database was released at the beginning of 2019. In BIBREF4, the running project and its data collection scenarios were described, alongside with some preliminary results and statistics. In this paper, we announce the final and cleaned-up version of the database, describe its different parts and provide various evaluation setups for each part. Finally, since the database was designed mainly for text-dependent speaker verification purposes, some baseline results are reported for this task on the official evaluation setups. Additional baseline results are also reported for Persian speech recognition. However, due to the space limitation in this paper, the baseline results are not reported for all the database parts and conditions. They will be defined and reported in the database technical documentation and in a future journal paper. Data Collection DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. 
The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4. Data Collection ::: Post-Processing In order to clean-up the database, the main post-processing step was to filter out problematic utterances. Possible problems include speaker word insertions (e.g. repeating some part of a phrase), deletions, substitutions, and involuntary disfluencies. To detect these, we implemented an alignment stage, similar to the second alignment stage in the LibriSpeech project BIBREF5. In this method, a custom decoding graph was generated for each phrase. The decoding graph allows for word skipping and word insertion in the phrase. For text-dependent and text-prompted parts of the database, such errors are not allowed. Hence, any utterances with errors were removed from the enrollment and test lists. For the speech recognition part, a sub-part of the utterance which is correctly aligned to the corresponding transcription is kept. After the cleaning step, around 190 thousand utterances with full transcription and 10 thousand with sub-part alignment have remained in the database. Data Collection ::: Statistics After processing the database and removing problematic respondents and utterances, 1969 respondents remained in the database, with 1149 of them being male and 820 female. 297 of the respondents could not read English and have therefore read only the Persian prompts. About 13200 sessions were recorded by females and similarly, about 9500 sessions by males, i.e. women are over-represented in terms of sessions, even though their number is 17% smaller than that of males. Other useful statistics related to the database are shown in Table TABREF4. The last status of the database, as well as other related and useful information about its availability can be found on its website, together with a limited number of samples. DeepMine Database Parts The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots). This part can also serve for Persian ASR training. Each part is described in more details below. Table TABREF11 shows the number of unique phrases in each part of the database. For the English text-dependent part, the following phrases were selected from part1 of the RedDots database, hence the RedDots can be used as an additional training set for this part: “My voice is my password.” “OK Google.” “Artificial intelligence is for real.” “Actions speak louder than words.” “There is no such thing as a free lunch.” DeepMine Database Parts ::: Part1 - Text-dependent (TD) This part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded. We have created three experimental setups with different numbers of speakers in the evaluation set. 
For each setup, speakers with more recording sessions are included in the evaluation set and the rest of the speakers are used for training in the background set (in the database, all background sets are basically training data). The rows in Table TABREF13 corresponds to the different experimental setups and shows the numbers of speakers in each set. Note that, for English, we have filtered the (Persian native) speakers by the ability to read English. Therefore, there are fewer speakers in each set for English than for Persian. There is a small “dev” set in each setup which can be used for parameter tuning to prevent over-tuning on the evaluation set. For each experimental setup, we have defined several official trial lists with different numbers of enrollment utterances per trial in order to investigate the effects of having different amounts of enrollment data. All trials in one trial list have the same number of enrollment utterances (3 to 6) and only one test utterance. All enrollment utterances in a trial are taken from different consecutive sessions and the test utterance is taken from yet another session. From all the setups and conditions, the 100-spk with 3-session enrollment (3-sess) is considered as the main evaluation condition. In Table TABREF14, the number of trials for Persian 3-sess are shown for the different types of trial in the text-dependent speaker verification (SV). Note that for Imposter-Wrong (IW) trials (i.e. imposter speaker pronouncing wrong phrase), we merely create one wrong trial for each Imposter-Correct (IC) trial to limit the huge number of possible trials for this case. So, the number of trials for IC and IW cases are the same. DeepMine Database Parts ::: Part2 - Text-prompted (TP) For this part, in each session, 3 random sequences of Persian month names are shown to the respondent in two modes: In the first mode, the sequence consists of all 12 months, which will be used for speaker enrollment. The second mode contains a sequence of 3 month names that will be used as a test utterance. In each 8 sessions received by a respondent from the server, there are 3 enrollment phrases of all 12 months (all in just one session), and $7 \times 3$ other test phrases, containing fewer words. For a respondent who can read English, 3 random sequences of English digits are also recorded in each session. In one of the sessions, these sequences contain all digits and the remaining ones contain only 4 digits. Similar to the text-dependent case, three experimental setups with different number of speaker in the evaluation set are defined (corresponding to the rows in Table TABREF16). However, different strategy is used for defining trials: Depending on the enrollment condition (1- to 3-sess), trials are enrolled on utterances of all words from 1 to 3 different sessions (i.e. 3 to 9 utterances). Further, we consider two conditions for test utterances: seq test utterance with only 3 or 4 words and full test utterances with all words (i.e. same words as in enrollment but in different order). From all setups an all conditions, the 100-spk with 1-session enrolment (1-sess) is considered as the main evaluation condition for the text-prompted case. In Table TABREF16, the numbers of trials (sum for both seq and full conditions) for Persian 1-sess are shown for the different types of trials in the text-prompted SV. Again, we just create one IW trial for each IC trial. 
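The construction of such trial lists can be sketched as follows. This is a rough illustration only: the data layout (speaker, then session, then phrase), the phrase keys, and the imposter sampling are assumed for the example and do not reproduce the official trial-list generation.

```python
import random

def make_td_trials(utts, n_enroll_sessions=3, seed=0):
    """utts: dict speaker -> dict session_id -> dict phrase_id -> utterance path.
    Builds (enrollment, test, trial_type) triples for one pass-phrase.
    TC = Target-Correct, IC = Imposter-Correct, IW = Imposter-Wrong."""
    rng = random.Random(seed)
    trials, speakers = [], sorted(utts)
    for spk in speakers:
        sessions = sorted(utts[spk])
        if len(sessions) <= n_enroll_sessions:
            continue
        enroll = [utts[spk][s]["phrase_1"] for s in sessions[:n_enroll_sessions]]  # consecutive sessions
        test_session = sessions[n_enroll_sessions]           # test comes from yet another session
        trials.append((enroll, utts[spk][test_session]["phrase_1"], "TC"))
        imp = rng.choice([s for s in speakers if s != spk])  # an imposter speaker
        imp_utts = utts[imp][sorted(utts[imp])[0]]
        trials.append((enroll, imp_utts["phrase_1"], "IC"))  # correct phrase, wrong speaker
        trials.append((enroll, imp_utts["phrase_2"], "IW"))  # one wrong-phrase trial per IC trial
    return trials

toy = {spk: {s: {"phrase_1": f"{spk}_s{s}_p1.wav", "phrase_2": f"{spk}_s{s}_p2.wav"}
             for s in range(4)} for spk in ("spk_a", "spk_b")}
for enroll, test, kind in make_td_trials(toy):
    print(kind, len(enroll), test)
```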
DeepMine Database Parts ::: Part3 - Text-independent (TI) In this part, 8 Persian phrases that have already been transcribed on the phone level are displayed to the respondent. These phrases are chosen mostly from news and Persian Wikipedia. If the respondent is unable to read English, instead of 5 fixed phrases and 3 random digit strings, 8 other Persian phrases are also prompted to the respondent to have exactly 24 phrases in each recording session. This part can be useful at least for three potential applications. First, it can be used for text-independent speaker verification. The second application of this part (same as Part4 of RedDots) is text-prompted speaker verification using random text (instead of a random sequence of words). Finally, the third application is large vocabulary speech recognition in Persian (explained in the next sub-section). Based on the recording sessions, we created two experimental setups for speaker verification. In the first one, respondents with at least 17 recording sessions are included to the evaluation set, respondents with 16 sessions to the development and the rest of respondents to the background set (can be used as training data). In the second setup, respondents with at least 8 sessions are included to the evaluation set, respondents with 6 or 7 sessions to the development and the rest of respondents to the background set. Table TABREF18 shows numbers of speakers in each set of the database for text-independent SV case. For text-independent SV, we have considered 4 scenarios for enrollment and 4 scenarios for test. The speaker can be enrolled using utterances from 1, 2 or 3 consecutive sessions (1sess to 3sess) or using 8 utterances from 8 different sessions. The test speech can be one utterance (1utt) for short duration scenario or all utterances in one session (1sess) for long duration case. In addition, test speech can be selected from 5 English phrases for cross-language testing (enrollment using Persian utterances and test using English utterances). From all setups, 1sess-1utt and 1sess-1sess for 438-spk set are considered as the main evaluation setups for text-independent case. Table TABREF19 shows number of trials for these setups. For text-prompted SV with random text, the same setup as text-independent case together with corresponding utterance transcriptions can be used. DeepMine Database Parts ::: Part3 - Speech Recognition As explained before, Part3 of the DeepMine database can be used for Persian read speech recognition. There are only a few databases for speech recognition in Persian BIBREF6, BIBREF7. Hence, this part can at least partly address this problem and enable robust speech recognition applications in Persian. Additionally, it can be used for speaker recognition applications, such as training deep neural networks (DNNs) for extracting bottleneck features BIBREF8, or for collecting sufficient statistics using DNNs for i-vector training. We have randomly selected 50 speakers (25 for each gender) from the all speakers in the database which have net speech (without silence parts) between 25 minutes to 50 minutes as test speakers. For each speaker, the utterances in the first 5 sessions are included to (small) test-set and the other utterances of test speakers are considered as a large-test-set. The remaining utterances of the other speakers are included in the training set. The test-set, large-test-set and train-set contain 5.9, 28.5 and 450 hours of speech respectively. 
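A simplified sketch of this speaker-based split is given below. The input structures, the `split_asr_speakers` helper, and the omission of the 25-per-gender balancing are assumptions made for brevity; the released lists remain the authoritative protocol.

```python
import random

def split_asr_speakers(net_speech_minutes, utterances, n_test=50, seed=0):
    """net_speech_minutes: dict speaker -> minutes of net speech (silence removed).
    utterances: dict speaker -> list of (session_id, utterance).
    Test speakers have 25-50 minutes of net speech; their first 5 sessions form the
    small test set and the rest a large test set. Gender balancing is omitted here."""
    rng = random.Random(seed)
    candidates = [s for s, m in net_speech_minutes.items() if 25 <= m <= 50]
    test_speakers = set(rng.sample(candidates, min(n_test, len(candidates))))
    test_set, large_test_set, train_set = [], [], []
    for spk, utts in utterances.items():
        if spk not in test_speakers:
            train_set.extend(u for _, u in utts)
            continue
        first_sessions = sorted({sess for sess, _ in utts})[:5]
        for sess, u in utts:
            (test_set if sess in first_sessions else large_test_set).append(u)
    return test_set, large_test_set, train_set

minutes = {"spk_%02d" % i: 30 for i in range(60)}
utts = {s: [(sess, f"{s}_sess{sess}_utt.wav") for sess in range(8)] for s in minutes}
test, large_test, train = split_asr_speakers(minutes, utts)
print(len(test), len(large_test), len(train))
```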
There are about 8300 utterances in Part3 which contain only Persian full names (i.e. first and family name pairs). Each phrase consists of several full names and their phoneme transcriptions were extracted automatically using a trained Grapheme-to-Phoneme (G2P). These utterances can be used to evaluate the performance of a systems for name recognition, which is usually more difficult than the normal speech recognition because of the lack of a reliable language model. Experiments and Results Due to the space limitation, we present results only for the Persian text-dependent speaker verification and speech recognition. Experiments and Results ::: Speaker Verification Experiments We conducted an experiment on text-dependent speaker verification part of the database, using the i-vector based method proposed in BIBREF9, BIBREF10 and applied it to the Persian portion of Part1. In this experiment, 20-dimensional MFCC features along with first and second derivatives are extracted from 16 kHz signals using HTK BIBREF11 with 25 ms Hamming windowed frames with 15 ms overlap. The reported results are obtained with a 400-dimensional gender independent i-vector based system. The i-vectors are first length-normalized and are further normalized using phrase- and gender-dependent Regularized Within-Class Covariance Normalization (RWCCN) BIBREF10. Cosine distance is used to obtain speaker verification scores and phrase- and gender-dependent s-norm is used for normalizing the scores. For aligning speech frames to Gaussian components, monophone HMMs with 3 states and 8 Gaussian components in each state are used BIBREF10. We only model the phonemes which appear in the 5 Persian text-dependent phrases. For speaker verification experiments, the results were reported in terms of Equal Error Rate (EER) and Normalized Detection Cost Function as defined for NIST SRE08 ($\mathrm {NDCF_{0.01}^{min}}$) and NIST SRE10 ($\mathrm {NDCF_{0.001}^{min}}$). As shown in Table TABREF22, in text-dependent SV there are 4 types of trials: Target-Correct and Imposter-Correct refer to trials when the pass-phrase is uttered correctly by target and imposter speakers respectively, and in same manner, Target-Wrong and Imposter-Wrong refer to trials when speakers uttered a wrong pass-phrase. In this paper, only the correct trials (i.e. Target-Correct as target trials vs Imposter-Correct as non-target trials) are considered for evaluating systems as it has been proved that these are the most challenging trials in text-dependent SV BIBREF8, BIBREF12. Table TABREF23 shows the results of text-dependent experiments using Persian 100-spk and 3-sess setup. For filtering trials, the respondents' mobile brand and model were used in this experiment. In the table, the first two letters in the filter notation relate to the target trials and the second two letters (i.e. right side of the colon) relate for non-target trials. For target trials, the first Y means the enrolment and test utterances were recorded using a device with the same brand by the target speaker. The second Y letter means both recordings were done using exactly the same device model. Similarly, the first Y for non-target trials means that the devices of target and imposter speakers are from the same brand (i.e. manufacturer). The second Y means that, in addition to the same brand, both devices have the same model. So, the most difficult target trials are “NN”, where the speaker has used different a device at the test time. 
In the same manner, the most difficult non-target trials which should be rejected by the system are “YY” where the imposter speaker has used the same device model as the target speaker (note that it does not mean physically the same device because each speaker participated in the project using a personal mobile device). Hence, the similarity in the recording channel makes rejection more difficult. The first row in Table TABREF23 shows the results for all trials. By comparing the results with the best published results on RSR2015 and RedDots BIBREF10, BIBREF8, BIBREF12, it is clear that the DeepMine database is more challenging than both RSR2015 and RedDots databases. For RSR2015, the same i-vector/HMM-based method with both RWCCN and s-norm has achieved EER less than 0.3% for both genders (Table VI in BIBREF10). The conventional Relevance MAP adaptation with HMM alignment without applying any channel-compensation techniques (i.e. without applying RWCCN and s-norm due to the lack of suitable training data) on RedDots Part1 for the male has achieved EER around 1.5% (Table XI in BIBREF10). It is worth noting that EERs for DeepMine database without any channel-compensation techniques are 2.1 and 3.7% for males and females respectively. One interesting advantage of the DeepMine database compared to both RSR2015 and RedDots is having several target speakers with more than one mobile device. This is allows us to analyse the effects of channel compensation methods. The second row in Table TABREF23 corresponds to the most difficult trials where the target trials come from mobile devices with different models while imposter trials come from the same device models. It is clear that severe degradation was caused by this kind of channel effects (i.e. decreasing within-speaker similarities while increasing between-speaker similarities), especially for females. The results in the third row show the condition when target speakers at the test time use exactly the same device that was used for enrollment. Comparing this row with the results in the first row proves how much improvement can be achieved when exactly the same device is used by the target speaker. The results in the fourth row show the condition when imposter speakers also use the same device model at test time to fool the system. So, in this case, there is no device mismatch in all trials. By comparing the results with the third row, we can see how much degradation is caused if we only consider the non-target trials with the same device. The fifth row shows similar results when the imposter speakers use device of the same brand as the target speaker but with a different model. Surprisingly, in this case, the degradation is negligible and it means that mobiles from a specific brand (manufacturer) have different recording channel properties. The degraded female results in the sixth row as compared to the third row show the effect of using a different device model from the same brand for target trials. For males, the filters brings almost the same subsets of trials, which explains the very similar results in this case. Looking at the first two and the last row of Table TABREF23, one can notice the significantly worse performance obtained for the female trials as compared to males. Note that these three rows include target trials where the devices used for enrollment do not necessarily match the devices used for recording test utterances. 
On the other hand, in rows 3 to 6, which exclude such mismatched trials, the performance for males and females is comparable. This suggest that the degraded results for females are caused by some problematic trials with device mismatch. The exact reason for this degradation is so far unclear and needs a further investigation. In the last row of the table, the condition of the second row is relaxed: the target device should have different model possibly from the same brand and imposter device only needs to be from the same brand. In this case, as was expected, the performance degradation is smaller than in the second row. Experiments and Results ::: Speech Recognition Experiments In addition to speaker verification, we present several speech recognition experiments on Part3. The experiments were performed with the Kaldi toolkit BIBREF13. For training HMM-based MonoPhone model, only 20 thousands of shortest utterances are used and for other models the whole training data is used. The DNN based acoustic model is a time-delay DNN with low-rank factorized layers and skip connections without i-vector adaptation (a modified network from one of the best performing LibriSpeech recipes). The network is shown in Table TABREF25: there are 16 F-TDNN layers, with dimension 1536 and linear bottleneck layers of dimension 256. The acoustic model is trained for 10 epochs using lattice-free maximum mutual information (LF-MMI) with cross-entropy regularization BIBREF14. Re-scoring is done using a pruned trigram language model and the size of the dictionary is around 90,000 words. Table TABREF26 shows the results in terms of word error rate (WER) for different evaluated methods. As can be seen, the created database can be used to train well performing and practically usable Persian ASR models. Conclusions In this paper, we have described the final version of a large speech corpus, the DeepMine database. It has been collected using crowdsourcing and, according to the best of our knowledge, it is the largest public text-dependent and text-prompted speaker verification database in two languages: Persian and English. In addition, it is the largest text-independent speaker verification evaluation database, making it suitable to robustly evaluate state-of-the-art methods on different conditions. Alongside these appealing properties, it comes with phone-level transcription, making it suitable to train deep neural network models for Persian speech recognition. We provided several evaluation protocols for each part of the database. The protocols allow researchers to investigate the performance of different methods in various scenarios and study the effects of channels, duration and phrase text on the performance. We also provide two test sets for speech recognition: One normal test set with a few minutes of speech for each speaker and one large test set with more (30 minutes on average) speech that can be used for any speaker adaptation method. As baseline results, we reported the performance of an i-vector/HMM based method on Persian text-dependent part. Moreover, we conducted speech recognition experiments using conventional HMM-based methods, as well as state-of-the-art deep neural network based method using Kaldi toolkit with promising performance. Text-dependent results have shown that the DeepMine database is more challenging than RSR2015 and RedDots databases. Acknowledgments The data collection project was mainly supported by Sharif DeepMine company. 
The work on the paper was supported by Czech National Science Foundation (GACR) project "NEUREM3" No. 19-26934X and the National Programme of Sustainability (NPU II) project "IT4Innovations excellence in science - LQ1602".
[ "three experimental setups with different numbers of speakers in the evaluation set, three experimental setups with different number of speaker in the evaluation set are defined, first one, respondents with at least 17 recording sessions are included to the evaluation set, respondents with 16 sessions to the development and the rest of respondents to the background set, second setup, respondents with at least 8 sessions are included to the evaluation set, respondents with 6 or 7 sessions to the development and the rest of respondents to the background set" ]
3,880
qasper_e
en
null
b66b3c973729802d35f81fa66a7dbb1b846445856909a378
How does Gaussian-masked directional multi-head attention work?
Introduction Chinese word segmentation (CWS) is a task for Chinese natural language process to delimit word boundary. CWS is a basic and essential task for Chinese which is written without explicit word delimiters and different from alphabetical languages like English. BIBREF0 treats Chinese word segmentation (CWS) as a sequence labeling task with character position tags, which is followed by BIBREF1, BIBREF2, BIBREF3. Traditional CWS models depend on the design of features heavily which effects the performance of model. To minimize the effort in feature engineering, some CWS models BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11 are developed following neural network architecture for sequence labeling tasks BIBREF12. Neural CWS models perform strong ability of feature representation, employing unigram and bigram character embedding as input and approach good performance. The CWS task is often modeled as one graph model based on a scoring model that means it is composed of two parts, one part is an encoder which is used to generate the representation of characters from the input sequence, the other part is a decoder which performs segmentation according to the encoder scoring. Table TABREF1 summarizes typical CWS models according to their decoding ways for both traditional and neural models. Markov models such as BIBREF13 and BIBREF4 depend on the maximum entropy model or maximum entropy Markov model both with a Viterbi decoder. Besides, conditional random field (CRF) or Semi-CRF for sequence labeling has been used for both traditional and neural models though with different representations BIBREF2, BIBREF15, BIBREF10, BIBREF17, BIBREF18. Generally speaking, the major difference between traditional and neural network models is about the way to represent input sentences. Recent works about neural CWS which focus on benchmark dataset, namely SIGHAN Bakeoff BIBREF21, may be put into the following three categories roughly. Encoder. Practice in various natural language processing tasks has been shown that effective representation is essential to the performance improvement. Thus for better CWS, it is crucial to encode the input character, word or sentence into effective representation. Table TABREF2 summarizes regular feature sets for typical CWS models including ours as well. The building blocks that encoders use include recurrent neural network (RNN) and convolutional neural network (CNN), and long-term memory network (LSTM). Graph model. As CWS is a kind of structure learning task, the graph model determines which type of decoder should be adopted for segmentation, also it may limit the capability of defining feature, as shown in Table 2, not all graph models can support the word features. Thus recent work focused on finding more general or flexible graph model to make model learn the representation of segmentation more effective as BIBREF9, BIBREF11. External data and pre-trained embedding. Whereas both encoder and graph model are about exploring a way to get better performance only by improving the model strength itself. Using external resource such as pre-trained embeddings or language representation is an alternative for the same purpose BIBREF22, BIBREF23. SIGHAN Bakeoff defines two types of evaluation settings, closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation BIBREF21. 
In this work, we will focus on the closed test setting by finding a better model design for further CWS performance improvement. Shown in Table TABREF1, different decoders have particular decoding algorithms to match the respective CWS models. Markov models and CRF-based models often use Viterbi decoders with polynomial time complexity. In general graph model, search space may be too large for model to search. Thus it forces graph models to use an approximate beam search strategy. Beam search algorithm has a kind low-order polynomial time complexity. Especially, when beam width $b$=1, the beam search algorithm will reduce to greedy algorithm with a better time complexity $O(Mn)$ against the general beam search time complexity $O(Mnb^2)$, where $n$ is the number of units in one sentences, $M$ is a constant representing the model complexity. Greedy decoding algorithm can bring the fastest speed of decoding while it is not easy to guarantee the precision of decoding when the encoder is not strong enough. In this paper, we focus on more effective encoder design which is capable of offering fast and accurate Chinese word segmentation with only unigram feature and greedy decoding. Our proposed encoder will only consist of attention mechanisms as building blocks but nothing else. Motivated by the Transformer BIBREF24 and its strength of capturing long-range dependencies of input sentences, we use a self-attention network to generate the representation of input which makes the model encode sentences at once without feeding input iteratively. Considering the weakness of the Transformer to model relative and absolute position information directly BIBREF25 and the importance of localness information, position information and directional information for CWS, we further improve the architecture of standard multi-head self-attention of the Transformer with a directional Gaussian mask and get a variant called Gaussian-masked directional multi-head attention. Based on the newly improved attention mechanism, we expand the encoder of the Transformer to capture different directional information. With our powerful encoder, our model uses only simple unigram features to generate representation of sentences. For decoder which directly performs the segmentation, we use the bi-affinal attention scorer, which has been used in dependency parsing BIBREF26 and semantic role labeling BIBREF27, to implement greedy decoding on finding the boundaries of words. In our proposed model, greedy decoding ensures a fast segmentation while powerful encoder design ensures a good enough segmentation performance even working with greedy decoder together. Our model will be strictly evaluated on benchmark datasets from SIGHAN Bakeoff shared task on CWS in terms of closed test setting, and the experimental results show that our proposed model achieves new state-of-the-art. The technical contributions of this paper can be summarized as follows. We propose a CWS model with only attention structure. The encoder and decoder are both based on attention structure. With a powerful enough encoder, we for the first time show that unigram (character) featues can help yield strong performance instead of diverse $n$-gram (character and word) features in most of previous work. To capture the representation of localness information and directional information, we propose a variant of directional multi-head self-attention to further enhance the state-of-the-art Transformer encoder. 
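To make the greedy-decoding complexity argument above concrete, the following sketch labels each of the $n-1$ gaps exactly once and places a boundary wherever the score exceeds a threshold, which is the $O(Mn)$ behaviour discussed earlier. The `score_gap` function is only a placeholder for the bi-affine scorer introduced later.

```python
def greedy_segment(chars, score_gap):
    """chars: list of characters; score_gap(i) -> probability that a word
    boundary follows position i (a stand-in for the bi-affine scorer)."""
    words, start = [], 0
    for i in range(len(chars) - 1):          # n - 1 gaps, each scored exactly once
        if score_gap(i) > 0.5:               # greedy: no beam, no backtracking
            words.append("".join(chars[start:i + 1]))
            start = i + 1
    words.append("".join(chars[start:]))
    return words

# Toy usage with a hand-crafted scorer that cuts after positions 1 and 3.
print(greedy_segment(list("ABCDE"), lambda i: 1.0 if i in (1, 3) else 0.0))
# -> ['AB', 'CD', 'E']
```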
Models The CWS task is often modelled as one graph model based on an encoder-based scoring model. The model for CWS task is composed of an encoder to represent the input and a decoder based on the encoder to perform actual segmentation. Figure FIGREF6 is the architecture of our model. The model feeds sentence into encoder. Embedding captures the vector $e=(e_1,...,e_n)$ of the input character sequences of $c=(c_1,...,c_n)$. The encoder maps vector sequences of $ {e}=(e_1,..,e_n)$ to two sequences of vector which are $ {v^b}=(v_1^b,...,v_n^b)$ and ${v^f}=(v_1^f,...v_n^f)$ as the representation of sentences. With $v^b$ and $v^f$, the bi-affinal scorer calculates the probability of each segmentation gaps and predicts the word boundaries of input. Similar as the Transformer, the encoder is an attention network with stacked self-attention and point-wise, fully connected layers while our encoder includes three independent directional encoders. Models ::: Encoder Stacks In the Transformer, the encoder is composed of a stack of N identical layers and each layer has one multi-head self-attention layer and one position-wise fully connected feed-forward layer. One residual connection is around two sub-layers and followed by layer normalization BIBREF24. This architecture provides the Transformer a good ability to generate representation of sentence. With the variant of multi-head self-attention, we design a Gaussian-masked directional encoder to capture representation of different directions to improve the ability of capturing the localness information and position information for the importance of adjacent characters. One unidirectional encoder can capture information of one particular direction. For CWS tasks, one gap of characters, which is from a word boundary, can divide one sequence into two parts, one part in front of the gap and one part in the rear of it. The forward encoder and backward encoder are used to capture information of two directions which correspond to two parts divided by the gap. One central encoder is paralleled with forward and backward encoders to capture the information of entire sentences. The central encoder is a special directional encoder for forward and backward information of sentences. The central encoder can fuse the information and enable the encoder to capture the global information. The encoder outputs one forward information and one backward information of each positions. The representation of sentence generated by center encoder will be added to these information directly: where $v^{b}=(v^b_1,...,v^b_n)$ is the backward information, $v^{f}=(v^f_1,...,v^f_n)$ is the forward information, $r^{b}=(r^b_1,...,r^b_n)$ is the output of backward encoder, $r^{c}=(r^c_1,...,r^c_n)$ is the output of center encoder and $r^{f}=(r^f_1,...,r^f_n)$ is the output of forward encoder. Models ::: Gaussian-Masked Directional Multi-Head Attention Similar as scaled dot-product attention BIBREF24, Gaussian-masked directional attention can be described as a function to map queries and key-value pairs to the representation of input. Here queries, keys and values are all vectors. 
Standard scaled dot-product attention is calculated by dotting query $Q$ with all keys $K$, dividing each values by $\sqrt{d_k}$, where $\sqrt{d_k}$ is the dimension of keys, and apply a softmax function to generate the weights in the attention: Different from scaled dot-product attention, Gaussian-masked directional attention expects to pay attention to the adjacent characters of each positions and cast the localness relationship between characters as a fix Gaussian weight for attention. We assume that the Gaussian weight only relys on the distance between characters. Firstly we introduce the Gaussian weight matrix $G$ which presents the localness relationship between each two characters: where $g_{ij}$ is the Gaussian weight between character $i$ and $j$, $dis_{ij}$ is the distance between character $i$ and $j$, $\Phi (x)$ is the cumulative distribution function of Gaussian, $\sigma $ is the standard deviation of Gaussian function and it is a hyperparameter in our method. Equation (DISPLAY_FORM13) can ensure the Gaussian weight equals 1 when $dis_{ij}$ is 0. The larger distance between charactersis, the smaller the weight is, which makes one character can affect its adjacent characters more compared with other characters. To combine the Gaussian weight to the self-attention, we produce the Hadamard product of Gaussian weight matrix $G$ and the score matrix produced by $Q{K^{T}}$ where $AG$ is the Gaussian-masked attention. It ensures that the relationship between two characters with long distances is weaker than adjacent characters. The scaled dot-product attention models the relationship between two characters without regard to their distances in one sequence. For CWS task, the weight between adjacent characters should be more important while it is hard for self-attention to achieve the effect explicitly because the self-attention cannot get the order of sentences directly. The Gaussian-masked attention adjusts the weight between characters and their adjacent character to a larger value which stands for the effect of adjacent characters. For forward and backward encoder, the self-attention sublayer needs to use a triangular matrix mask to let the self-attention focus on different weights: where $pos_i$ is the position of character $c_i$. The triangular matrix for forward and backward encode are: $\left[ \begin{matrix} 1 & 0 & 0 & \cdots &0\\ 1 & 1 & 0 & \cdots &0\\ 1 & 1 & 1 & \cdots &0\\ \vdots &\vdots &\vdots &\ddots &\vdots \\ 1 & 1 & 1 & \cdots & 1\\ \end{matrix} \right]$ $\left[ \begin{matrix} 1 & 1 & 1 & \cdots &1 \\ 0 & 1 & 1 & \cdots &1 \\ 0 & 0& 1 & \cdots &1 \\ \vdots &\vdots &\vdots &\ddots &\vdots \\ 0 & 0 & 0 & \cdots & 1\\ \end{matrix}\right]$ Similar as BIBREF24, we use multi-head attention to capture information from different dimension positions as Figure FIGREF16 and get Gaussian-masked directional multi-head attention. With multi-head attention architecture, the representation of input can be captured by where $MH$ is the Gaussian-masked multi-head attention, ${W_i^q, W_i^k,W_i^v} \in \mathbb {R}^{d_k \times d_h}$ is the parameter matrices to generate heads, $d_k$ is the dimension of model and $d_h$ is the dimension of one head. Models ::: Bi-affinal Attention Scorer Regarding word boundaries as gaps between any adjacent words converts the character labeling task to the gap labeling task. Different from character labeling task, gap labeling task requires information of two adjacent characters. 
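Before the bi-affinal scorer is described in detail, the following single-head sketch illustrates the Gaussian-masked directional attention defined above. The exact form of the $\Phi$-based weight (here $g_{ij} = 2(1-\Phi (dis_{ij}/\sigma ))$, which gives weight 1 at distance 0) and the way the triangular masks are applied are plausible readings of the formulas, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def gaussian_masked_attention(Q, K, V, sigma=2.0, direction=None):
    """Single-head sketch: scores = (Q K^T / sqrt(d_k)) times G element-wise,
    with an optional forward/backward triangular mask, then softmax over keys."""
    n, d_k = Q.shape
    # Gaussian localness weights g_ij from the distance |i - j| (weight 1 at distance 0).
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    G = 2 * (1 - norm.cdf(dist / sigma))       # one plausible reading of the Phi-based weight
    scores = (Q @ K.T) / np.sqrt(d_k) * G      # Hadamard product with the Gaussian mask
    if direction == "forward":                 # attend only to current and previous positions
        scores = np.where(np.tril(np.ones((n, n))) > 0, scores, -1e9)
    elif direction == "backward":              # attend only to current and following positions
        scores = np.where(np.triu(np.ones((n, n))) > 0, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(5, 8))
print(gaussian_masked_attention(Q, K, V, direction="forward").shape)  # (5, 8)
```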
The relationship between adjacent characters can be represented as the type of gap. The characteristic of word boundaries makes bi-affine attention an appropriate scorer for CWS task. Bi-affinal attention scorer is the component that we use to label the gap. Bi-affinal attention is developed from bilinear attention which has been used in dependency parsing BIBREF26 and SRL BIBREF27. The distribution of labels in a labeling task is often uneven which makes the output layer often include a fixed bias term for the prior probability of different labels BIBREF27. Bi-affine attention uses bias terms to alleviate the burden of the fixed bias term and get the prior probability which makes it different from bilinear attention. The distribution of the gap is uneven that is similar as other labeling task which fits bi-affine. Bi-affinal attention scorer labels the target depending on information of independent unit and the joint information of two units. In bi-affinal attention, the score $s_{ij}$ of characters $c_i$ and $c_j$ $(i < j)$ is calculated by: where $v_i^f$ is the forward information of $c_i$ and $v_i^b$ is the backward information of $c_j$. In Equation (DISPLAY_FORM21), $W$, $U$ and $b$ are all parameters that can be updated in training. $W$ is a matrix with shape $(d_i \times N\times d_j)$ and $U$ is a $(N\times (d_i + d_j))$ matrix where $d_i$ is the dimension of vector $v_i^f$ and $N$ is the number of labels. In our model, the biaffine scorer uses the forward information of character in front of the gap and the backward information of the character behind the gap to distinguish the position of characters. Figure FIGREF22 is an example of labeling gap. The method of using biaffine scorer ensures that the boundaries of words can be determined by adjacent characters with different directional information. The score vector of the gap is formed by the probability of being a boundary of word. Further, the model generates all boundaries using activation function in a greedy decoding way. Experiments ::: Experimental Settings ::: Data We train and evaluate our model on datasets from SIGHAN Bakeoff 2005 BIBREF21 which has four datasets, PKU, MSR, AS and CITYU. Table TABREF23 shows the statistics of train data. We use F-score to evaluate CWS models. To train model with pre-trained embeddings in AS and CITYU, we use OpenCC to transfer data from traditional Chinese to simplified Chinese. Experiments ::: Experimental Settings ::: Pre-trained Embedding We only use unigram feature so we only trained character embeddings. Our pre-trained embedding are pre-trained on Chinese Wikipedia corpus by word2vec BIBREF29 toolkit. The corpus used for pre-trained embedding is all transferred to simplified Chinese and not segmented. On closed test, we use embeddings initialized randomly. Experiments ::: Experimental Settings ::: Hyperparameters For different datasets, we use two kinds of hyperparameters which are presented in Table TABREF24. We use hyperparameters in Table TABREF24 for small corpora (PKU and CITYU) and normal corpora (MSR and AS). We set the standard deviation of Gaussian function in Equation (DISPLAY_FORM13) to 2. Each training batch contains sentences with at most 4096 tokens. Experiments ::: Experimental Settings ::: Optimizer To train our model, we use the Adam BIBREF30 optimizer with $\beta _1=0.9$, $\beta _2=0.98$ and $\epsilon =10^{-9}$. 
The learning rate schedule is the same as BIBREF24: where $d$ is the dimension of embeddings, $step$ is the step number of training and $warmup_step$ is the step number of warmup. When the number of steps is smaller than the step of warmup, the learning rate increases linearly and then decreases. Experiments ::: Hardware and Implements We trained our models on a single CPU (Intel i7-5960X) with an nVidia 1080 Ti GPU. We implement our model in Python with Pytorch 1.0. Experiments ::: Results Tables TABREF25 and TABREF26 reports the performance of recent models and ours in terms of closed test setting. Without the assistance of unsupervised segmentation features userd in BIBREF20, our model outperforms all the other models in MSR and AS except BIBREF18 and get comparable performance in PKU and CITYU. Note that all the other models for this comparison adopt various $n$-gram features while only our model takes unigram ones. With unsupervised segmentation features introduced by BIBREF20, our model gets a higher result. Specially, the results in MSR and AS achieve new state-of-the-art and approaching previous state-of-the-art in CITYU and PKU. The unsupervised segmentation features are derived from the given training dataset, thus using them does not violate the rule of closed test of SIGHAN Bakeoff. Table TABREF36 compares our model and recent neural models in terms of open test setting in which any external resources, especially pre-trained embeddings or language models can be used. In MSR and AS, our model gets a comparable result while our results in CITYU and PKU are not remarkable. However, it is well known that it is always hard to compare models when using open test setting, especially with pre-trained embedding. Not all models may use the same method and data to pre-train. Though pre-trained embedding or language model can improve the performance, the performance improvement itself may be from multiple sources. It often that there is a success of pre-trained embedding to improve the performance, while it cannot prove that the model is better. Compared with other LSTM models, our model performs better in AS and MSR than in CITYU and PKU. Considering the scale of different corpora, we believe that the size of corpus affects our model and the larger size is, the better model performs. For small corpus, the model tends to be overfitting. Tables TABREF25 and TABREF26 also show the decoding time in different datasets. Our model finishes the segmentation with the least decoding time in all four datasets, thanks to the architecture of model which only takes attention mechanism as basic block. Related Work ::: Chinese Word Segmentation CWS is a task for Chinese natural language process to delimit word boundary. BIBREF0 for the first time formulize CWS as a sequence labeling task. BIBREF3 show that different character tag sets can make essential impact for CWS. BIBREF2 use CRFs as a model for CWS, achieving new state-of-the-art. Works of statistical CWS has built the basis for neural CWS. Neural word segmentation has been widely used to minimize the efforts in feature engineering which was important in statistical CWS. BIBREF4 introduce the neural model with sliding-window based sequence labeling. BIBREF6 propose a gated recursive neural network (GRNN) for CWS to incorporate complicated combination of contextual character and n-gram features. BIBREF7 use LSTM to learn long distance information. 
BIBREF9 propose a neural framework that eliminates context windows and utilize complete segmentation history. BIBREF33 explore a joint model that performs segmentation, POS-Tagging and chunking simultaneously. BIBREF34 propose a feature-enriched neural model for joint CWS and part-of-speech tagging. BIBREF35 present a joint model to enhance the segmentation of Chinese microtext by performing CWS and informal word detection simultaneously. BIBREF17 propose a character-based convolutional neural model to capture $n$-gram features automatically and an effective approach to incorporate word embeddings. BIBREF11 improve the model in BIBREF9 and propose a greedy neural word segmenter with balanced word and character embedding inputs. BIBREF23 propose a novel neural network model to incorporate unlabeled and partially-labeled data. BIBREF36 propose two methods that extend the Bi-LSTM to perform incorporating dictionaries into neural networks for CWS. BIBREF37 propose Switch-LSTMs to segment words and provided a more flexible solution for multi-criteria CWS which is easy to transfer the learned knowledge to new criteria. Related Work ::: Transformer Transformer BIBREF24 is an attention-based neural machine translation model. The Transformer is one kind of self-attention networks (SANs) which is proposed in BIBREF38. Encoder of the Transformer consists of one self-attention layer and a position-wise feed-forward layer. Decoder of the Transformer contains one self-attention layer, one encoder-decoder attention layer and one position-wise feed-forward layer. The Transformer uses residual connections around the sublayers and then followed by a layer normalization layer. Scaled dot-product attention is the key component in the Transformer. The input of attention contains queries, keys, and values of input sequences. The attention is generated using queries and keys like Equation (DISPLAY_FORM11). Structure of scaled dot-product attention allows the self-attention layer generate the representation of sentences at once and contain the information of the sentence which is different from RNN that process characters of sentences one by one. Standard self-attention is similar as Gaussian-masked direction attention while it does not have directional mask and gaussian mask. BIBREF24 also propose multi-head attention which is better to generate representation of sentence by dividing queries, keys and values to different heads and get information from different subspaces. Conclusion In this paper, we propose an attention mechanism only based Chinese word segmentation model. Our model uses self-attention from the Transformer encoder to take sequence input and bi-affine attention scorer to predict the label of gaps. To improve the ability of capturing the localness and directional information of self-attention based encoder, we propose a variant of self-attention called Gaussian-masked directional multi-head attention to replace the standard self-attention. We also extend the Transformer encoder to capture directional features. Our model uses only unigram features instead of multiple $n$-gram features in previous work. Our model is evaluated on standard benchmark dataset, SIGHAN Bakeoff 2005, which shows not only our model performs segmentation faster than any previous models but also gives new higher or comparable segmentation performance against previous state-of-the-art models.
[ "pays attentions to adjacent characters and casts a localness relationship between the characters as a fixed Gaussian weight assuming the weight relies on the distance between characters", "Gaussian-masked directional attention can be described as a function to map queries and key-value pairs to the representation of input, Gaussian-masked directional attention expects to pay attention to the adjacent characters of each positions and cast the localness relationship between characters as a fix Gaussian weight for attention, Gaussian weight only relys on the distance between characters" ]
3,612
qasper_e
en
null
4528fad5e57b98d3022a13b580bd4daa018c7129c12d50ea
How much training data from the non-English language is used by the system?
Introduction Pre-trained models BIBREF0, BIBREF1 have received much of attention recently thanks to their impressive results in many down stream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-short cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing. Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To our best knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT) that supports 104 languages, (2) cross-lingual language model BIBREF6 that supports 100 languages, and (3) Language Agnostic SEntence Representations BIBREF7 that supports 93 languages. Among the three models, LASER is based on neural machine translation approach and strictly requires parallel data to train. Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are: We propose a fast adaptation method for obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU. We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer competitive performance or even better that mBERT. We illustrate that our bilingual LMs can serve as an excellent feature extractor in supervised dependency parsing task. Bilingual Pre-trained LMs We first provide some background of pre-trained language models. Let $_e$ be English word-embeddings and $\Psi ()$ be the Transformer BIBREF10 encoder with parameters $$. Let $_{w_i}$ denote the embedding of word $w_i$ (i.e., $_{w_i} = _e[w_1]$). We omit positional embeddings and bias for clarity. A pre-trained LM typically performs the following computations: (i) transform a sequence of input tokens to contextualized representations $[_{w_1},\dots ,_{w_n}] = \Psi (_{w_1}, \dots , _{w_n}; )$, and (ii) predict an output word $y_i$ at $i^{\text{th}}$ position $p(y_i | _{w_i}) \propto \exp (_{w_i}^\top _{y_i})$. 
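A minimal tensor-level sketch of these two computations is shown below, with randomly initialized modules standing in for a trained encoder and trained embeddings; the shapes and hyperparameters are arbitrary.

```python
import torch

vocab_size, d = 1000, 16
emb = torch.nn.Embedding(vocab_size, d)          # English word embeddings E_e
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2,
)                                                # stands in for the trained encoder Psi

tokens = torch.randint(vocab_size, (1, 7))       # a sequence w_1 ... w_n
hidden = encoder(emb(tokens))                    # contextualized representations h_{w_i}
logits = hidden @ emb.weight.T                   # score every output word: h^T e_y
probs = torch.softmax(logits, dim=-1)            # p(y_i | h_{w_i})
print(probs.shape)                               # (1, 7, vocab_size)
```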
Autoencoding LM BIBREF0 corrupts some input tokens $w_i$ by replacing them with a special token [MASK]. It then predicts the original tokens $y_i = w_i$ from the corrupted tokens. Autoregressive LM BIBREF3 predicts the next token ($y_i = w_{i+1}$) given all the previous tokens. The recently proposed XLNet model BIBREF5 is an autoregressive LM that factorizes the output over all possible permutations, which shows empirical performance improvement over GPT-2 due to the ability to capture bidirectional context. Here we assume that the encoder performs the necessary masking with respect to each training objective. Given an English pre-trained LM, we wish to learn a bilingual LM for English and a given target language $f$ under a limited computational resource budget. To quickly build a bilingual LM, we directly adapt the English pre-trained model to the target model. Our approach consists of three steps. First, we initialize target language word-embeddings $E_f$ in the English embedding space such that embeddings of a target word and its English equivalents are close together (§SECREF8). Next, we create a target LM from the target embeddings and the English encoder $\Psi(\theta)$. We then fine-tune target embeddings while keeping $\Psi(\theta)$ fixed (§SECREF14). Finally, we construct a bilingual LM of $E_e$, $E_f$, and $\Psi(\theta)$ and fine-tune all the parameters (§SECREF15). Figure FIGREF7 illustrates the last two steps in our approach. Bilingual Pre-trained LMs ::: Initializing Target Embeddings Our approach to learning the initial foreign word embeddings $E_f \in \mathbb{R}^{|V_f| \times d}$ is based on the idea of mapping the trained English word embeddings $E_e \in \mathbb{R}^{|V_e| \times d}$ to $E_f$ such that if a foreign word and an English word are similar in meaning then their embeddings are similar. Borrowing the idea of universal lexical sharing from BIBREF11, we represent each foreign word embedding $E_f[i] \in \mathbb{R}^d$ as a linear combination of English word embeddings $E_e[j] \in \mathbb{R}^d$, i.e., $E_f[i] = \sum_j \alpha_{ij} E_e[j]$, where $\alpha_i \in \mathbb{R}^{|V_e|}$ is a sparse vector and $\sum_{j}^{|V_e|} \alpha_{ij} = 1$. In this step of initializing foreign embeddings, having a good estimate of $\alpha$ could speed up convergence when tuning the foreign model and enable zero-shot transfer (§SECREF5). In the following, we discuss how to estimate $\alpha_i \;\forall i\in \lbrace 1,2, \dots , |V_f|\rbrace$ under two scenarios: (i) we have English-foreign parallel data, and (ii) we only rely on English and foreign monolingual data. Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Parallel Corpus Given an English-foreign parallel corpus, we can estimate the word translation probability $p(e\,|\,f)$ for any (English-foreign) pair $(e, f)$ using popular word-alignment BIBREF12 toolkits such as fast-align BIBREF13. We then assign $\alpha_{ij} = p(e_j \mid f_i)$. Since $\alpha_i$ is estimated from word alignments, it is a sparse vector. Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Monolingual Corpus For low resource languages, parallel data may not be available. In this case, we rely only on monolingual data (e.g., Wikipedias). We estimate word translation probabilities from the word embeddings of the two languages. Word vectors of these languages can be learned using fastText BIBREF14 and then aligned into a shared space with English BIBREF15, BIBREF16. Unlike learning contextualized representations, learning word vectors is fast and computationally cheap. 
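Before detailing the monolingual case, the parallel-corpus initialization just described can be sketched as follows; the input format for the translation probabilities is hypothetical (not the authors' released code), and uncovered foreign words fall back to the random Gaussian initialization mentioned later in the paper:

import numpy as np

def init_foreign_embeddings(E_e, trans_probs, foreign_vocab, en_index):
    # E_e: English embedding matrix of shape (|V_e|, d) from the pre-trained LM.
    # trans_probs: dict mapping a foreign word f to a list of (english_word, p(e|f))
    #              pairs, e.g. aggregated from fast-align links (assumed format).
    # foreign_vocab: list of foreign words defining the rows of the new matrix.
    # en_index: dict mapping an English word to its row index in E_e.
    d = E_e.shape[1]
    std = 1.0 / d  # fallback matches the N(0, 1/d^2) initialization for uncovered words
    E_f = np.random.normal(0.0, std, size=(len(foreign_vocab), d))
    for i, f in enumerate(foreign_vocab):
        pairs = [(en_index[e], p) for e, p in trans_probs.get(f, []) if e in en_index]
        if not pairs:
            continue                                 # keep the random fallback
        idx, alpha = zip(*pairs)
        alpha = np.asarray(alpha, dtype=np.float64)
        alpha /= alpha.sum()                         # enforce sum_j alpha_ij = 1
        E_f[i] = alpha @ E_e[list(idx)]              # sparse combination of English rows
    return E_f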
Given the aligned vectors $\bar{E}_f$ of the foreign language and $\bar{E}_e$ of English, we calculate the word translation matrix $A \in \mathbb{R}^{|V_f|\times |V_e|}$ as $A = \mathrm{sparsemax}(\bar{E}_f \bar{E}_e^\top)$. Here, we use $\mathrm{sparsemax}$ BIBREF17 instead of softmax. Sparsemax is a sparse version of softmax; it puts zero probability on most of the words in the English vocabulary except for the few English words that are similar to a given foreign word. This property is desirable in our approach since it leads to a better initialization of the foreign embeddings. Bilingual Pre-trained LMs ::: Fine-tuning Target Embeddings After initializing the foreign word-embeddings, we replace the English word-embeddings in the English pre-trained LM with the foreign word-embeddings to obtain the foreign LM. We then fine-tune only the foreign word-embeddings on monolingual data. The training objective is the same as the training objective of the English pre-trained LM (i.e., masked LM for BERT). Since the trained encoder $\Psi(\theta)$ is good at capturing association, the purpose of this step is to further optimize the target embeddings such that the target LM can utilize the trained encoder for the association task. For example, if the words Albert Camus are present in a French input sequence, the self-attention in the encoder is more likely to attend to the words absurde and existentialisme once their embeddings are tuned. Bilingual Pre-trained LMs ::: Fine-tuning Bilingual LM We create a bilingual LM by plugging the foreign-language-specific parameters into the pre-trained English LM (Figure FIGREF7). The new model has two separate embedding layers and output layers, one for English and one for the foreign language. The encoder in between is shared. We then fine-tune this model using English and foreign monolingual data. Here, we keep tuning the model on English to ensure that it does not forget what it has learned in English and that we can use the resulting model for zero-shot transfer (§SECREF3). In this step, the encoder parameters are also updated so that it can learn syntactic aspects (i.e., word order, morphological agreement) of the target language. Zero-shot Experiments We build our bilingual LMs, named RAMEN, starting from the BERT$_{\textsc {base}}$, BERT$_{\textsc {large}}$, RoBERTa$_{\textsc {base}}$, and RoBERTa$_{\textsc {large}}$ pre-trained models. Using BERT$_{\textsc {base}}$ allows us to compare the results with the mBERT model. Using BERT$_{\textsc {large}}$ and RoBERTa allows us to investigate whether the performance of the target LM correlates with the performance of the source LM. We evaluate our models on two cross-lingual zero-shot tasks: (1) Cross-lingual Natural Language Inference (XNLI) and (2) dependency parsing. Zero-shot Experiments ::: Data We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to the Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic families, respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French, Russian, and Arabic can be regarded as high resource languages, whereas Hindi has far less data and can be considered low resource. For experiments that use parallel data to initialize foreign specific parameters, we use the same datasets as in the work of BIBREF6. 
Specifically, we use the United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from the IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018. For experiments that use only monolingual data to initialize foreign parameters, instead of training word-vectors from scratch, we use the pre-trained word vectors from fastText BIBREF14 to estimate word translation probabilities (Eq. DISPLAY_FORM13). We align these vectors into a common space using orthogonal Procrustes BIBREF20, BIBREF15, BIBREF16. We only use identical words between the two languages as the supervised signal. We use WikiExtractor to extract raw sentences from Wikipedias as monolingual data for fine-tuning target embeddings and bilingual LMs (§SECREF15). We do not lowercase or remove accents in our data preprocessing pipeline. We tokenize English using the provided tokenizer from the pre-trained models. For the target languages, we use fastBPE to learn 30,000 BPE codes and 50,000 codes when transferring from BERT and RoBERTa respectively. We truncate the BPE vocabulary of foreign languages to match the size of the English vocabulary in the source models. Precisely, the size of the foreign vocabulary is set to 32,000 when transferring from BERT and 50,000 when transferring from RoBERTa. We use the XNLI dataset BIBREF9 for the classification task and Universal Dependencies v2.4 BIBREF21 for the parsing task. Since a language might have more than one treebank in Universal Dependencies, we use the following treebanks: en_ewt (English), fr_gsd (French), ru_syntagrus (Russian), ar_padt (Arabic), vi_vtb (Vietnamese), hi_hdtb (Hindi), and zh_gsd (Chinese). Zero-shot Experiments ::: Data ::: Remark on BPE BIBREF22 show that sharing subwords between languages improves alignments between embedding spaces. BIBREF2 observe a strong correlation between the percentage of overlapping subwords and mBERT's performance for cross-lingual zero-shot transfer. However, in our current approach, subwords between source and target are not shared. A subword that is in both the English and foreign vocabularies has two different embeddings. Zero-shot Experiments ::: Estimating translation probabilities Since pre-trained models operate at the subword level, we need to estimate subword translation probabilities. Therefore, we subsample 2M sentence pairs from each parallel corpus and tokenize the data into subwords before running fast-align BIBREF13. Estimating subword translation probabilities from aligned word vectors requires an additional processing step since the provided vectors from fastText are not at the subword level. We use the following approximation to obtain subword vectors: the vector $\bar{e}_s$ of subword $s$ is the weighted average of all the aligned word vectors $\bar{e}_{w_j}$ that have $s$ as a subword, $\bar{e}_s = \frac{1}{n_s} \sum_{w_j:\, s \in w_j} p(w_j)\, \bar{e}_{w_j}$, where $p(w_j)$ is the unigram probability of word $w_j$ and $n_s = \sum _{w_j:\, s\in w_j} p(w_j)$. We take the top 50,000 words in each set of aligned word vectors to compute the subword vectors. In both cases, not all the words in the foreign vocabulary can be initialized from the English word-embeddings. Those words are initialized randomly from a Gaussian $\mathcal{N}(0, \frac{1}{d^2})$. Zero-shot Experiments ::: Hyper-parameters In all the experiments, we tune RAMEN$_{\textsc {base}}$ for 175,000 updates and RAMEN$_{\textsc {large}}$ for 275,000 updates, where the first 25,000 updates are for the language specific parameters. The sequence length is set to 256. 
The mini-batch sizes are 64 and 24 when tuning language-specific parameters for RAMEN$_{\textsc {base}}$ and RAMEN$_{\textsc {large}}$, respectively. For tuning the bilingual LMs, we use a mini-batch size of 64 for RAMEN$_{\textsc {base}}$ and 24 for RAMEN$_{\textsc {large}}$, where half of each batch consists of English sequences and the other half of foreign sequences. This strategy of balancing mini-batches has been used in multilingual neural machine translation BIBREF23, BIBREF24. We optimize RAMEN$_{\textsc {base}}$ using the Lookahead optimizer BIBREF25 wrapped around Adam with a learning rate of $10^{-4}$, $k=5$ fast weight updates, and interpolation parameter $\alpha =0.5$. We choose the Lookahead optimizer because it has been shown to be robust to the initial parameters of the base optimizer (Adam). For the Adam optimizer, we linearly increase the learning rate from $10^{-7}$ to $10^{-4}$ in the first 4,000 updates and then follow an inverse square root decay. All RAMEN$_{\textsc {large}}$ models are optimized with Adam due to memory limits. When fine-tuning RAMEN on XNLI and UD, we use a mini-batch size of 32 and an Adam learning rate of $10^{-5}$. The number of epochs is set to 4 and 50 for the XNLI and UD tasks, respectively. All experiments are carried out on a single Tesla V100 16GB GPU. Each RAMEN$_{\textsc {base}}$ model is trained within a day and each RAMEN$_{\textsc {large}}$ is trained within two days. Results In this section, we present the results of our models for the two zero-shot cross-lingual transfer tasks: XNLI and dependency parsing. Results ::: Cross-lingual Natural Language Inference Table TABREF32 shows the XNLI test accuracy. For reference, we also include the scores from previous work, notably the state-of-the-art system XLM BIBREF6. Before discussing the results, we spell out that the fairest comparison in this experiment is the comparison between mBERT and RAMEN$_{\textsc {base}}$+BERT trained with monolingual data only. We first discuss the transfer results from BERT. Initialized from fastText vectors, RAMEN$_{\textsc {base}}$ slightly outperforms mBERT by 1.9 points on average and widens the gap to 3.3 points on Arabic. RAMEN$_{\textsc {base}}$ gains an extra 0.8 points on average when initialized from parallel data. With triple the number of parameters, RAMEN$_{\textsc {large}}$ offers an additional boost in terms of accuracy, and initialization with parallel data consistently improves the performance. It has been shown that BERT$_{\textsc {large}}$ significantly outperforms BERT$_{\textsc {base}}$ on 11 English NLP tasks BIBREF0; the strength of BERT$_{\textsc {large}}$ also shows up when it is adapted to foreign languages. Transferring from RoBERTa leads to better zero-shot accuracies. With the same initialization condition, RAMEN$_{\textsc {base}}$+RoBERTa outperforms RAMEN$_{\textsc {base}}$+BERT on average by 2.9 and 2.3 points when initializing from monolingual and parallel data respectively. This result shows that, with a similar number of parameters, our approach benefits from a better English pre-trained model. When transferring from RoBERTa$_{\textsc {large}}$, we obtain state-of-the-art results for five languages. Currently, RAMEN only uses parallel data to initialize foreign embeddings. RAMEN could also exploit parallel data through the translation objective proposed in XLM. We believe that utilizing parallel data during the fine-tuning of RAMEN would bring additional benefits for zero-shot tasks. We leave this exploration to future work. 
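Returning briefly to the optimization setup described at the start of this section, the warmup-plus-inverse-square-root learning-rate curve for the Adam base optimizer could be realized as follows; this is a sketch, and the exact decay constant after warmup is an assumption, not taken from the paper:

def learning_rate(step, peak_lr=1e-4, init_lr=1e-7, warmup=4000):
    # Linear warmup from 1e-7 to 1e-4 over the first 4,000 updates, then inverse
    # square-root decay; the constant below makes the curve continuous at the end
    # of warmup (assumed choice).
    if step < warmup:
        return init_lr + (peak_lr - init_lr) * step / warmup
    return peak_lr * (warmup / step) ** 0.5

# With PyTorch this can be plugged in as a LambdaLR multiplier of the base lr:
# scheduler = torch.optim.lr_scheduler.LambdaLR(
#     optimizer, lambda step: learning_rate(step) / 1e-4)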
In summary, starting from BERT$_{\textsc {base}}$, our approach obtains bilingual LMs that are competitive with mBERT for zero-shot XNLI. Our approach also shows accuracy gains when adapting from a better pre-trained model. Results ::: Universal Dependency Parsing We build a graph-based dependency parser BIBREF27 on top of RAMEN. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing. We first look at the fairest comparison, between mBERT and monolingually initialized RAMEN$_{\textsc {base}}$+BERT. The latter outperforms the former on five languages, Arabic being the exception. We observe the largest gain of +5.2 LAS for French. Chinese enjoys +3.1 LAS from our approach. With a similar architecture (12 or 24 layers) and initialization (using monolingual or parallel data), RAMEN+RoBERTa performs better than RAMEN+BERT for most of the languages. Arabic and Hindi benefit the most from bigger models. For the other four languages, RAMEN$_{\textsc {large}}$ renders a modest improvement over RAMEN$_{\textsc {base}}$. Analysis ::: Impact of initialization Initializing foreign embeddings is the backbone of our approach. A good initialization leads to better zero-shot transfer results and enables fast adaptation. To verify the importance of a good initialization, we train a RAMEN$_{\textsc {base}}$+RoBERTa whose foreign word-embeddings are initialized randomly from $\mathcal{N}(0, \frac{1}{d^2})$. For a fair comparison, we use the same hyper-parameters as in §SECREF27. Table TABREF36 shows the XNLI and UD parsing results under random initialization. In comparison to the initialization using aligned fastText vectors, random initialization decreases the zero-shot performance of RAMEN$_{\textsc {base}}$ by 15.9% for XNLI and 27.8 points for UD parsing on average. We also see that zero-shot parsing of SOV languages (Arabic and Hindi) suffers from random initialization. Analysis ::: Are contextual representations from RAMEN also good for supervised parsing? All the RAMEN models are built from English and tuned on English for zero-shot cross-lingual tasks. It is reasonable to expect RAMEN models to do well on those tasks, as we have shown in our experiments. But are they also good feature extractors for supervised tasks? We offer a partial answer to this question by evaluating our model for supervised dependency parsing on the UD datasets. We used the train/dev/test splits provided in UD to train and evaluate our RAMEN-based parser. Table TABREF38 summarizes the results (LAS) of our supervised parser. For a fair comparison, we choose mBERT as the baseline and all the RAMEN models are initialized from aligned fastText vectors. With the same architecture of 12 Transformer layers, RAMEN$_{\textsc {base}}$+BERT performs competitively with mBERT and outshines it by +1.2 points for Vietnamese. The best LAS results are obtained by RAMEN$_{\textsc {large}}$+RoBERTa with 24 Transformer layers. Overall, our results indicate the potential of using contextual representations from RAMEN for supervised tasks. Analysis ::: How does linguistic knowledge transfer happen through each training stage? 
We evaluate the performance of RAMEN+RoBERTa$_{\textsc {base}}$ (initialized from monolingual data) at each training stage: initialization of word embeddings (0K updates), fine-tuning of the target embeddings (25K), and fine-tuning of the model on both English and the target language (evaluated every 25K updates thereafter). The results are presented in Figure FIGREF40. Without fine-tuning, the average accuracy on XNLI is 39.7% for a three-way classification task, and the average LAS score is 3.6 for dependency parsing. We see the biggest leap in performance after 50K updates. While the semantic similarity task profits significantly from the 25K updates of the target embeddings, the syntactic task benefits from further fine-tuning of the encoder. This is expected since the target languages might exhibit different syntactic structures than English, and fine-tuning the encoder helps to capture language-specific structures. We observe a substantial gain of 19-30 LAS for all languages except French after 50K updates. Language similarity has more impact on transferring syntax than semantics. Without tuning the English encoder, French enjoys 50.3 LAS thanks to being closely related to English, whereas Arabic and Hindi, SOV languages, modestly reach 4.2 and 6.4 points using the SVO encoder. Although Chinese has SVO order, it is often seen as head-final while English is strongly head-initial. Perhaps this explains the poor performance for Chinese. Limitations While we have successfully adapted autoencoding pre-trained LMs from English to other languages, the question of whether our approach can also be applied to autoregressive LMs such as XLNet still remains. We leave this investigation to future work. Conclusions In this work, we have presented a simple and effective approach for rapidly building a bilingual LM under a limited computational budget. Using BERT as the starting point, we demonstrate that our approach produces models that perform better than mBERT on two cross-lingual zero-shot tasks: sentence classification and dependency parsing. We find that the performance of our bilingual LM, RAMEN, correlates with the performance of the original pre-trained English models. We also find that RAMEN is a powerful feature extractor in supervised dependency parsing. Finally, we hope that our work sparks interest in developing fast and effective methods for transferring pre-trained English models to other languages.
[ "No data. Pretrained model is used." ]
3,409
qasper_e
en
null
4ec441a68f88d757c758ad09d9c710c7af94895bb544f3b6
In what cases is attention different from alignment?
Introduction Neural machine translation (NMT) has gained a lot of attention recently due to its substantial improvements in machine translation quality achieving state-of-the-art performance for several languages BIBREF0 , BIBREF1 , BIBREF2 . The core architecture of neural machine translation models is based on the general encoder-decoder approach BIBREF3 . Neural machine translation is an end-to-end approach that learns to encode source sentences into distributed representations and decode these representations into sentences in the target language. Among the different neural MT models, attentional NMT BIBREF4 , BIBREF5 has become popular due to its capability to use the most relevant parts of the source sentence at each translation step. This capability also makes the attentional model superior in translating longer sentences BIBREF4 , BIBREF5 . Figure FIGREF1 shows an example of how attention uses the most relevant source words to generate a target word at each step of the translation. In this paper we focus on studying the relevance of the attended parts, especially cases where attention is `smeared out' over multiple source words where their relevance is not entirely obvious, see, e.g., “would" and “like" in Figure FIGREF1 . Here, we ask whether these are due to errors of the attention mechanism or are a desired behavior of the model. Since the introduction of attention models in neural machine translation BIBREF4 various modifications have been proposed BIBREF5 , BIBREF6 , BIBREF7 . However, to the best of our knowledge there is no study that provides an analysis of what kind of phenomena is being captured by attention. There are some works that have looked to attention as being similar to traditional word alignment BIBREF8 , BIBREF6 , BIBREF7 , BIBREF9 . Some of these approaches also experimented with training the attention model using traditional alignments BIBREF8 , BIBREF7 , BIBREF9 . liu-EtAl:2016:COLING have shown that attention could be seen as a reordering model as well as an alignment model. In this paper, we focus on investigating the differences between attention and alignment and what is being captured by the attention mechanism in general. The questions that we are aiming to answer include: Is the attention model only capable of modelling alignment? And how similar is attention to alignment in different syntactic phenomena? Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. However, it captures other information rather than only the translational equivalent in the case of verbs. This paper makes the following contributions: 1) We provide a detailed comparison of attention in NMT and word alignment. 2) We show that while different attention mechanisms can lead to different degrees of compliance with respect to word alignments, global compliance is not always helpful for word prediction. 3) We show that attention follows different patterns depending on the type of the word being generated. 4) We demonstrate that attention does not always comply with alignment. We provide evidence showing that the difference between attention and alignment is due to attention model capability to attend the context words influencing the current word translation. 
Related Work liu-EtAl:2016:COLING investigate how training the attention model in a supervised manner can benefit machine translation quality. To this end they use traditional alignments obtained by running automatic alignment tools (GIZA++ BIBREF10 and fast_align BIBREF11 ) on the training data and feed it as ground truth to the attention network. They report some improvements in translation quality arguing that the attention model has learned to better align source and target words. The approach of training attention using traditional alignments has also been proposed by others BIBREF9 , BIBREF8 . chen2016guided show that guided attention with traditional alignment helps in the domain of e-commerce data which includes lots of out of vocabulary (OOV) product names and placeholders, but not much in the other domains. alkhouli-EtAl:2016:WMT have separated the alignment model and translation model, reasoning that this avoids propagation of errors from one model to the other as well as providing more flexibility in the model types and training of the models. They use a feed-forward neural network as their alignment model that learns to model jumps in the source side using HMM/IBM alignments obtained by using GIZA++. shi-padhi-knight:2016:EMNLP2016 show that various kinds of syntactic information are being learned and encoded in the output hidden states of the encoder. The neural system for their experimental analysis is not an attentional model and they argue that attention does not have any impact for learning syntactic information. However, performing the same analysis for morphological information, belinkov2017neural show that attention has also some effect on the information that the encoder of neural machine translation system encodes in its output hidden states. As part of their analysis they show that a neural machine translation system that has an attention model can learn the POS tags of the source side more efficiently than a system without attention. Recently, koehn2017six carried out a brief analysis of how much attention and alignment match in different languages by measuring the probability mass that attention gives to alignments obtained from an automatic alignment tool. They also report differences based on the most attended words. The mixed results reported by chen2016guided, alkhouli-EtAl:2016:WMT, liu-EtAl:2016:COLING on optimizing attention with respect to alignments motivates a more thorough analysis of attention models in NMT. Attention Models This section provides a short background on attention and discusses two most popular attention models which are also used in this paper. The first model is a non-recurrent attention model which is equivalent to the “global attention" method proposed by DBLPjournalscorrLuongPM15. The second attention model that we use in our investigation is an input-feeding model similar to the attention model first proposed by bahdanau-EtAl:2015:ICLR and turned to a more general one and called input-feeding by DBLPjournalscorrLuongPM15. Below we describe the details of both models. Both non-recurrent and input-feeding models compute a context vector INLINEFORM0 at each time step. Subsequently, they concatenate the context vector to the hidden state of decoder and pass it through a non-linearity before it is fed into the softmax output layer of the translation network. DISPLAYFORM0 The difference of the two models lays in the way they compute the context vector. 
In the non-recurrent model, the hidden state of the decoder is compared to each hidden state of the encoder. Often, this comparison is realized as the dot product of vectors. Then the comparison result is fed to a softmax layer to compute the attention weight. DISPLAYFORM0 DISPLAYFORM1 Here INLINEFORM0 is the hidden state of the decoder at time INLINEFORM1 , INLINEFORM2 is INLINEFORM3 th hidden state of the encoder and INLINEFORM4 is the length of the source sentence. Then the computed alignment weights are used to compute a weighted sum over the encoder hidden states which results in the context vector mentioned above: DISPLAYFORM0 The input-feeding model changes the context vector computation in a way that at each step INLINEFORM0 the context vector is aware of the previously computed context INLINEFORM1 . To this end, the input-feeding model feeds back its own INLINEFORM2 to the network and uses the resulting hidden state instead of the context-independent INLINEFORM3 , to compare to the hidden states of the encoder. This is defined in the following equations: DISPLAYFORM0 DISPLAYFORM1 Here, INLINEFORM0 is the function that the stacked LSTM applies to the input, INLINEFORM1 is the last generated target word, and INLINEFORM2 is the output of previous time step of the input-feeding network itself, meaning the output of Equation EQREF2 in the case that context vector has been computed using INLINEFORM3 from Equation EQREF7 . Comparing Attention with Alignment As mentioned above, it is a commonly held assumption that attention corresponds to word alignments. To verify this, we investigate whether higher consistency between attention and alignment leads to better translations. Measuring Attention-Alignment Accuracy In order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function. For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments. The statistics of the data are given in Table TABREF8 . We convert the hard alignments to soft alignments using Equation EQREF10 . For unaligned words, we first assume that they have been aligned to all the words in the source side and then do the conversion. DISPLAYFORM0 Here INLINEFORM0 is the set of source words aligned to target word INLINEFORM1 and INLINEFORM2 is the number of source words in the set. After conversion of the hard alignments to soft ones, we compute the attention loss as follows: DISPLAYFORM0 Here INLINEFORM0 is the source sentence and INLINEFORM1 is the weight of the alignment link between source word INLINEFORM2 and the target word (see Equation EQREF10 ). INLINEFORM3 is the attention weight INLINEFORM4 (see Equation EQREF4 ) of the source word INLINEFORM5 , when generating the target word INLINEFORM6 . In our analysis, we also look into the relation between translation quality and the quality of the attention with respect to the alignments. For measuring the quality of attention, we use the attention loss defined in Equation EQREF11 . As a measure of translation quality, we choose the loss between the output of our NMT system and the reference translation at each translation step, which we call word prediction loss. The word prediction loss for word INLINEFORM0 is logarithm of the probability given in Equation EQREF12 . 
DISPLAYFORM0 Here INLINEFORM0 is the source sentence, INLINEFORM1 is target word at time step INLINEFORM2 , INLINEFORM3 is the target history given by the reference translation and INLINEFORM4 is given by Equation EQREF2 for either non-recurrent or input-feeding attention models. Spearman's rank correlation is used to compute the correlation between attention loss and word prediction loss: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the ranks of the attention losses and word prediction losses, respectively, INLINEFORM2 is the covariance between two input variables, and INLINEFORM3 and INLINEFORM4 are the standard deviations of INLINEFORM5 and INLINEFORM6 . If there is a close relationship between word prediction quality and consistency of attention versus alignment, then there should be high correlation between word prediction loss and attention loss. Figure FIGREF13 shows an example with different levels of consistency between attention and word alignments. For the target words “will" and “come" the attention is not focused on the manually aligned word but distributed between the aligned word and other words. The focus of this paper is examining cases where attention does not follow alignment, answering the questions whether those cases represent errors or desirable behavior of the attention model. Measuring Attention Concentration As another informative variable in our analysis, we look into the attention concentration. While most word alignments only involve one or a few words, attention can be distributed more freely. We measure the concentration of attention by computing the entropy of the attention distribution: DISPLAYFORM0 Empirical Analysis of Attention Behaviour We conduct our analysis using the two different attention models described in Section SECREF3 . Our first attention model is the global model without input-feeding as introduced by DBLPjournalscorrLuongPM15. The second model is the input-feeding model BIBREF5 , which uses recurrent attention. Our NMT system is a unidirectional encoder-decoder system as described in BIBREF5 , using 4 recurrent layers. We trained the systems with dimension size of 1,000 and batch size of 80 for 20 epochs. The vocabulary for both source and target side is set to be the 30K most common words. The learning rate is set to be 1 and a maximum gradient norm of 5 has been used. We also use a dropout rate of 0.3 to avoid overfitting. Impact of Attention Mechanism We train both of the systems on the WMT15 German-to-English training data, see Table TABREF18 for some statistics. Table TABREF17 shows the BLEU scores BIBREF12 for both systems on different test sets. Since we use POS tags and dependency roles in our analysis, both of which are based on words, we chose not to use BPE BIBREF13 which operates at the sub-word level. We report alignment error rate (AER) BIBREF14 , which is commonly used to measure alignment quality, in Table TABREF20 to show the difference between attentions and human alignments provided by RWTH German-English dataset. To compute AER over attentions, we follow DBLPjournalscorrLuongPM15 to produce hard alignments from attentions by choosing the most attended source word for each target word. We also use GIZA++ BIBREF10 to produce automatic alignments over the data set to allow for a comparison between automatically generated alignments and the attentions generated by our systems. GIZA++ is run in both directions and alignments are symmetrized using the grow-diag-final-and refined alignment heuristic. 
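Before turning to the results, the diagnostic quantities defined above — the soft alignments (Equation EQREF10), the attention loss (Equation EQREF11), the attention concentration (Equation EQREF16), and Spearman's rank correlation — can be computed with a few lines of code; this is an illustrative sketch, not the authors' implementation:

import numpy as np
from scipy.stats import spearmanr

def soft_alignment(aligned_src, src_len):
    # Convert the hard alignment of one target word into a soft one: each manually
    # aligned source position gets weight 1/|A_t|; a target word with no alignment
    # is treated as aligned to every source word, as described above.
    if len(aligned_src) == 0:
        aligned_src = list(range(src_len))
    w = np.zeros(src_len)
    w[list(aligned_src)] = 1.0 / len(aligned_src)
    return w

def attention_loss(attention, soft_align, eps=1e-12):
    # Cross entropy between the soft alignment and the attention distribution.
    return -np.sum(soft_align * np.log(attention + eps))

def attention_entropy(attention, eps=1e-12):
    # Concentration of attention: low entropy = focused, high entropy = smeared out.
    return -np.sum(attention * np.log(attention + eps))

# Per-POS correlation between attention loss and word prediction loss:
# rho, _ = spearmanr(attention_losses_for_tag, prediction_losses_for_tag)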
As shown in Table TABREF20 , the input-feeding system not only achieves a higher BLEU score, but also uses attentions that are closer to the human alignments. Table TABREF21 compares input-feeding and non-recurrent attention in terms of attention loss computed using Equation EQREF11 . Here the losses between the attention produced by each system and the human alignments is reported. As expected, the difference in attention losses are in line with AER. The difference between these comparisons is that AER only takes the most attended word into account while attention loss considers the entire attention distribution. Alignment Quality Impact on Translation Based on the results in Section SECREF19 , one might be inclined to conclude that the closer the attention is to the word alignments the better the translation. However, chen2016guided, liu-EtAl:2016:COLING, alkhouli-EtAl:2016:WMT report mixed results by optimizing their NMT system with respect to word prediction and alignment quality. These findings warrant a more fine-grained analysis of attention. To this end, we include POS tags in our analysis and study the patterns of attention based on POS tags of the target words. We choose POS tags because they exhibit some simple syntactic characteristics. We use the coarse grained universal POS tags BIBREF15 given in Table TABREF25 . To better understand how attention accuracy affects translation quality, we analyse the relationship between attention loss and word prediction loss for individual part-of-speech classes. Figure FIGREF22 shows how attention loss differs when generating different POS tags. One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN. Considering this difference and the observations in Section SECREF19 , a natural follow-up would be to focus on getting the attention of verbs to be closer to alignments. However, Figure FIGREF22 shows that the average word prediction loss for verbs is actually smaller compared to the loss for nouns. In other words, although the attention for verbs is substantially more inconsistent with the word alignments than for nouns, the NMT system translates verbs more accurately than nouns on average. To formalize this relationship we compute Spearman's rank correlation between word prediction loss and attention loss, based on the POS tags of the target side, for the input-feeding model, see Figure FIGREF27 . The low correlation for verbs confirms that attention to other parts of source sentence rather than the aligned word is necessary for translating verbs and that attention does not necessarily have to follow alignments. However, the higher correlation for nouns means that consistency of attention with alignments is more desirable. This could, in a way, explain the mixed result reported for training attention using alignments BIBREF9 , BIBREF7 , BIBREF8 . Especially the results by chen2016guided in which large improvements are achieved for the e-commerce domain which contains many OOV product names and placeholders, but no or very weak improvements were achieved over common domains. Attention Concentration In word alignment, most target words are aligned to one source word. 
The average number of source words aligned to nouns and verbs is 1.1 and 1.2 respectively. To investigate to what extent this also holds for attention we measure the attention concentration by computing the entropy of the attention distribution, see Equation EQREF16 . Figure FIGREF28 shows the average entropy of attention based on POS tags. As shown, nouns have one of the lowest entropies meaning that on average the attention for nouns tends to be concentrated. This also explains the closeness of the attention to alignments for nouns. In addition, the correlation between attention entropy and attention loss in case of nouns is high as shown in Figure FIGREF28 . This means that attention entropy can be used as a measure of closeness of attention to alignment in the case of nouns. The higher attention entropy for verbs, in Figure FIGREF28 , shows that the attention is more distributed compared to nouns. The low correlation between attention entropy and word prediction loss (see Figure FIGREF32 ) shows that attention concentration is not required when translating into verbs. This also confirms that the correct translation of verbs requires the systems to pay attention to different parts of the source sentence. Another interesting observation here is the low correlation for pronouns (PRON) and particles (PRT), see Figure FIGREF32 . As can be seen in Figure FIGREF28 , these tags have more distributed attention comparing to nouns, for example. This could either mean that the attention model does not know where to focus or it deliberately pays attention to multiple, somehow relevant, places to be able to produce a better translation. The latter is supported by the relatively low word prediction losses, shown in the Figure FIGREF22 . Attention Distribution To further understand under which conditions attention is paid to words other than the aligned words, we study the distribution of attention over the source words. First, we measure how much attention is paid to the aligned words for each POS tag, on average. To this end, we compute the percentage of the probability mass that the attention model has assigned to aligned words for each POS tag, see Table TABREF35 . One can notice that less than half of the attention is paid to alignment points for most of the POS tags. To examine how the rest of attention in each case has been distributed over the source sentence we measure the attention distribution over dependency roles in the source side. We first parse the source side of RWTH data using the ParZu parser BIBREF16 . Then we compute how the attention probability mass given to the words other than the alignment points, is distributed over dependency roles. Table TABREF33 gives the most attended roles for each POS tag. Here, we focus on POS tags discussed earlier. One can see that the most attended roles when translating to nouns include adjectives and determiners and in the case of translating to verbs, it includes auxiliary verbs, adverbs (including negation), subjects, and objects. Conclusion In this paper, we have studied attention in neural machine translation and provided an analysis of the relation between attention and word alignment. We have shown that attention agrees with traditional alignment to a certain extent. However, this differs substantially by attention mechanism and the type of the word being generated. We have shown that attention has different patterns based on the POS tag of the target word. 
The concentrated pattern of attention and the relatively high correlations for nouns show that training the attention with explicit alignment labels is useful for generating nouns. However, this is not the case for verbs, since the large portion of attention paid to words other than the alignment points is already capturing other relevant information. Training attention with alignments in this case would force the attention model to forget this useful information. This explains the mixed results reported when guiding attention to comply with alignments BIBREF9 , BIBREF7 , BIBREF8 . Acknowledgments This research was funded in part by the Netherlands Organization for Scientific Research (NWO) under project numbers 639.022.213 and 612.001.218.
[ "For certain POS tags, e.g. VERB, PRON.", "most word alignments only involve one or a few words, attention can be distributed more freely" ]
3,366
qasper_e
en
null
e7af1912d971eeb3c7f5f96c5440ef649b96bda636efa14d
Which model do they use for end-to-end speech recognition?
Introduction State-of-the-art automatic speech recognition (ASR) systems BIBREF0 have large model capacities and require significant quantities of training data to generalize. Labeling thousands of hours of audio, however, is expensive and time-consuming. A natural question to ask is how to achieve better generalization with fewer training examples. Active learning studies this problem by identifying and labeling only the most informative data, potentially reducing sample complexity. How much active learning can help in large-scale, end-to-end ASR systems, however, is still an open question. The speech recognition community has generally identified the informativeness of samples by calculating confidence scores. In particular, an utterance is considered informative if the most likely prediction has small probability BIBREF1 , or if the predictions are distributed very uniformly over the labels BIBREF2 . Though confidence-based measures work well in practice, less attention has been focused on gradient-based methods like Expected Gradient Length (EGL) BIBREF3 , where the informativeness is measured by the norm of the gradient incurred by the instance. EGL has previously been justified as intuitively measuring the expected change in a model's parameters BIBREF3 .We formalize this intuition from the perspective of asymptotic variance reduction, and experimentally, we show EGL to be superior to confidence-based methods on speech recognition tasks. Additionally, we observe that the ranking of samples scored by EGL is not correlated with that of confidence scoring, suggesting EGL identifies aspects of an instance that confidence scores cannot capture. In BIBREF3 , EGL was applied to active learning on sequence labeling tasks, but our work is the first we know of to apply EGL to speech recognition in particular. Gradient-based methods have also found applications outside active learning. For example, BIBREF4 suggests that in stochastic gradient descent, sampling training instances with probabilities proportional to their gradient lengths can speed up convergence. From the perspective of variance reduction, this importance sampling problem shares many similarities to problems found in active learning. Problem Formulation Denote INLINEFORM0 as an utterance and INLINEFORM1 the corresponding label (transcription). A speech recognition system models the conditional distribution INLINEFORM2 , where INLINEFORM3 are the parameters in the model, and INLINEFORM4 is typically implemented by a Recurrent Neural Network (RNN). A training set is a collection of INLINEFORM5 pairs, denoted as INLINEFORM6 . The parameters of the model are estimated by minimizing the negative log-likelihood on the training set: DISPLAYFORM0 Active learning seeks to augment the training set with a new set of utterances and labels INLINEFORM0 in order to achieve good generalization on a held-out test dataset. In many applications, there is an unlabeled pool INLINEFORM1 which is costly to label in its entirety. INLINEFORM2 is queried for the “most informative” instance(s) INLINEFORM3 , for which the label(s) INLINEFORM4 are then obtained. We discuss several such query strategies below. Confidence Scores Confidence scoring has been used extensively as a proxy for the informativeness of training samples. Specifically, an INLINEFORM0 is considered informative if the predictions are uniformly distributed over all the labels BIBREF2 , or if the best prediction of its label is with low probability BIBREF1 . 
By taking the instances which “confuse” the model, these methods may effectively explore under-sampled regions of the input space. Expected Gradient Length Intuitively, an instance can be considered informative if it results in large changes in model parameters. A natural measure of the change is gradient length, INLINEFORM0 . Motivated by this intuition, Expected Gradient Length (EGL) BIBREF3 picks the instances expected to have the largest gradient length. Since labels are unknown on INLINEFORM1 , EGL computes the expectation of the gradient norm over all possible labelings. BIBREF3 interprets EGL as “expected model change”. In the following section, we formalize the intuition for EGL and show that it follows naturally from reducing the variance of an estimator. Variance in the Asymptote Assume the joint distribution of INLINEFORM0 has the following form, DISPLAYFORM0 where INLINEFORM0 is the true parameter, and INLINEFORM1 is independent of INLINEFORM2 . By selecting a subset of the training data, we are essentially choosing another distribution INLINEFORM3 so that the INLINEFORM4 pairs are drawn from INLINEFORM5 Statistical signal processing theory BIBREF5 states the following asymptotic distribution of INLINEFORM0 , DISPLAYFORM0 where INLINEFORM0 is the Fisher Information Matrix with respect to INLINEFORM1 . Using first order approximation at INLINEFORM2 , we have asymptotically, DISPLAYFORM0 Eq. ( EQREF7 ) indicates that to reduce INLINEFORM0 on test data, we need to minimize the expected variance INLINEFORM1 over the test set. This is called Fisher Information Ratio criteria in BIBREF6 , which itself is hard to optimize. An easier surrogate is to maximize INLINEFORM2 . Substituting Eq. ( EQREF5 ) into INLINEFORM3 , we have INLINEFORM4 which is equivalent to INLINEFORM0 A practical issue is that we do not know INLINEFORM0 in advance. We could instead substitute an estimate INLINEFORM1 from a pre-trained model, where it is reasonable to assume the INLINEFORM2 to be close to the true INLINEFORM3 . The batch selection then works by taking the samples that have largest gradient norms, DISPLAYFORM0 For RNNs, the gradients for each potential label can be obtained by back-propagation. Another practical issue is that EGL marginalizes over all possible labelings, but in speech recognition, the number of labelings scales exponentially in the number of timesteps. Therefore, we only marginalize over the INLINEFORM0 most probable labelings. They are obtained by beam search decoding, as in BIBREF7 . The EGL method in BIBREF3 is almost the same as Eq. ( EQREF8 ), except the gradient's norm is not squared in BIBREF3 . Here we have provided a more formal characterization of EGL to complement its intuitive interpretation as “expected model change” in BIBREF3 . For notational convenience, we denote Eq. ( EQREF8 ) as EGL in subsequent sections. Experiments We empirically validate EGL on speech recognition tasks. In our experiments, the RNN takes in spectrograms of utterances, passing them through two 2D-convolutional layers, followed by seven bi-directional recurrent layers and a fully-connected layer with softmax activation. All recurrent layers are batch normalized. At each timestep, the softmax activations give a probability distribution over the characters. CTC loss BIBREF8 is then computed from the timestep-wise probabilities. A base model, INLINEFORM0 , is trained on 190 hours ( INLINEFORM1 100K instances) of transcribed speech data. 
Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset. We query labels for the selected subset and incorporate them into training. Learning rates are tuned on a small validation set of 2048 instances. The trained model is then tested on a 156-hour ( INLINEFORM3 100K instances) test set and we report CTC loss, Character Error Rate (CER) and Word Error Rate (WER). The confidence score methods BIBREF1 , BIBREF2 can be easily extended to our setup. Specifically, from the probabilities over the characters, we can compute an entropy per timestep and then average them. This method is denoted as entropy. We could also take the most likely prediction and calculate its CTC loss, normalized by number of timesteps. This method is denoted as pCTC (predicted CTC) in the following sections. We implement EGL by marginalizing over the most likely 100 labels, and compare it with: 1) a random selection baseline, 2) entropy, and 3) pCTC. Using the same base model, each method queries a variable percentage of the unlabeled dataset. The queries are then included into training set, and the model continues training until convergence. Fig. FIGREF9 reports the metrics (Exact values are reported in Table TABREF12 in the Appendix) on the test set as the query percentage varies. All the active learning methods outperform the random baseline. Moreover, EGL shows a steeper, more rapid reduction in error than all other approaches. Specifically, when querying 20% of the unlabeled dataset, EGL has 11.58% lower CER and 11.09% lower WER relative to random. The performance of EGL at querying 20% is on par with random at 40%, suggesting that using EGL can lead to an approximate 50% decrease in data labeling. Similarity between Query Methods It is useful to understand how the three active learning methods differ in measuring the informativeness of an instance. To compare any two methods, we take rankings of informativeness given by these two methods, and plot them in a 2-D ranking-vs-ranking coordinate system. A plot close to the diagonal implies that these two methods evaluate informativeness in a very similar way. Fig. FIGREF11 shows the ranking-vs-ranking plots between pCTC and entropy, EGL and entropy. We observe that pCTC rankings and entropy rankings (Fig. FIGREF11 ) are very correlated. This is likely because they are both related to model uncertainty. In contrast, EGL gives very different rankings from entropy (Fig. FIGREF11 ). This suggests EGL is able to identify aspects of an instance that uncertainty-based measurements cannot capture. We further investigate the samples for which EGL and entropy yield vastly different estimates of informativeness, e.g., the elements in the red circle in Fig. FIGREF11 . These particular samples consist of short utterances containing silence (with background noise) or filler words. Further investigation is required to understand whether these samples are noisy outliers or whether they are in fact important for training end-to-end speech recognition systems. Conclusion and Future Work We formally explained EGL from a variance reduction perspective and experimentally tested its performance on end-to-end speech recognition systems. Initial experiments show a notable gain over random selection, and that it outperforms confidence score methods used in the ASR community. We also show EGL measures sample informativeness in a very different way from confidence scores, giving rise to open research questions. 
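For concreteness, the EGL scoring rule evaluated in these experiments (Eq. EQREF8: the probability-weighted squared gradient norm over the K most likely beam-search labelings, K = 100 here) could be sketched roughly as follows; the model interface (e.g., a ctc_loss method returning the negative log-likelihood) is hypothetical and not the authors' code:

import torch

def egl_score(model, utterance, beam_hypotheses):
    # beam_hypotheses: list of (labeling, probability) pairs from beam-search decoding,
    # truncated to the K most probable labelings.
    probs = torch.tensor([p for _, p in beam_hypotheses])
    probs = probs / probs.sum()                    # renormalize over the truncated beam
    score = 0.0
    for (labels, _), p in zip(beam_hypotheses, probs):
        model.zero_grad()
        nll = model.ctc_loss(utterance, labels)    # hypothetical: -log p(labels | utterance)
        nll.backward()
        sq_norm = sum(float((param.grad ** 2).sum())
                      for param in model.parameters() if param.grad is not None)
        score += float(p) * sq_norm                # expected squared gradient length
    return score

# Active learning query: label the pool utterances with the largest egl_score values.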
All the experiments reported here query all samples in a single batch. It is also worth considering the effects of querying samples in a sequential manner. In the future, we will further validate the approach with sequential queries and seek to make the informativeness measure robust to outliers.
[ "RNN", " Recurrent Neural Network (RNN)" ]
1,644
qasper_e
en
null
0e6d9eaa143bb2a251695d6cc0556c5e03f25ccbb56a171f
What is the baseline?
Introduction Named Entity Recognition (NER) is a foremost NLP task that labels each atomic element of a sentence with a specific category like "PERSON", "LOCATION", "ORGANIZATION" and others BIBREF0. There has been extensive NER research on English, German, Dutch and Spanish BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low resource South Asian languages like Hindi BIBREF6, Indonesian BIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu) BIBREF8. However, there has been no study on developing neural NER for the Nepali language. In this paper, we propose a neural Nepali NER using the latest state-of-the-art architecture based on grapheme-level representations, which requires neither hand-crafted features nor data pre-processing. Recent neural architectures like BIBREF1 relax the need to hand-craft features and to use part-of-speech tags to determine the category of an entity. However, these architectures have been studied for languages like English and German and have not been applied to languages like Nepali, which is a low resource language, i.e., only a limited dataset is available to train a model. Traditional methods like Hidden Markov Models (HMM) with rule based approaches BIBREF9, BIBREF10, and Support Vector Machines (SVM) with manual feature engineering BIBREF11 have been applied, but they perform poorly compared to neural approaches. However, there has been no research on Nepali NER using neural networks. Therefore, we created a named entity annotated dataset, partly with the help of Dataturk, to train a neural model. The texts used for this dataset were collected from various daily news sources from Nepal around the years 2015-2016. Following are our contributions: (1) We present a novel Named Entity Recognizer (NER) for the Nepali language; to the best of our knowledge, we are the first to propose a neural based Nepali NER. (2) As there is no good quality dataset to train NER, we release a dataset to support future research. (3) We perform an empirical evaluation of our model against state-of-the-art models, with a relative improvement of up to 10%. In this paper, we present works similar to ours in Section SECREF2. We describe our approach and dataset statistics in Sections SECREF3 and SECREF4, followed by our experiments, evaluation and discussion in Sections SECREF5, SECREF6, and SECREF7. We conclude with our observations in Section SECREF8. To facilitate further research, our code and dataset will be made available at github.com/link-yet-to-be-updated. Related Work There has been a handful of research on the Nepali NER task based on approaches like Support Vector Machines with gazetteer lists BIBREF11 and Hidden Markov Models with gazetteer lists BIBREF9, BIBREF10. BIBREF11 uses an SVM along with features like the first word, word length, digit features and gazetteers (person, organization, location, middle name, verb, designation and others). It uses a one-vs-rest classification model to classify each word into a different entity class. However, it does not take the context words into account while training the model. Similarly, BIBREF9 and BIBREF10 use Hidden Markov Models with an n-gram technique for extracting POS-tags. POS-tags with common noun, proper noun or a combination of both are combined together, and a gazetteer list is then used as a look-up table to identify the named entities. 
Researchers have shown that neural networks like CNNs BIBREF12, RNNs BIBREF13, LSTMs BIBREF14 and GRUs BIBREF15 can capture the semantic knowledge of a language better with the help of pre-trained embeddings like word2vec BIBREF16, GloVe BIBREF17 or fastText BIBREF18. Similar approaches have been applied to many South Asian languages like Hindi BIBREF6, Indonesian BIBREF7 and Bengali BIBREF19. In this paper, we present a neural network architecture for the NER task in the Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First, we compare BiLSTM BIBREF14, BiLSTM+CNN BIBREF20, BiLSTM+CRF BIBREF1 and BiLSTM+CNN+CRF BIBREF2 models with the CNN model BIBREF0 and the Stanford CRF model BIBREF21. Secondly, we compare models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech (POS) one-hot encoding and word embedding + grapheme clustered or sub-word embedding BIBREF22. The experiments were performed on the dataset that we created and on the dataset received from the ILPRL lab. Our extensive study shows that augmenting word embeddings with a character or grapheme-level representation and a POS one-hot encoding vector yields better results compared to using general word embeddings alone. Approach In this section, we describe our approach to building our model. This model is partly inspired by multiple models BIBREF20, BIBREF1, and BIBREF2. Approach ::: Bidirectional LSTM We used a bi-directional LSTM to capture the word representation in the forward as well as the reverse direction of a sentence. Generally, LSTMs take inputs from the left (past) of the sentence and compute the hidden state. However, it has proven beneficial BIBREF23 to use a bi-directional LSTM, where hidden states are also computed from the right (future) of the sentence and both of these hidden states are concatenated to produce the final output as $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$], where $\overrightarrow{h_t}$ and $\overleftarrow{h_t}$ are the hidden states computed in the forward and backward direction respectively. Approach ::: Features ::: Word embeddings We have used Word2Vec BIBREF16, GloVe BIBREF17 and FastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from the Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web-texts and newspapers. This corpus was mixed with the texts from the dataset before training the CBOW and skip-gram versions of word2vec using the gensim library BIBREF24. The trained model contains vectors for 72,782 unique words. Light pre-processing was performed on the corpus before training it. For example, invalid characters or characters other than Devanagari were removed, but punctuation and numbers were not removed. We set the context window to 10, and rare words whose count is below 5 were dropped. These word embeddings were not frozen during the training session because fine-tuning word embeddings helps achieve better performance compared to frozen ones BIBREF20. We have used fastText embeddings in particular because of their sub-word representation ability, which is very useful for a highly inflectional language, as shown in Table TABREF25. We have trained the word embeddings in such a way that the sub-word size remains between 1 and 4. 
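As an illustration, this sub-word embedding setup could be reproduced with gensim roughly as follows; whether the authors used gensim's FastText implementation or the standalone fastText tool is not stated, and the number of epochs is an assumption:

from gensim.models import FastText

# `sentences` is assumed to be an iterable of tokenized Nepali sentences from the mixed
# corpus described above. Settings mirror the text: 300-d vectors, context window 10,
# words with count < 5 dropped, and sub-words (character n-grams) of length 1 to 4.
model = FastText(vector_size=300, window=10, min_count=5, min_n=1, max_n=4, sg=1)
model.build_vocab(corpus_iterable=sentences)
model.train(corpus_iterable=sentences, total_examples=model.corpus_count, epochs=5)  # epochs assumed
# sg=1 gives the skip-gram variant; sg=0 would give the CBOW variant also trained in the paper.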
We chose this sub-word size range in particular because in Nepali a single letter can also be a word (for example e, t, C, r, l, n, u), and a single grapheme or sub-word can be formed by combining dependent vowel signs with consonant letters, for example C + O + = CO, where three different characters combine into a single sub-word. A two-dimensional visualization of the example word npAl is shown in FIGREF14. Principal Component Analysis (PCA) was used to generate this visualization, which helps us analyze the nearest-neighbor words of a given sample word. 84 and 104 nearest neighbors were observed using the word2vec and fastText embeddings, respectively, on the same corpus. Approach ::: Features ::: Character-level embeddings BIBREF20 and BIBREF2 showed that character-level embeddings extracted using a CNN, when combined with word embeddings, enhance NER performance significantly, as they are able to capture the morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where the inputs to the CNN are graphemes. The character-level CNN is built in the same fashion, except that the inputs are characters. Grapheme- or character-level embeddings are randomly initialized with real values drawn uniformly from [0,1], with dimension 30. Approach ::: Features ::: Grapheme-level embeddings A grapheme is an atomic meaningful unit in the writing system of a language. Since Nepali is highly morphologically inflectional, we compared grapheme-level representations with character-level representations to evaluate their effect. For example, with character-level embeddings, each character of the word npAl (n + + p + A + l) has its own embedding. At the grapheme level, the word npAl is instead clustered into graphemes, resulting in n + pA + l, and each grapheme has its own embedding. Grapheme-level embeddings yield scores on par with character-level embeddings in highly inflectional languages like Nepali, because graphemes capture syntactic information similarly to characters. We created the grapheme clusters using the uniseg package, which is helpful for Unicode text segmentation. Approach ::: Features ::: Part-of-speech (POS) one-hot encoding We created a one-hot encoded vector of POS tags and concatenated it with the pre-trained word embeddings before passing it to the BiLSTM network. A sample of the data is shown in Figure FIGREF13. Dataset Statistics ::: OurNepali dataset Since there was no publicly available standard Nepali NER dataset and we did not receive any dataset from previous researchers, we had to create our own. This dataset contains sentences collected from daily newspapers of the years 2015-2016. It has three major classes: Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creating the dataset; for example, all punctuation marks and numbers besides ',', '-', '|' and '.' were removed. The dataset is currently in the standard CoNLL-2003 IO format BIBREF25. Since this dataset is not lemmatized originally, we lemmatized only the post-positions such as Ek, kO, l, mA, m, my, jF, sg, aEG, which are just a few examples among the 299 post-positions in the Nepali language. We obtained these post-positions from sanjaalcorps and added a few more to match our dataset. We will be releasing this list in our GitHub repository. We found that lemmatizing the post-positions boosted the F1 score by almost 10%.
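A rough sketch of this post-position stripping is shown below; the suffix list is a small illustrative subset written in the same romanization used above, and a real implementation would use the full 299-entry list released in the repository.

```python
# Sketch: strip a known post-position suffix from a token so that inflected
# forms collapse onto their root before embedding look-up. The suffixes and
# tokens below are only an illustrative romanized subset, not the full list.
POST_POSITIONS = ["kO", "lE", "mA", "le", "ko", "ma"]   # illustrative only

def strip_post_position(token):
    for suffix in sorted(POST_POSITIONS, key=len, reverse=True):
        if token.endswith(suffix) and len(token) > len(suffix):
            return token[: -len(suffix)]
    return token

tokens = ["nepAlmA", "rAmkO", "skUl"]                    # hypothetical romanized tokens
print([strip_post_position(t) for t in tokens])          # -> ["nepAl", "rAm", "skUl"]
```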
In order to label our dataset with POS tags, we first created a POS-annotated dataset of 6946 sentences and 16225 unique words extracted from the POS-tagged Nepali National Corpus and trained a BiLSTM model reaching 95.14% accuracy, which was then used to produce POS tags for our dataset. The dataset released in our GitHub repository contains each word on its own line with space-separated POS tags and entity tags; sentences are separated by an empty line. A sample sentence from the dataset is presented in Figure FIGREF13. Dataset Statistics ::: ILPRL dataset After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows the standard CoNLL-2003 IOB format BIBREF25 with POS tags and was prepared by the ILPRL Lab, KU and KEIV Technologies. A few corrections, such as fixing the NER tags, had to be made to the dataset. The statistics of both datasets are presented in Table TABREF23. Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) in both of the datasets used in our experiments. Each dataset is divided into three parts, with 64%, 16% and 20% of the total data forming the training set, development set and test set, respectively. Experiments In this section, we present the details of training our neural networks. The neural network architectures are implemented using the PyTorch framework BIBREF26, and training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiments on BiLSTM, BiLSTM-CNN, BiLSTM-CRF and BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. Training and evaluation are done at the sentence level. The RNN variants are initialized randomly from $(-\sqrt{k},\sqrt{k})$ where $k=\frac{1}{hidden\_size}$. We first loaded our dataset and built the vocabulary using the torchtext library; its SequenceTaggingDataset class eased the data-loading process. We trained our models on the shuffled training set using the Adam optimizer with the hyper-parameters mentioned in Table TABREF30. All our models were trained with a single LSTM layer. We found that Adam gave better performance and faster convergence than Stochastic Gradient Descent (SGD). We chose these hyper-parameters after many ablation studies. A dropout of 0.5 is applied after the LSTM layer. For the CNN, we used 30 filters of sizes 3, 4 and 5. The embeddings of each character or grapheme in a given word were passed through a pipeline of convolution, rectified linear unit and max-pooling. The resulting vectors were concatenated, a dropout of 0.5 was applied, and the result was passed into a linear layer to obtain an embedding of size 30 for the given word. This resulting embedding is concatenated with the word embedding, which is in turn concatenated with the one-hot POS vector. Experiments ::: Tagging Scheme For our current experiments, we trained our models using the IO (Inside, Outside) format for both datasets; hence the data does not contain any B-type annotation, unlike in the BIO (Beginning, Inside, Outside) scheme. Experiments ::: Early Stopping We used a simple early-stopping technique: if the validation loss does not decrease for 10 epochs, training is stopped; otherwise training runs for up to 100 epochs. In our experience, training usually stops after around 30-50 epochs.
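Looking back at the CNN component described in this section, a minimal PyTorch sketch is given below; it illustrates the stated pipeline (convolution, ReLU, max-pooling, dropout 0.5, linear projection to a 30-dimensional word vector), and treating "30 filters" as 30 filters per kernel size is our assumption rather than a confirmed detail.

```python
# Sketch of the character/grapheme CNN word encoder described above: embeddings
# of the units in a word go through Conv -> ReLU -> max-pool per filter size,
# the pooled vectors are concatenated, dropout 0.5 is applied, and a linear
# layer maps the result to a 30-dimensional word-level vector.
import torch
import torch.nn as nn

class CharCNNEncoder(nn.Module):
    def __init__(self, vocab_size, char_dim=30, num_filters=30,
                 kernel_sizes=(3, 4, 5), out_dim=30, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, num_filters, k, padding=k - 1) for k in kernel_sizes
        )
        self.dropout = nn.Dropout(dropout)
        self.proj = nn.Linear(num_filters * len(kernel_sizes), out_dim)

    def forward(self, char_ids):
        # char_ids: (batch_of_words, max_chars_per_word)
        x = self.embed(char_ids).transpose(1, 2)          # (B, char_dim, L)
        pooled = [torch.relu(conv(x)).max(dim=2).values    # max-pool over positions
                  for conv in self.convs]
        h = self.dropout(torch.cat(pooled, dim=1))
        return self.proj(h)                                # (B, out_dim)

# The 30-d output would then be concatenated with the word embedding and the
# one-hot POS vector before the BiLSTM, as described in the text.
```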
Experiments ::: Hyper-parameter Tuning We searched for the best hyper-parameters by varying the learning rate over [0.1, 0.01, 0.001, 0.0001], the weight decay over [$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $10^{-7}$], the batch size over [1, 2, 4, 8, 16, 32, 64, 128], and the hidden size over [8, 16, 32, 64, 128, 256, 512, 1024]. Table TABREF30 shows all other hyper-parameters used in our experiments for both datasets. Experiments ::: Effect of Dropout Figure FIGREF31 shows how we ended up choosing 0.5 as the dropout rate. When no dropout layer is used, the F1 score is at its lowest. As we slowly increase the dropout rate, the F1 score gradually increases; however, beyond a dropout rate of 0.5, the F1 score starts to fall. Therefore, we chose 0.5 as the dropout rate for all other experiments. Evaluation In this section, we present the details of the evaluation and the comparison of our models with other baselines. Table TABREF25 presents the study of the various embeddings and their comparison on the OurNepali dataset. Here, "raw dataset" denotes the dataset in which post-positions are not lemmatized. We observe that pre-trained embeddings significantly improve the score compared to randomly initialized embeddings, and that skip-gram models perform better than CBOW models for both word2vec and fastText. Here, fastText_Pretrained denotes the embeddings readily available on the fastText website, while the other embeddings are trained on the Nepali National Corpus as mentioned in sub-section SECREF11. From Table TABREF25, we can clearly observe that the model using fastText_Skip Gram embeddings outperforms all other models. Table TABREF35 shows the architecture comparison between all the models we experimented with. The features used for the Stanford CRF classifier are words, letter n-grams of up to length 6, the previous word and the next word. This model is trained until the current function value is less than $1\mathrm {e}{-2}$. The hyper-parameters of the neural network experiments are set as shown in Table TABREF30. Since the character-level and grapheme-level embeddings are randomly initialized, their scores are close to each other. All models are evaluated using the CoNLL-2003 evaluation script BIBREF25 to calculate entity-wise precision, recall and F1 score. Discussion In this paper we show that we can exploit the power of neural networks to perform downstream NLP tasks such as Named Entity Recognition even for the Nepali language. We showed that the word vectors learned through the fastText skip-gram model perform better than the other word embeddings because of their ability to represent sub-words, which is particularly important for capturing the morphological structure of words and sentences in a highly inflectional language like Nepali. This idea can also come in handy for other Devanagari languages, because their written scripts have a similar syntactic structure. We also found that stemming post-positions helps considerably in improving model performance because of the inflectional characteristics of the Nepali language: when we separate out inflections or morphemes, we minimize the variations of the same word, which gives the root word a stronger word vector representation compared to its inflected versions. We can also infer from Tables TABREF23, TABREF24, and TABREF35 that we need more data to get better results, because the OurNepali dataset is almost ten times larger than the ILPRL dataset in terms of entities.
Conclusion and Future Work In this paper, we proposed a novel NER for the Nepali language, achieved a relative improvement of up to 10%, and studied different factors affecting NER performance for Nepali. We also presented a neural architecture, BiLSTM+CNN (grapheme-level), which turns out to perform on par with BiLSTM+CNN (character-level) under the same configuration. We believe this will help not only the Nepali language but also other languages under the Devanagari umbrella. Our BiLSTM+CNN (grapheme-level) and BiLSTM+CNN(G)+POS models outperform all other models experimented with on the OurNepali and ILPRL datasets, respectively. Since this is the first named entity recognition research for the Nepali language using neural networks, there is much room for improvement. We believe that initializing the grapheme-level embeddings with fastText embeddings, rather than randomly, might help boost performance. In the future, we plan to apply other recent techniques such as BERT, ELMo and FLAIR to study their effect on a low-resource language like Nepali. We also plan to improve the model using cross-lingual or multi-lingual parameter-sharing techniques by jointly training with other Devanagari languages such as Hindi and Bengali. Finally, we would like to contribute our dataset to the Nepali NLP community to move forward the research in the language understanding domain. We believe there should be a dedicated committee to create and maintain such datasets for Nepali NLP and to organize competitions that would elevate NLP research in Nepal. Some of the future works are listed below: Proper initialization of grapheme-level embeddings from fastText embeddings. Apply a robust POS tagger to the Nepali dataset. Lemmatize the OurNepali dataset with a robust and efficient lemmatizer. Improve Nepali language scores with cross-lingual learning techniques. Create more datasets using the Wikipedia/Wikidata framework. Acknowledgments The authors of this paper would like to express their sincere thanks to Bal Krishna Bal, Professor at Kathmandu University, for providing us with the POS-tagged Nepali NER data.
[ "CNN modelBIBREF0, Stanford CRF modelBIBREF21", "Bam et al. SVM, Ma and Hovy w/glove, Lample et al. w/fastText, Lample et al. w/word2vec" ]
2,836
qasper_e
en
null
bd1fcb1c0f33cf6bc44554bff4ec86fed2a4fc90e8d18cdf
When is this paper published?
Introduction Text summarization generates summaries from input documents while keeping salient information. It is an important task and can be applied to several real-world applications. Many methods have been proposed to solve the text summarization problem BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . There are two main text summarization techniques: extractive and abstractive. Extractive summarization generates summary by selecting salient sentences or phrases from the source text, while abstractive methods paraphrase and restructure sentences to compose the summary. We focus on abstractive summarization in this work as it is more flexible and thus can generate more diverse summaries. Recently, many abstractive approaches are introduced based on neural sequence-to-sequence framework BIBREF4 , BIBREF0 , BIBREF3 , BIBREF5 . Based on the sequence-to-sequence model with copy mechanism BIBREF6 , BIBREF0 incorporates a coverage vector to track and control attention scores on source text. BIBREF4 introduce intra-temporal attention processes in the encoder and decoder to address the repetition and incoherent problem. There are two issues in previous abstractive methods: 1) these methods use left-context-only decoder, thus do not have complete context when predicting each word. 2) they do not utilize the pre-trained contextualized language models on the decoder side, so it is more difficult for the decoder to learn summary representations, context interactions and language modeling together. Recently, BERT has been successfully used in various natural language processing tasks, such as textual entailment, name entity recognition and machine reading comprehensions. In this paper, we present a novel natural language generation model based on pre-trained language models (we use BERT in this work). As far as we know, this is the first work to extend BERT to the sequence generation task. To address the above issues of previous abstractive methods, in our model, we design a two-stage decoding process to make good use of BERT's context modeling ability. On the first stage, we generate the summary using a left-context-only-decoder. On the second stage, we mask each word of the summary and predict the refined word one-by-one using a refine decoder. To further improve the naturalness of the generated sequence, we cooperate reinforcement objective with the refine decoder. The main contributions of this work are: 1. We propose a natural language generation model based on BERT, making good use of the pre-trained language model in the encoder and decoder process, and the model can be trained end-to-end without handcrafted features. 2. We design a two-stage decoder process. In this architecture, our model can generate each word of the summary considering both sides' context information. 3. We conduct experiments on the benchmark datasets CNN/Daily Mail and New York Times. Our model achieves a 33.33 average of ROUGE-1, ROUGE-2 and ROUGE-L on the CNN/Daily Mail, which is state-of-the-art. On the New York Times dataset, our model achieves about 5.6% relative improvement over ROUGE-1. Text Summarization In this paper, we focus on single-document multi-sentence summarization and propose a supervised abstractive model based on the neural attentive sequence-to-sequence framework which consists of two parts: a neural network for the encoder and another network for the decoder. 
The encoder encodes the input sequence to intermediate representation and the decoder predicts one word at a time step given the input sequence representation vector and previous decoded output. The goal of the model is to maximize the probability of generating the correct target sequences. In the encoding and generation process, the attention mechanism is used to concentrate on the most important positions of text. The learning objective of most sequence-to-sequence models is to minimize the negative log likelihood of the generated sequence as following equation shows, where $y^*_i$ is the i-th ground-truth summary token. $$Loss = - \log \sum _{t=1}^N P(y_t^*|y_{<t}^*, X)$$ (Eq. 3) However, with this objective, traditional sequence generation models consider only one direction context in the decoding process, which could cause performance degradation since complete context of one token contains preceding and following tokens, thus feeding only preceded decoded words to the decoder so that the model may generate unnatural sequences. For example, attentive sequence-to-sequence models often generate sequences with repeated phrases which harm the naturalness. Some previous works mitigate this problem by improving the attention calculation process, but in this paper we show that feeding bi-directional context instead of left-only-context can better alleviate this problem. Text summarization models are usually classified to abstractive and extractive ones. Recently, extractive models like DeepChannel BIBREF8 , rnn-ext+RL BIBREF9 and NeuSUM BIBREF2 achieve higher performances using well-designed structures. For example, DeepChannel propose a salience estimation network and iteratively extract salient sentences. BIBREF16 train a sentence compression model to teach another latent variable extractive model. Also, several recent works focus on improving abstractive methods. BIBREF3 design a content selector to over-determine phrases in a source document that should be part of the summary. BIBREF11 introduce inconsistency loss to force words in less attended sentences(which determined by extractive model) to have lower generation probabilities. BIBREF5 extend seq2seq model with an information selection network to generate more informative summaries. Bi-Directional Pre-Trained Context Encoders Recently, context encoders such as ELMo, GPT, and BERT have been widely used in many NLP tasks. These models are pre-trained on a huge unlabeled corpus and can generate better contextualized token embeddings, thus the approaches built on top of them can achieve better performance. Since our method is based on BERT, we illustrate the process briefly here. BERT consists of several layers. In each layer there is first a multi-head self-attention sub-layer and then a linear affine sub-layer with the residual connection. In each self-attention sub-layer the attention scores $e_{ij}$ are first calculated as Eq. ( 5 ) () shows, in which $d_e$ is output dimension, and $W^Q, W^K, W^V$ are parameter matrices. $$&a_{ij} = \cfrac{(h_iW_Q)(h_jW_K)^T}{\sqrt{d_e}} \\ &e_{ij} = \cfrac{\exp {e_{ij}}}{\sum _{k=1}^N\exp {e_{ik}}} $$ (Eq. 5) Then the output is calculated as Eq. ( 6 ) shows, which is the weighted sum of previous outputs $h$ added by previous output $h_i$ . The last layer outputs is context encoding of input sequence. $$o_i = h_i + \sum _{j=1}^{N} e_{ij}(h_j W_V) $$ (Eq. 
6) Despite the wide usage and huge success, there is also a mismatch problem between these pre-trained context encoders and sequence-to-sequence models. The issue is that while using a pre-trained context encoder like GPT or BERT, they model token-level representations by conditioning on both direction context. During pre-training, they are fed with complete sequences. However, with a left-context-only decoder, these pre-trained language models will suffer from incomplete and inconsistent context and thus cannot generate good enough context-aware word representations, especially during the inference process. Model In this section, we describe the structure of our model, which learns to generate an abstractive multi-sentence summary from a given source document. Based on the sequence-to-sequence framework built on top of BERT, we first design a refine decoder at word-level to tackle the two problems described in the above section. We also introduce a discrete objective for the refine decoders to reduce the exposure bias problem. The overall structure of our model is illustrated in Figure 1 . Problem Formulation We denote the input document as $X = \lbrace x_1, \ldots , x_m\rbrace $ where $x_i \in \mathcal {X}$ represents one source token. The corresponding summary is denoted as $Y = \lbrace y_1, \ldots , y_L\rbrace $ , $L$ represents the summary length. Given input document $X$ , we first predict the summary draft by a left-context-only decoder, and then using the generated summary draft we can condition on both context sides and refine the content of the summary. The draft will guide and constrain the refine process of summary. Summary Draft Generation The summary draft is based on the sequence-to-sequence model. On the encoder side the input document $X$ is encoded into representation vectors $H = \lbrace h_1, \ldots , h_m\rbrace $ , and then fed to the decoder to generate the summary draft $A = \lbrace a_1, \ldots , a_{|a|}\rbrace $ . We simply use BERT as the encoder. It first maps the input sequence to word embeddings and then computes document embeddings as the encoder's output, denoted by following equation. $$H = BERT(x_1, \ldots , x_m)$$ (Eq. 10) In the draft decoder, we first introduce BERT's word embedding matrix to map the previous summary draft outputs $\lbrace y_1, \ldots , y_{t-1}\rbrace $ into embeddings vectors $\lbrace q_1, \ldots , q_{t-1}\rbrace $ at t-th time step. Note that as the input sequence of the decoder is not complete, we do not use the BERT network to predict the context vectors here. Then we introduce an $N$ layer Transformer decoder to learn the conditional probability $P(A|H)$ . Transformer's decoder-encoder multi-head attention helps the decoder learn soft alignments between summary and source document. At the t-th time step, the draft decoder predicts output probability conditioned on previous outputs and encoder hidden representations as Eq. ( 13 ) shows, in which $q_{<t} = \lbrace q_1, \ldots , q_{t-1}\rbrace $ . Each generated sequence will be truncated in the first position of a special token '[PAD]'. $$&P^{vocab}_t(w) = f_{dec}(q_{<t}, H) \\ &L_{dec} = \sum _{i=1}^{|a|} -\log P(a_i = y_i^*|a_{< i}, H) $$ (Eq. 13) As Eq. () shows, the decoder's learning objective is to minimize negative likelihood of conditional probability, in which $y_i^*$ is the i-th ground truth word of summary. 
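For orientation, a heavily simplified sketch of this draft-stage structure (a BERT encoder, BERT's word-embedding matrix for the draft tokens, and an N-layer left-to-right Transformer decoder) is shown below. It is not the authors' implementation: the copy mechanism and decoder positional embeddings are omitted, and it assumes the HuggingFace transformers API and a PyTorch version with batch_first support.

```python
# Minimal sketch (not the authors' code) of the draft decoder described above:
# BERT encodes the source document into H, BERT's word-embedding matrix embeds
# the previously generated draft tokens, and a left-to-right Transformer
# decoder attends over H to produce per-step vocabulary logits.
import torch
import torch.nn as nn
from transformers import BertModel

class DraftDecoderSketch(nn.Module):
    def __init__(self, num_layers=12, d_model=768, nhead=12):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.tok_embed = self.encoder.embeddings.word_embeddings   # reuse BERT's embedding matrix
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, self.encoder.config.vocab_size)

    def forward(self, src_ids, src_mask, draft_ids):
        H = self.encoder(input_ids=src_ids,
                         attention_mask=src_mask).last_hidden_state   # document encoding
        q = self.tok_embed(draft_ids)                                 # embeddings of previous draft tokens
        L = draft_ids.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)  # left-context-only mask
        dec = self.decoder(tgt=q, memory=H, tgt_mask=causal)
        return self.out(dec)                                          # vocabulary logits per step
```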
However a decoder with this structure is not sufficient enough: if we use the BERT network in this decoder, then during training and inference, in-complete context(part of sentence) is fed into the BERT module, and although we can fine-tune BERT's parameters, the input distribution is quite different from the pre-train process, and thus harms the quality of generated context representations. If we just use the embedding matrix here, it will be more difficult for the decoder with fresh parameters to learn to model representations as well as vocabulary probabilities, from a relative small corpus compared to BERT's huge pre-training corpus. In a word, the decoder cannot utilize BERT's ability to generate high quality context vectors, which will also harm performance. This issue exists when using any other contextualized word representations, so we design a refine process to mitigate it in our approach which will be described in the next sub-section. As some summary tokens are out-of-vocabulary words and occurs in input document, we incorporate copy mechanism BIBREF6 based on the Transformer decoder, we will describe it briefly. At decoder time step $t$ , we first calculate the attention probability distribution over source document $X$ using the bi-linear dot product of the last layer decoder output of Transformer $o_t$ and the encoder output $h_j$ , as Eq. ( 15 ) () shows. $$u_t^j =& o_t W_c h_j \\ \alpha _t^j =& \cfrac{\exp {u_t^j}}{\sum _{k=1}^N\exp {u_t^k}} $$ (Eq. 15) We then calculate copying gate $g_t\in [0, 1]$ , which makes a soft choice between selecting from source and generating from vocabulary, $W_c, W_g, b_g$ are parameters: $$g_t = sigmoid(W_g \cdot [o_t, h] + b_g) $$ (Eq. 16) Using $g_t$ we calculate the weighted sum of copy probability and generation probability to get the final predicted probability of extended vocabulary $\mathcal {V} + \mathcal {X}$ , where $\mathcal {X}$ is the set of out of vocabulary words from the source document. The final probability is calculated as follow: $$P_t(w) = (1-g_t)P_t^{vocab}(w) + g_t\sum _{i:w_i=w} \alpha _t^i$$ (Eq. 17) Summary Refine Process The main reason to introduce the refine process is to enhance the decoder using BERT's contextualized representations, so we do not modify the encoder and reuse it during this process. On the decoder side, we propose a new word-level refine decoder. The refine decoder receives a generated summary draft as input, and outputs a refined summary. It first masks each word in the summary draft one by one, then feeds the draft to BERT to generate context vectors. Finally it predicts a refined summary word using an $N$ layer Transformer decoder which is the same as the draft decoder. At t-th time step the n-th word of input summary is masked, and the decoder predicts the n-th refined word given other words of the summary. The learning objective of this process is shown in Eq. ( 19 ), $y_i$ is the i-th summary word and $y_{i}^*$ for the ground-truth summary word, and $a_{\ne i} = \lbrace a_1, \ldots , a_{i-1}, a_{i+1}, \ldots , a_{|y|}\rbrace $ . $$L_{refine} = \sum _{i=1}^{|y|} -\log P(y_i = y_i^*|a_{\ne i}, H) $$ (Eq. 19) From the view of BERT or other contextualized embeddings, the refine decoding process provides a more complete input sequence which is consistent with their pre-training processes. Intuitively, this process works as follows: first the draft decoder writes a summary draft based on a document, and then the refine decoder edits the draft. 
It concentrates on one word at a time, based on the source document as well as other words. We design the word-level refine decoder because this process is similar to the cloze task in BERT's pre-train process, therefore by using the ability of the contextual language model the decoder can generate more fluent and natural sequences. The parameters are shared between the draft decoder and refine decoder, as we find that using individual parameters the model's performance degrades a lot. The reason may be that we use teach-forcing during training, and thus the word-level refine decoder learns to predict words given all the other ground-truth words of summary. This objective is similar to the language model's pre-train objective, and is probably not enough for the decoder to learn to generate refined summaries. So in our model all decoders share the same parameters. Researchers usually use ROUGE as the evaluation metric for summarization, however during sequence-to-sequence model training, the objective is to maximize the log likelihood of generated sequences. This mis-match harms the model's performance, so we add a discrete objective to the model, and optimize it by introducing the policy gradient method. For example, the discrete objective for the summary draft process is as Eq. ( 21 ) shows, where $a^s$ is the draft summary sampled from predicted distribution, and $R(a^s)$ is the reward score compared with the ground-truth summary, we use ROUGE-L in our experiment. To balance between optimizing the discrete objective and generating readable sequences, we mix the discrete objective with maximum-likelihood objective. As Eq. () shows, minimizing $\hat{L}_{dec}$ is the final objective for the draft process, note here $L_{dec}$ is $-logP(a|x)$ . In the refine process we introduce similar objectives. $$L^{rl}_{dec} = R(a^s)\cdot [-\log (P(a^s|x))] \\ \hat{L}_{dec} = \gamma * L^{rl}_{dec} + (1 - \gamma ) * L_{dec} $$ (Eq. 21) Learning and Inference During model training, the objective of our model is sum of the two processes, jointly trained using "teacher-forcing" algorithm. During training we feed the ground-truth summary to each decoder and minimize the objective. $$L_{model} = \hat{L}_{dec} + \hat{L}_{refine}$$ (Eq. 23) At test time, each time step we choose the predicted word by $\hat{y} = argmax_{y^{\prime }} P(y^{\prime }|x)$ , use beam search to generate the draft summaries, and use greedy search to generate the refined summaries. Settings In this work, all of our models are built on $BERT_{BASE}$ , although another larger pre-trained model with better performance ( $BERT_{LARGE}$ ) has published but it costs too much time and GPU memory. We use WordPiece embeddings with a 30,000 vocabulary which is the same as BERT. We set the layer of transformer decoders to 12(8 on NYT50), and set the attention heads number to 12(8 on NYT50), set fully-connected sub-layer hidden size to 3072. We train the model using an Adam optimizer with learning rate of $3e-4$ , $\beta _1=0.9$ , $\beta _2=0.999$ and $\epsilon =10^{-9}$ and use a dynamic learning rate during the training process. For regularization, we use dropout BIBREF13 and label smoothing BIBREF14 in our models and set the dropout rate to 0.15, and the label smoothing value to 0.1. We set the RL objective factor $\gamma $ to 0.99. 
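Before the remaining training details, here is a small sketch of the mixed objective of Eq. (21) that this factor controls; rouge_l, sample_sequence and sequence_log_prob are hypothetical helper functions standing in for the actual implementation.

```python
# Sketch of the mixed objective of Eq. (21): a policy-gradient term that weights
# the negative log-likelihood of a sampled summary by its ROUGE-L reward, mixed
# with the usual maximum-likelihood loss using gamma = 0.99.
GAMMA = 0.99

def mixed_loss(model, source, target):
    # maximum-likelihood term (teacher forcing on the ground-truth summary)
    l_mle = -sequence_log_prob(model, source, target)            # hypothetical helper

    # policy-gradient term on a summary sampled from the model's distribution
    sampled = sample_sequence(model, source)                      # hypothetical helper
    reward = rouge_l(sampled, target)                             # R(a^s): ROUGE-L vs. reference
    l_rl = reward * (-sequence_log_prob(model, source, sampled))

    return GAMMA * l_rl + (1.0 - GAMMA) * l_mle
```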
During training, we set the batch size to 36, and train for 4 epochs(8 epochs for NYT50 since it has many fewer training samples), after training the best model are selected from last 10 models based on development set performance. Due to GPU memory limit, we use gradient accumulation, set accumulate step to 12 and feed 3 samples at each step. We use beam size 4 and length penalty of 1.0 to generate logical form sequences. We filter repeated tri-grams in beam-search process by setting word probability to zero if it will generate an tri-gram which exists in the existing summary. It is a nice method to avoid phrase repetition since the two datasets seldom contains repeated tri-grams in one summary. We also fine tune the generated sequences using another two simple rules. When there are multi summary sentences with exactly the same content, we keep the first one and remove the other sentences; we also remove sentences with less than 3 words from the result. To evaluate the performance of our model, we conduct experiments on CNN/Daily Mail dataset, which is a large collection of news articles and modified for summarization. Following BIBREF0 we choose the non-anonymized version of the dataset, which consists of more than 280,000 training samples and 11490 test set samples. We also conduct experiments on the New York Times(NYT) dataset which also consists of many news articles. The original dataset can be applied here. In our experiment, we follow the dataset splits and other pre-process settings of BIBREF15 . We first filter all samples without a full article text or abstract and then remove all samples with summaries shorter than 50 words. Then we choose the test set based on the date of publication(all examples published after January 1, 2007). The final dataset contains 22,000 training samples and 3,452 test samples and is called NYT50 since all summaries are longer than 50 words. We tokenize all sequences of the two datasets using the WordPiece tokenizer. After tokenizing, the average article length and summary length of CNN/Daily Mail are 691 and 51, and NYT50's average article length and summary length are 1152 and 75. We truncate the article length to 512, and the summary length to 100 in our experiment(max summary length is set to 150 on NYT50 as its average golden summary length is longer). On CNN/Daily Mail dataset, we report the full-length F-1 score of the ROUGE-1, ROUGE-2 and ROUGE-L metrics, calculated using PyRouge package and the Porter stemmer option. On NYT50, following BIBREF4 we evaluate limited length ROUGE recall score(limit the generated summary length to the ground truth length). We split NYT50 summaries into sentences by semicolons to calculate the ROUGE scores. Results and Analysis Table 1 shows the results on CNN/Daily Mail dataset, we compare the performance of many recent approaches with our model. We classify them to two groups based on whether they are extractive or abstractive models. As the last line of the table shows, the ROUGE-1 and ROUGE-2 score of our full model is comparable with DCA, and outperforms on ROUGE-L. Also, compared to extractive models NeuSUM and MASK- $LM^{global}$ , we achieve slight higher ROUGE-1. Except the four scores, our model outperforms these models on all the other scores, and since we have 95% confidence interval of at most $\pm $ 0.20, these improvements are statistically significant. As the last four lines of Table 1 show, we conduct an ablation study on our model variants to analyze the importance of each component. 
We use three ablation models for the experiments. One-Stage: a sequence-to-sequence model with copy mechanism based on BERT; Two-Stage: the One-Stage model with the word-level refine decoder added; Two-Stage + RL: the full model, with the refine process combined with the RL objective. First, comparing the Two-Stage+RL model with the Two-Stage ablation, we observe that the full model outperforms it by 0.30 on average ROUGE, suggesting that the reinforcement objective helps the model effectively. Then we analyze the effect of the refine process by removing the word-level refine decoder from the Two-Stage model, and observe that without the refine process the average ROUGE score drops by 1.69. The ablation study shows that each module is necessary for our full model, and the improvements are statistically significant on all metrics. To evaluate the impact of summary length on model performance, we compare the average ROUGE score improvements of our model for different lengths of ground-truth summaries. As the upper sub-figure of Figure 2 shows, compared to Pointer-Generator with Coverage, the improvements of our model on the length interval 40-80 (which covers about 70% of the test set) are higher than on shorter samples, confirming that with better context representations our model achieves higher performance on longer documents. As the lower sub-figure of Figure 2 shows, compared to the extractive baseline Lead-3 BIBREF0, the advantage of our model falls when the golden summary length is greater than 80. This is probably because we truncate long documents and golden summaries and thus cannot access the full information; it could also be because the training data in these intervals is too scarce to train an abstractive model, so a simple extractive method does not fall too far behind.
Conclusion and Future Work In this work, we propose a two-stage model based on sequence-to-sequence paradigm. Our model utilize BERT on both encoder and decoder sides, and introduce reinforce objective in learning process. We evaluate our model on two benchmark datasets CNN/Daily Mail and New York Times, the experimental results show that compared to previous systems our approach effectively improves performance. Although our experiments are conducted on summarization task, our model can be used in most natural language generation tasks, such as machine translation, question generation and paraphrasing. The refine decoder and mixed objective can also be applied on other sequence generation tasks, and we will investigate on them in future work.
[ "Unanswerable" ]
3,885
qasper_e
en
null
b0bc5483c009536bf3dcf582b3b2015d4db13aaf81df8c9a
How many people participated in their evaluation study of table-to-text models?
Introduction The task of generating natural language descriptions of structured data (such as tables) BIBREF2 , BIBREF3 , BIBREF4 has seen a growth in interest with the rise of sequence to sequence models that provide an easy way of encoding tables and generating text from them BIBREF0 , BIBREF1 , BIBREF5 , BIBREF6 . For text generation tasks, the only gold standard metric is to show the output to humans for judging its quality, but this is too expensive to apply repeatedly anytime small modifications are made to a system. Hence, automatic metrics that compare the generated text to one or more reference texts are routinely used to compare models BIBREF7 . For table-to-text generation, automatic evaluation has largely relied on BLEU BIBREF8 and ROUGE BIBREF9 . The underlying assumption behind these metrics is that the reference text is gold-standard, i.e., it is the ideal target text that a system should generate. In practice, however, when datasets are collected automatically and heuristically, the reference texts are often not ideal. Figure FIGREF2 shows an example from the WikiBio dataset BIBREF0 . Here the reference contains extra information which no system can be expected to produce given only the associated table. We call such reference texts divergent from the table. We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (§ SECREF36 ). For many table-to-text generation tasks, the tables themselves are in a pseudo-natural language format (e.g., WikiBio, WebNLG BIBREF6 , and E2E-NLG BIBREF10 ). In such cases we propose to compare the generated text to the underlying table as well to improve evaluation. We develop a new metric, PARENT (Precision And Recall of Entailed N-grams from the Table) (§ SECREF3 ). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table. We show that this method is more effective than using the table as an additional reference. Our main contributions are: Table-to-Text Generation We briefly review the task of generating natural language descriptions of semi-structured data, which we refer to as tables henceforth BIBREF11 , BIBREF12 . Tables can be expressed as set of records INLINEFORM0 , where each record is a tuple (entity, attribute, value). When all the records are about the same entity, we can truncate the records to (attribute, value) pairs. For example, for the table in Figure FIGREF2 , the records are {(Birth Name, Michael Dahlquist), (Born, December 22 1965), ...}. The task is to generate a text INLINEFORM1 which summarizes the records in a fluent and grammatical manner. For training and evaluation we further assume that we have a reference description INLINEFORM2 available for each table. We let INLINEFORM3 denote an evaluation set of tables, references and texts generated from a model INLINEFORM4 , and INLINEFORM5 , INLINEFORM6 denote the collection of n-grams of order INLINEFORM7 in INLINEFORM8 and INLINEFORM9 , respectively. We use INLINEFORM10 to denote the count of n-gram INLINEFORM11 in INLINEFORM12 , and INLINEFORM13 to denote the minimum of its counts in INLINEFORM14 and INLINEFORM15 . 
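As a concrete reading of this notation, the sketch below computes the count of an n-gram in a token sequence and the minimum of its counts in two sequences; the example texts are illustrative only.

```python
# Sketch of the n-gram count notation defined above: the count of an n-gram in
# a token sequence, and the clipped count, i.e. the minimum of its counts in
# the generated text and the reference.
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

generated = "michael dahlquist was a drummer".split()
reference = "michael dahlquist was a drummer in the band silkworm".split()

c_gen = ngram_counts(generated, 2)
c_ref = ngram_counts(reference, 2)
clipped = {g: min(c, c_ref[g]) for g, c in c_gen.items()}   # min of counts in both texts
```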
Our goal is to assign a score to the model, which correlates highly with human judgments of the quality of that model. PARENT PARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 . Evaluation via Information Extraction BIBREF1 proposed to use an auxiliary model, trained to extract structured records from text, for evaluation. However, the extraction model presented in that work is limited to the closed-domain setting of basketball game tables and summaries. In particular, they assume that each table has exactly the same set of attributes for each entity, and that the entities can be identified in the text via string matching. These assumptions are not valid for the open-domain WikiBio dataset, and hence we train our own extraction model to replicate their evaluation scheme. Our extraction system is a pointer-generator network BIBREF19 , which learns to produce a linearized version of the table from the text. The network learns which attributes need to be populated in the output table, along with their values. It is trained on the training set of WikiBio. At test time we parsed the output strings into a set of (attribute, value) tuples and compare it to the ground truth table. The F-score of this text-to-table system was INLINEFORM0 , which is comparable to other challenging open-domain settings BIBREF20 . More details are included in the Appendix SECREF52 . Given this information extraction system, we consider the following metrics for evaluation, along the lines of BIBREF1 . Content Selection (CS): F-score for the (attribute, value) pairs extracted from the generated text compared to those extracted from the reference. Relation Generation (RG): Precision for the (attribute, value) pairs extracted from the generated text compared to those in the ground truth table. RG-F: Since our task emphasizes the recall of information from the table as well, we consider another variant which computes the F-score of the extracted pairs to those in the table. We omit the content ordering metric, since our extraction system does not align records to the input text. Experiments & Results In this section we compare several automatic evaluation metrics by checking their correlation with the scores assigned by humans to table-to-text models. Specifically, given INLINEFORM0 models INLINEFORM1 , and their outputs on an evaluation set, we show these generated texts to humans to judge their quality, and obtain aggregated human evaluation scores for all the models, INLINEFORM2 (§ SECREF33 ). Next, to evaluate an automatic metric, we compute the scores it assigns to each model, INLINEFORM3 , and check the Pearson correlation between INLINEFORM4 and INLINEFORM5 BIBREF21 . Data & Models Our main experiments are on the WikiBio dataset BIBREF0 , which is automatically constructed and contains many divergent references. In § SECREF47 we also present results on the data released as part of the WebNLG challenge. We developed several models of varying quality for generating text from the tables in WikiBio. This gives us a diverse set of outputs to evaluate the automatic metrics on. Table TABREF32 lists the models along with their hyperparameter settings and their scores from the human evaluation (§ SECREF33 ). Our focus is primarily on neural sequence-to-sequence methods since these are most widely used, but we also include a template-based baseline. All neural models were trained on the WikiBio training set. 
Training details and sample outputs are included in Appendices SECREF56 & SECREF57 . We divide these models into two categories and measure correlation separately for both the categories. The first category, WikiBio-Systems, includes one model each from the four families listed in Table TABREF32 . This category tests whether a metric can be used to compare different model families with a large variation in the quality of their outputs. The second category, WikiBio-Hyperparams, includes 13 different hyperparameter settings of PG-Net BIBREF19 , which was the best performing system overall. 9 of these were obtained by varying the beam size and length normalization penalty of the decoder network BIBREF23 , and the remaining 4 were obtained by re-scoring beams of size 8 with the information extraction model described in § SECREF4 . All the models in this category produce high quality fluent texts, and differ primarily on the quantity and accuracy of the information they express. Here we are testing whether a metric can be used to compare similar systems with a small variation in performance. This is an important use-case as metrics are often used to tune hyperparameters of a model. Human Evaluation We collected human judgments on the quality of the 16 models trained for WikiBio, plus the reference texts. Workers on a crowd-sourcing platform, proficient in English, were shown a table with pairs of generated texts, or a generated text and the reference, and asked to select the one they prefer. Figure FIGREF34 shows the instructions they were given. Paired comparisons have been shown to be superior to rating scales for comparing generated texts BIBREF24 . However, for measuring correlation the comparisons need to be aggregated into real-valued scores, INLINEFORM0 , for each of the INLINEFORM1 models. For this, we use Thurstone's method BIBREF22 , which assigns a score to each model based on how many times it was preferred over an alternative. The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio. Compared Metrics Text only: We compare BLEU BIBREF8 , ROUGE BIBREF9 , METEOR BIBREF18 , CIDEr and CIDEr-D BIBREF25 using their publicly available implementations. Information Extraction based: We compare the CS, RG and RG-F metrics discussed in § SECREF4 . Text & Table: We compare a variant of BLEU, denoted as BLEU-T, where the values from the table are used as additional references. BLEU-T draws inspiration from iBLEU BIBREF26 but instead rewards n-grams which match the table rather than penalizing them. For PARENT, we compare both the word-overlap model (PARENT-W) and the co-occurrence model (PARENT-C) for determining entailment. We also compare versions where a single INLINEFORM0 is tuned on the entire dataset to maximize correlation with human judgments, denoted as PARENT*-W/C. 
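Whichever metric is chosen, the protocol described at the start of this section reduces to comparing one aggregated metric score per model against the aggregated human score; a minimal sketch with made-up numbers is shown below.

```python
# Sketch of the system-level evaluation protocol: one aggregated human score
# and one metric score per model, compared with Pearson correlation. The
# numbers below are placeholders, not values from the paper.
from scipy.stats import pearsonr

human_scores  = [0.12, 0.55, 0.61, 0.74]   # e.g. Thurstone-aggregated preferences per model
metric_scores = [0.10, 0.48, 0.66, 0.71]   # e.g. PARENT (or BLEU, ROUGE, ...) per model

r, p_value = pearsonr(human_scores, metric_scores)
print(f"system-level Pearson r = {r:.3f}")
```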
Correlation Comparison We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 . Table TABREF37 also indicates whether PARENT is significantly better than a baseline metric. BIBREF21 suggest using the William's test for this purpose, but since we are computing correlations between only 4/13 systems at a time, this test has very weak power in our case. Hence, we use the bootstrap samples to obtain a INLINEFORM0 confidence interval of the difference in correlation between PARENT and any other metric and check whether this is above 0 BIBREF27 . Correlations are higher for the systems category than the hyperparams category. The latter is a more difficult setting since very similar models are compared, and hence the variance of the correlations is also high. Commonly used metrics which only rely on the reference (BLEU, ROUGE, METEOR, CIDEr) have only weak correlations with human judgments. In the hyperparams category, these are often negative, implying that tuning models based on these may lead to selecting worse models. BLEU performs the best among these, and adding n-grams from the table as references improves this further (BLEU-T). Among the extractive evaluation metrics, CS, which also only relies on the reference, has poor correlation in the hyperparams category. RG-F, and both variants of the PARENT metric achieve the highest correlation for both settings. There is no significant difference among these for the hyperparams category, but for systems, PARENT-W is significantly better than the other two. While RG-F needs a full information extraction pipeline in its implementation, PARENT-C only relies on co-occurrence counts, and PARENT-W can be used out-of-the-box for any dataset. To our knowledge, this is the first rigorous evaluation of using information extraction for generation evaluation. On this dataset, the word-overlap model showed higher correlation than the co-occurrence model for entailment. In § SECREF47 we will show that for the WebNLG dataset, where more paraphrasing is involved between the table and text, the opposite is true. Lastly, we note that the heuristic for selecting INLINEFORM0 is sufficient to produce high correlations for PARENT, however, if human annotations are available, this can be tuned to produce significantly higher correlations (PARENT*-W/C). Analysis In this section we further analyze the performance of PARENT-W under different conditions, and compare to the other best metrics from Table TABREF37 . To study the correlation as we vary the number of divergent references, we also collected binary labels from workers for whether a reference is entailed by the corresponding table. We define a reference as entailed when it mentions only information which can be inferred from the table. Each table and reference pair was judged by 3 independent workers, and we used the majority vote as the label for that pair. Overall, only INLINEFORM0 of the references were labeled as entailed by the table. 
Fleiss' INLINEFORM1 was INLINEFORM2 , which indicates a fair agreement. We found the workers sometimes disagreed on what information can be reasonably entailed by the table. Figure FIGREF40 shows the correlations as we vary the percent of entailed examples in the evaluation set of WikiBio. Each point is obtained by fixing the desired proportion of entailed examples, and sampling subsets from the full set which satisfy this proportion. PARENT and RG-F remain stable and show a high correlation across the entire range, whereas BLEU and BLEU-T vary a lot. In the hyperparams category, the latter two have the worst correlation when the evaluation set contains only entailed examples, which may seem surprising. However, on closer examination we found that this subset tends to omit a lot of information from the tables. Systems which produce more information than these references are penalized by BLEU, but not in the human evaluation. PARENT overcomes this issue by measuring recall against the table in addition to the reference. We check how different components in the computation of PARENT contribute to its correlation to human judgments. Specifically, we remove the probability INLINEFORM0 of an n-gram INLINEFORM1 being entailed by the table from Eqs. EQREF19 and EQREF23 . The average correlation for PARENT-W drops to INLINEFORM5 in this case. We also try a variant of PARENT with INLINEFORM6 , which removes the contribution of Table Recall (Eq. EQREF22 ). The average correlation is INLINEFORM7 in this case. With these components, the correlation is INLINEFORM8 , showing that they are crucial to the performance of PARENT. BIBREF28 point out that hill-climbing on an automatic metric is meaningless if that metric has a low instance-level correlation to human judgments. In Table TABREF46 we show the average accuracy of the metrics in making the same judgments as humans between pairs of generated texts. Both variants of PARENT are significantly better than the other metrics, however the best accuracy is only INLINEFORM0 for the binary task. This is a challenging task, since there are typically only subtle differences between the texts. Achieving higher instance-level accuracies will require more sophisticated language understanding models for evaluation. WebNLG Dataset To check how PARENT correlates with human judgments when the references are elicited from humans (and less likely to be divergent), we check its correlation with the human ratings provided for the systems competing in the WebNLG challenge BIBREF6 . The task is to generate text describing 1-5 RDF triples (e.g. John E Blaha, birthPlace, San Antonio), and human ratings were collected for the outputs of 9 participating systems on 223 instances. These systems include a mix of pipelined, statistical and neural methods. Each instance has upto 3 reference texts associated with the RDF triples, which we use for evaluation. The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well. While BLEU has the highest correlation for the grammar and fluency aspects, PARENT does best for semantics. 
This suggests that the inclusion of source tables into the evaluation orients the metric more towards measuring the fidelity of the content of the generation. A similar trend is seen comparing BLEU and BLEU-T. As modern neural text generation systems are typically very fluent, measuring their fidelity is of increasing importance. Between the two entailment models, PARENT-C is better due to its higher correlation with the grammaticality and fluency aspects. The INLINEFORM0 parameter in the calculation of PARENT decides whether to compute recall against the table or the reference (Eq. EQREF22 ). Figure FIGREF50 shows the distribution of the values taken by INLINEFORM1 using the heuristic described in § SECREF3 for instances in both WikiBio and WebNLG. For WikiBio, the recall of the references against the table is generally low, and hence the recall of the generated text relies more on the table. For WebNLG, where the references are elicited from humans, this recall is much higher (often INLINEFORM2 ), and hence the recall of the generated text relies more on the reference. Related Work Over the years several studies have evaluated automatic metrics for measuring text generation performance BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 . The only consensus from these studies seems to be that no single metric is suitable across all tasks. A recurring theme is that metrics like BLEU and NIST BIBREF36 are not suitable for judging content quality in NLG. Recently, BIBREF37 did a comprehensive study of several metrics on the outputs of state-of-the-art NLG systems, and found that while they showed acceptable correlation with human judgments at the system level, they failed to show any correlation at the sentence level. Ours is the first study which checks the quality of metrics when table-to-text references are divergent. We show that in this case even system level correlations can be unreliable. Hallucination BIBREF38 , BIBREF39 refers to when an NLG system generates text which mentions extra information than what is present in the source from which it is generated. Divergence can be viewed as hallucination in the reference text itself. PARENT deals with hallucination by discounting n-grams which do not overlap with either the reference or the table. PARENT draws inspiration from iBLEU BIBREF26 , a metric for evaluating paraphrase generation, which compares the generated text to both the source text and the reference. While iBLEU penalizes texts which match the source, here we reward such texts since our task values accuracy of generated text more than the need for paraphrasing the tabular content BIBREF40 . Similar to SARI for text simplification BIBREF41 and Q-BLEU for question generation BIBREF42 , PARENT falls under the category of task-specific metrics. Conclusions We study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio. We also perform the first empirical evaluation of information extraction based metrics BIBREF1 , and find RG-F to be effective. Lastly, we show that PARENT is comparable to the best existing metrics when references are elicited by humans on the WebNLG data. Acknowledgements Bhuwan Dhingra is supported by a fellowship from Siemens, and by grants from Google. 
We thank Maruan Al-Shedivat, Ian Tenney, Tom Kwiatkowski, Michael Collins, Slav Petrov, Jason Baldridge, David Reitter and other members of the Google AI Language team for helpful discussions and suggestions. We thank Sam Wiseman for sharing data for an earlier version of this paper. We also thank the anonymous reviewers for their feedback. Information Extraction System For evaluation via information extraction BIBREF1 we train a model for WikiBio which accepts text as input and generates a table as the output. Tables in WikiBio are open-domain, without any fixed schema for which attributes may be present or absent in an instance. Hence we employ the Pointer-Generator Network (PG-Net) BIBREF19 for this purpose. Specifically, we use a sequence-to-sequence model, whose encoder and decoder are both single-layer bi-directional LSTMs. The decoder is augmented with an attention mechanism over the states of the encoder. Further, it also uses a copy mechanism to optionally copy tokens directly from the source text. We do not use the coverage mechanism of BIBREF19 since that is specific to the task of summarization they study. The decoder is trained to produce a linearized version of the table where the rows and columns are flattened into a sequence, and separate by special tokens. Figure FIGREF53 shows an example. Clearly, since the references are divergent, the model cannot be expected to produce the entire table, and we see some false information being hallucinated after training. Nevertheless, as we show in § SECREF36 , this system can be used for evaluating generated texts. After training, we can parse the output sequence along the special tokens INLINEFORM0 R INLINEFORM1 and INLINEFORM2 C INLINEFORM3 to get a set of (attribute, value) pairs. Table TABREF54 shows the precision, recall and F-score of these extracted pairs against the ground truth tables, where the attributes and values are compared using an exact string match. Hyperparameters After tuning we found the same set of hyperparameters to work well for both the table-to-text PG-Net, and the inverse information extraction PG-Net. The hidden state size of the biLSTMs was set to 200. The input and output vocabularies were set to 50000 most common words in the corpus, with additional special symbols for table attribute names (such as “birth-date”). The embeddings of the tokens in the vocabulary were initialized with Glove BIBREF43 . Learning rate of INLINEFORM0 was used during training, with the Adam optimizer, and a dropout of INLINEFORM1 was also applied to the outputs of the biLSTM. Models were trained till the loss on the dev set stopped dropping. Maximum length of a decoded text was set to 40 tokens, and that of the tables was set to 120 tokens. Various beam sizes and length normalization penalties were applied for the table-to-text system, which are listed in the main paper. For the information extraction system, we found a beam size of 8 and no length penalty to produce the highest F-score on the dev set. Sample Outputs Table TABREF55 shows some sample references and the corresponding predictions from the best performing model, PG-Net for WikiBio.
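The information-extraction evaluation described above comes down to splitting the decoded sequence on the row and column delimiter tokens and scoring the recovered (attribute, value) pairs by exact string match. Below is a minimal Python sketch under the assumption that "<R>" and "<C>" are the delimiter strings (the exact token spellings are not given in the text); the function names are ours, not the authors'.

```python
def parse_linearized_table(seq):
    """Split a decoded token sequence into a set of (attribute, value) pairs."""
    pairs = set()
    for row in seq.split("<R>"):          # assumed row delimiter
        if "<C>" not in row:              # assumed attribute/value delimiter
            continue
        attr, value = row.split("<C>", 1)
        if attr.strip() and value.strip():
            pairs.add((attr.strip(), value.strip()))
    return pairs

def precision_recall_f1(predicted, gold):
    """Exact-string-match precision, recall and F-score over extracted pairs."""
    true_pos = len(predicted & gold)
    p = true_pos / len(predicted) if predicted else 0.0
    r = true_pos / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```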
[ "about 500", "Unanswerable" ]
3,831
qasper_e
en
null
e877b03879e4db0ccc7618230cad71590dadd808fc7a8152
What models are used in the experiment?
Introduction Offensive content has become pervasive in social media and a reason of concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which then can be deleted or set aside for human moderation. In the last few years, there have been several studies published on the application of computational methods to deal with this problem. Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 . Recently, Waseem et. al. ( BIBREF12 ) acknowledged the similarities among prior work and discussed the need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity or towards a generalized group and whether the abusive content is explicit or implicit. Wiegand et al. ( BIBREF11 ) followed this trend as well on German tweets. In their evaluation, they have a task to detect offensive vs not offensive tweets and a second task for distinguishing between the offensive tweets as profanity, insult, or abuse. However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target. Therefore, we expand on these ideas by proposing a a hierarchical three-level annotation model that encompasses: Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows: Related Work Different abusive and offense language identification sub-tasks have been explored in the past few years including aggression identification, bullying detection, hate speech, toxic comments, and offensive language. Aggression identification: The TRAC shared task on Aggression Identification BIBREF2 provided participants with a dataset containing 15,000 annotated Facebook posts and comments in English and Hindi for training and validation. For testing, two different sets, one from Facebook and one from Twitter were provided. Systems were trained to discriminate between three classes: non-aggressive, covertly aggressive, and overtly aggressive. Bullying detection: Several studies have been published on bullying detection. One of them is the one by xu2012learning which apply sentiment analysis to detect bullying in tweets. xu2012learning use topic models to to identify relevant topics in bullying. Another related study is the one by dadvar2013improving which use user-related features such as the frequency of profanity in previous messages to improve bullying detection. Hate speech identification: It is perhaps the most widespread abusive language detection sub-task. There have been several studies published on this sub-task such as kwok2013locate and djuric2015hate who build a binary classifier to distinguish between `clean' comments and comments containing hate speech and profanity. More recently, Davidson et al. 
davidson2017automated presented the hate speech detection dataset containing over 24,000 English tweets labeled as non-offensive, hate speech, and profanity. Offensive language: The GermEval BIBREF11 shared task focused on offensive language identification in German tweets. A dataset of over 8,500 annotated tweets was provided for a coarse-grained binary classification task in which systems were trained to discriminate between offensive and non-offensive tweets, and a second task where the organizers broke down the offensive class into three classes: profanity, insult, and abuse. Toxic comments: The Toxic Comment Classification Challenge was an open competition at Kaggle which provided participants with comments from Wikipedia labeled in six classes: toxic, severe toxic, obscene, threat, insult, identity hate. While each of these sub-tasks tackles a particular type of abuse or offense, they share similar properties, and the hierarchical annotation model proposed in this paper aims to capture this. Considering that, for example, an insult targeted at an individual is commonly known as cyberbullying and that insults targeted at a group are known as hate speech, we posit that OLID's hierarchical annotation model makes it a useful resource for various offensive language identification sub-tasks. Hierarchically Modelling Offensive Content In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and the type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 . Level A: Offensive Language Detection Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets. Not Offensive (NOT): Posts that do not contain offense or profanity; Offensive (OFF): We label a post as offensive if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct. This category includes insults, threats, and posts containing profane language or swear words. Level B: Categorization of Offensive Language Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (UNT) insults and threats. Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer); Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language. Level C: Offensive Language Target Identification Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH). Individual (IND): Posts targeting an individual. It can be a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbullying. Group (GRP): The target of these offensive posts is a group of people considered as a unit due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech. Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue). Data Collection The data included in OLID has been collected from Twitter. 
We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 . Experiments and Evaluation We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. 
It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM. Our models are trained on the training data, and evaluated by predicting the labels for the held-out test set. The distribution is described in Table TABREF15 . We evaluate and compare the models using the macro-averaged F1-score as the label distribution is highly imbalanced. Per-class Precision (P), Recall (R), and F1-score (F1), also with other averaged metrics are also reported. The models are compared against baselines of predicting all labels as the majority or minority classes. Offensive Language Detection The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80. Categorization of Offensive Language In this experiment, the two systems were trained to discriminate between insults and threats (TIN) and untargeted (UNT) offenses, which generally refer to profanity. The results are shown in Table TABREF19 . The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying target and threats (TIN) than untargeted offenses (UNT). Offensive Language Target Identification The results of the offensive target identification experiment are reported in Table TABREF20 . Here the systems were trained to distinguish between three targets: a group (GRP), an individual (IND), or others (OTH). All three models achieved similar results far surpassing the random baselines, with a slight performance edge for the neural models. The performance of all systems for the OTH class is 0. This poor performances can be explained by two main factors. First, unlike the two other classes, OTH is a heterogeneous collection of targets. It includes offensive tweets targeted at organizations, situations, events, etc. making it more challenging for systems to learn discriminative properties of this class. Second, this class contains fewer training instances than the other two. There are only 395 instances in OTH, and 1,075 in GRP, and 2,407 in IND. Conclusion and Future Work This paper presents OLID, a new dataset with annotation of type and target of offensive language. OLID is the official dataset of the shared task SemEval 2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval) BIBREF16 . In OffensEval, each annotation level in OLID is an independent sub-task. The dataset contains 14,100 tweets and is released freely to the research community. To the best of our knowledge, this is the first dataset to contain annotation of type and target of offenses in social media and it opens several new avenues for research in this area. 
We present baseline experiments using SVMs and neural networks to identify the offensive tweets, discriminate between insults, threats, and profanity, and finally to identify the target of the offensive messages. The results show that this is a challenging task. A CNN-based sentence classifier achieved the best results in all three sub-tasks. In future work, we would like to make a cross-corpus comparison of OLID and datasets annotated for similar tasks such as aggression identification BIBREF2 and hate speech detection BIBREF8 . This comparison is, however, far from trivial as the annotation of OLID is different. Acknowledgments The research presented in this paper was partially supported by an ERAS fellowship awarded by the University of Wolverhampton.
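The BiLSTM baseline described in the experiments section (input embedding layer, bidirectional LSTM, average pooling of the input features, concatenation, dense layer, softmax) can be approximated with a short Keras sketch. The layer sizes, the single embedding channel shown here, and the training settings are placeholder assumptions, not values reported for OLID.

```python
from tensorflow.keras import layers, models

def build_bilstm(vocab_size, num_classes, emb_dim=300, max_len=100):
    tokens = layers.Input(shape=(max_len,), dtype="int32")
    emb = layers.Embedding(vocab_size, emb_dim)(tokens)  # one channel; could be FastText-initialized
    lstm = layers.Bidirectional(layers.LSTM(100))(emb)   # final forward/backward states
    avg = layers.GlobalAveragePooling1D()(emb)           # average pooling of the input features
    merged = layers.Concatenate()([lstm, avg])
    hidden = layers.Dense(128, activation="relu")(merged)
    outputs = layers.Dense(num_classes, activation="softmax")(hidden)
    model = models.Model(tokens, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_bilstm(vocab_size=20000, num_classes=2)    # e.g. level A: OFF vs NOT
```

Since the label distribution is highly imbalanced, such a model would be compared using macro-averaged F1 rather than raw accuracy, as in the tables above.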
[ "linear SVM, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN)", "linear SVM, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN)", "linear SVM trained on word unigrams, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN) " ]
2,250
qasper_e
en
null
b7eb32148f033ee9882d1096f1b0a2f75b4e4a05308c7292
Which machine learning models do they explore?
Introduction Named Entity Recognition (NER) is a foremost NLP task to label each atomic element of a sentence into specific categories like "PERSON", "LOCATION", "ORGANIZATION" and othersBIBREF0. There has been extensive NER research on English, German, Dutch and Spanish BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low-resource South Asian languages like HindiBIBREF6, IndonesianBIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu)BIBREF8. However, there has been no study on developing neural NER for the Nepali language. In this paper, we propose a neural-based Nepali NER using a recent state-of-the-art grapheme-level architecture which requires neither hand-crafted features nor data pre-processing. Recent neural architectures like BIBREF1 relax the need to hand-craft features and to use part-of-speech tags to determine the category of an entity. However, these architectures have been studied for languages like English and German, and have not been applied to languages like Nepali, which is a low-resource language, i.e., one with a limited data set to train the model. Traditional methods like Hidden Markov Models (HMM) with rule-based approachesBIBREF9,BIBREF10, and Support Vector Machines (SVM) with manual feature engineeringBIBREF11 have been applied, but they perform poorly compared to neural approaches. Moreover, there has been no research on Nepali NER using neural networks. Therefore, we created a named entity annotated dataset, partly with the help of Dataturk, to train a neural model. The texts used for this dataset were collected from various daily news sources in Nepal around the years 2015-2016. Following are our contributions: We present a novel Named Entity Recognizer (NER) for the Nepali language; to the best of our knowledge we are the first to propose neural-based Nepali NER. As there is no good-quality dataset to train NER, we release a dataset to support future research. We perform an empirical evaluation of our model against state-of-the-art models with a relative improvement of up to 10%. In this paper, we present work similar to ours in Section SECREF2. We describe our approach and dataset statistics in Sections SECREF3 and SECREF4, followed by our experiments, evaluation and discussion in Sections SECREF5, SECREF6, and SECREF7. We conclude with our observations in Section SECREF8. To facilitate further research our code and dataset will be made available at github.com/link-yet-to-be-updated Related Work There have been a handful of studies on the Nepali NER task based on approaches like Support Vector Machines with gazetteer listsBIBREF11 and Hidden Markov Models with gazetteer listsBIBREF9,BIBREF10. BIBREF11 uses an SVM along with features like the first word, word length, digit features and gazetteers (person, organization, location, middle name, verb, designation and others). It uses a one-vs-rest classification model to classify each word into different entity classes. However, it does not take context words into account while training the model. Similarly, BIBREF9 and BIBREF10 use Hidden Markov Models with an n-gram technique for extracting POS-tags. POS-tags with common noun, proper noun or a combination of both are combined together, and then a gazetteer list is used as a look-up table to identify the named entities. 
Researchers have shown that neural networks like CNNBIBREF12, RNNBIBREF13, LSTMBIBREF14 and GRUBIBREF15 can capture the semantic knowledge of a language better with the help of pre-trained embeddings like word2vecBIBREF16, gloveBIBREF17 or fasttextBIBREF18. Similar approaches have been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7 and BengaliBIBREF19. In this paper, we present a neural network architecture for the NER task in the Nepali language, which requires neither manual feature engineering nor any data pre-processing during training. First, we compare BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1 and BiLSTM+CNN+CRFBIBREF2 models with a CNN modelBIBREF0 and the Stanford CRF modelBIBREF21. Second, we compare models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech (POS) one-hot encoding, and word embedding + grapheme-clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from the ILPRL lab. Our extensive study shows that augmenting word embeddings with character- or grapheme-level representations and a POS one-hot encoding vector yields better results compared to using general word embeddings alone. Approach In this section, we describe our approach to building our model. This model is partly inspired by multiple models BIBREF20, BIBREF1 and BIBREF2. Approach ::: Bidirectional LSTM We used a bi-directional LSTM to capture word representations in both the forward and reverse directions of a sentence. Generally, LSTMs take inputs from the left (past) of the sentence and compute the hidden state. However, it has proven beneficialBIBREF23 to use a bi-directional LSTM, where hidden states are also computed from the right (future) of the sentence, and both of these hidden states are concatenated to produce the final output as $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$], where $\overrightarrow{h_t}$ and $\overleftarrow{h_t}$ are the hidden states computed in the forward and backward directions, respectively. Approach ::: Features ::: Word embeddings We have used Word2Vec BIBREF16, GloVe BIBREF17 and FastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from the Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web texts and newspapers. This corpus was mixed with the texts from the dataset before training the CBOW and skip-gram versions of word2vec using the gensim libraryBIBREF24. The trained model consists of vectors for 72782 unique words. Light pre-processing was performed on the corpus before training it. For example, invalid characters or characters other than Devanagari were removed, but punctuation and numbers were not removed. We set the context window at 10, and rare words whose count is below 5 are dropped. These word embeddings were not frozen during the training session because fine-tuning word embeddings helps achieve better performance compared to frozen onesBIBREF20. We have used fastText embeddings in particular because of their sub-word representation ability, which is very useful in a highly inflectional language, as shown in Table TABREF25. We have trained the word embeddings in such a way that the sub-word size remains between 1 and 4. 
We particularly chose this size because in the Nepali language a single letter can also be a word, for example e, t, C, r, l, n, u, and a single character (grapheme) or sub-word can be formed from a mixture of dependent vowel signs with consonant letters, for example C + O + = CO, where three different characters form a single sub-word. The two-dimensional visualization of an example word npAl is shown in FIGREF14. The Principal Component Analysis (PCA) technique was used to generate this visualization, which helps us analyze the nearest neighbor words of a given sample word. 84 and 104 nearest neighbors were observed using word2vec and fastText embeddings respectively on the same corpus. Approach ::: Features ::: Character-level embeddings BIBREF20 and BIBREF2 showed that character-level embeddings, extracted using a CNN, when combined with word embeddings, enhance NER model performance significantly, as they are able to capture the morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where the inputs to the CNN are graphemes. The character-level CNN is built in a similar fashion, except that the inputs are characters. Grapheme- or character-level embeddings of dimension 30 are randomly initialized with real values drawn uniformly from [0,1]. Approach ::: Features ::: Grapheme-level embeddings A grapheme is the atomic meaningful unit in the writing system of any language. Since the Nepali language is highly morphologically inflectional, we compared the grapheme-level representation with the character-level representation to evaluate its effect. For example, in character-level embedding, each character of the word npAl, i.e., n + + p + A + l, has its own embedding. At the grapheme level, however, the word npAl is clustered into graphemes, resulting in n + pA + l, where each grapheme has its own embedding. This grapheme-level embedding yields scores on par with character-level embedding in highly inflectional languages like Nepali, because graphemes also capture syntactic information similarly to characters. We created grapheme clusters using the uniseg package, which is helpful for Unicode text segmentation. Approach ::: Features ::: Part-of-speech (POS) one hot encoding We created one-hot encoded vectors of POS tags and then concatenated them with pre-trained word embeddings before passing them to the BiLSTM network. A sample of the data is shown in figure FIGREF13. Dataset Statistics ::: OurNepali dataset Since there was no publicly available standard Nepali NER dataset and we did not receive any dataset from previous researchers, we had to create our own dataset. This dataset contains sentences collected from daily newspapers of the years 2015-2016. It has three major classes: Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset; for example, all punctuation marks and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25. Since this dataset was not originally lemmatized, we lemmatized only the post-positions like Ek, kO, l, mA, m, my, jF, sg, aEG, which are just a few examples among the 299 post-positions in the Nepali language. We obtained these post-positions from sanjaalcorps and added a few more to match our dataset. We will be releasing this list in our github repository. We found out that lemmatizing the post-positions boosted the F1 score by almost 10%. 
In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset. The dataset released in our github repository contains each word in newline with space separated POS-tags and Entity-tags. The sentences are separated by empty newline. A sample sentence from the dataset is presented in table FIGREF13. Dataset Statistics ::: ILPRL dataset After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23. Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments. The dataset is divided into three parts with 64%, 16% and 20% of the total dataset into training set, development set and test set respectively. Experiments In this section, we present the details about training our neural network. The neural network architecture are implemented using PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. The training and evaluation was done on sentence-level. The RNN variants are initialized randomly from $(-\sqrt{k},\sqrt{k})$ where $k=\frac{1}{hidden\_size}$. First we loaded our dataset and built vocabulary using torchtext library. This eased our process of data loading using its SequenceTaggingDataset class. We trained our model with shuffled training set using Adam optimizer with hyper-parameters mentioned in table TABREF30. All our models were trained on single layer of LSTM network. We found out Adam was giving better performance and faster convergence compared to Stochastic Gradient Descent (SGD). We chose those hyper-parameters after many ablation studies. The dropout of 0.5 is applied after LSTM layer. For CNN, we used 30 different filters of sizes 3, 4 and 5. The embeddings of each character or grapheme involved in a given word, were passed through the pipeline of Convolution, Rectified Linear Unit and Max-Pooling. The resulting vectors were concatenated and applied dropout of 0.5 before passing into linear layer to obtain the embedding size of 30 for the given word. This resulting embedding is concatenated with word embeddings, which is again concatenated with one-hot POS vector. Experiments ::: Tagging Scheme Currently, for our experiments we trained our model on IO (Inside, Outside) format for both the dataset, hence the dataset does not contain any B-type annotation unlike in BIO (Beginning, Inside, Outside) scheme. Experiments ::: Early Stopping We used simple early stopping technique where if the validation loss does not decrease after 10 epochs, the training was stopped, else the training will run upto 100 epochs. In our experience, training usually stops around 30-50 epochs. 
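The early-stopping rule above (stop if the validation loss has not improved for 10 epochs, otherwise run for at most 100 epochs) can be sketched as a short PyTorch-style loop. train_one_epoch and evaluate are placeholder helpers standing in for the usual training and validation passes; they are not functions from the authors' code.

```python
best_val, patience, wait = float("inf"), 10, 0
best_state = None
for epoch in range(100):
    train_one_epoch(model, train_loader, optimizer)        # placeholder helper
    val_loss = evaluate(model, dev_loader)                 # placeholder helper
    if val_loss < best_val:
        best_val, wait = val_loss, 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
    else:
        wait += 1
        if wait >= patience:                               # no improvement for 10 epochs
            break
if best_state is not None:
    model.load_state_dict(best_state)                      # restore the best checkpoint
```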
Experiments ::: Hyper-parameters Tuning We ran our experiments looking for the best hyper-parameters by varying the learning rate over [0.1, 0.01, 0.001, 0.0001], the weight decay over [$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $10^{-7}$], the batch size over [1, 2, 4, 8, 16, 32, 64, 128], and the hidden size over [8, 16, 32, 64, 128, 256, 512, 1024]. Table TABREF30 shows all other hyper-parameters used in our experiments for both datasets. Experiments ::: Effect of Dropout Figure FIGREF31 shows how we ended up choosing 0.5 as the dropout rate. When the dropout layer was not used, the F1 score is at its lowest. As we slowly increase the dropout rate, the F1 score also gradually increases; however, after a dropout rate of 0.5, the F1 score starts falling. Therefore, we have chosen 0.5 as the dropout rate for all other experiments. Evaluation In this section, we present the details regarding evaluation and comparison of our models with other baselines. Table TABREF25 shows the study of various embeddings and their comparison with each other on the OurNepali dataset. Here, the raw dataset refers to the dataset in which post-positions are not lemmatized. We can observe that pre-trained embeddings significantly improve the score compared to randomly initialized embeddings. We can deduce that skip-gram models perform better than CBOW models for both word2vec and fastText. Here, fastText_Pretrained represents the embeddings readily available on the fastText website, while the other embeddings are trained on the Nepali National Corpus as mentioned in sub-section SECREF11. From table TABREF25, we can clearly observe that the model using fastText_Skip Gram embeddings outperforms all other models. Table TABREF35 shows the model architecture comparison between all the models we experimented with. The features used for the Stanford CRF classifier are words, letter n-grams of up to length 6, the previous word and the next word. This model is trained until the current function value is less than $1\mathrm {e}{-2}$. The hyper-parameters of the neural network experiments are set as shown in table TABREF30. Since the character-level and grapheme-level embeddings are randomly initialized, their scores are close. All models are evaluated using the CoNLL-2003 evaluation scriptBIBREF25 to calculate entity-wise precision, recall and F1 score. Discussion In this paper we show that we can exploit the power of neural networks to train models that perform downstream NLP tasks like Named Entity Recognition even in the Nepali language. We showed that the word vectors learned through the fastText skip-gram model perform better than other word embeddings because of their capability to represent sub-words, which is particularly important for capturing the morphological structure of words and sentences in a highly inflectional language like Nepali. This concept can come in handy for other Devanagari languages as well because their written scripts have a similar syntactical structure. We also found out that stemming post-positions can help a lot in improving model performance because of the inflectional characteristics of the Nepali language. When we separate out these inflections or morphemes, we minimize the variations of the same word, which gives the root word a stronger word vector representation compared to its inflected versions. We can clearly infer from tables TABREF23, TABREF24, and TABREF35 that we need more data to get better results, because the OurNepali dataset is almost ten times bigger than the ILPRL dataset in terms of entities. 
Conclusion and Future work In this paper, we proposed a novel NER for the Nepali language, achieved a relative improvement of up to 10%, and studied different factors affecting the performance of NER for Nepali. We also present a neural architecture, BiLSTM+CNN(grapheme-level), which performs on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help the Nepali language but also other languages falling under the umbrella of Devanagari languages. Our models BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperform all other models experimented with on the OurNepali and ILPRL datasets, respectively. Since this is the first named entity recognition research for the Nepali language using neural networks, there is much room for improvement. We believe initializing the grapheme-level embedding with fastText embeddings, rather than randomly, might help boost performance. In the future, we plan to apply other recent techniques like BERT, ELMo and FLAIR to study their effect on a low-resource language like Nepali. We also plan to improve the model using cross-lingual or multi-lingual parameter-sharing techniques by jointly training with other Devanagari languages like Hindi and Bengali. Finally, we would like to contribute our dataset to the Nepali NLP community to move forward the research in the language understanding domain. We believe there should be a special committee to create and maintain such datasets for Nepali NLP and to organize various competitions, which would elevate NLP research in Nepal. Some of the future works are listed below: proper initialization of grapheme-level embeddings from fastText embeddings; applying a robust POS-tagger to the Nepali dataset; lemmatizing the OurNepali dataset with a robust and efficient lemmatizer; improving the Nepali language score with cross-lingual learning techniques; and creating more datasets using the Wikipedia/Wikidata framework. Acknowledgments The authors of this paper would like to express sincere thanks to Bal Krishna Bal, Professor at Kathmandu University, for providing us with the POS-tagged Nepali NER data.
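As an illustration of the pipeline described in the approach and experiments sections, the sketch below shows grapheme clustering with the uniseg package and the grapheme-level CNN word encoder (30 filters of widths 3, 4 and 5, ReLU, max-pooling over time, dropout 0.5 and a linear projection to a 30-dimensional word vector). This is a reconstruction from the prose, not the authors' released code; the class and helper names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from uniseg.graphemecluster import grapheme_clusters      # Unicode grapheme segmentation

def to_graphemes(word):
    """Split a Devanagari word into grapheme clusters (e.g. consonant + vowel sign = one unit)."""
    return list(grapheme_clusters(word))

class GraphemeCNN(nn.Module):
    """Grapheme-level CNN word encoder: 30 filters of widths 3/4/5, ReLU,
    max-pooling over time, dropout 0.5, linear projection to a 30-dim vector."""
    def __init__(self, num_graphemes, emb_dim=30, n_filters=30, widths=(3, 4, 5), out_dim=30):
        super().__init__()
        self.emb = nn.Embedding(num_graphemes, emb_dim)    # randomly initialized
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, w, padding=w - 1) for w in widths)
        self.drop = nn.Dropout(0.5)
        self.proj = nn.Linear(n_filters * len(widths), out_dim)

    def forward(self, grapheme_ids):                       # (batch, max_graphemes)
        x = self.emb(grapheme_ids).transpose(1, 2)         # (batch, emb_dim, time)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.proj(self.drop(torch.cat(pooled, dim=1)))  # (batch, out_dim)
```

The resulting per-word vector is then concatenated with the pre-trained word embedding and the POS one-hot vector before being fed to the BiLSTM layer, as described above.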
[ "BiLSTM, BiLSTM-CNN, BiLSTM-CRF, BiLSTM-CNN-CRF", "BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2, CNN modelBIBREF0 and Stanford CRF modelBIBREF21" ]
2,838
qasper_e
en
null
11697ce7f07d6badb6224d68ed40229693b66941ed3a156d
How is the vocabulary of word-like or phoneme-like units automatically discovered?
Introduction Topic identification (topic ID) on speech aims to identify the topic(s) for given speech recordings, referred to as spoken documents, where the topics are a predefined set of classes or labels. This task is typically formulated as a three-step process. First, speech is tokenized into words or phones by automatic speech recognition (ASR) systems BIBREF0 , or by limited-vocabulary keyword spotting BIBREF1 . Second, standard text-based processing techniques are applied to the resulting tokenizations, and produce a vector representation for each spoken document, typically a bag-of-words multinomial representation, or a more compact vector given by probabilistic topic models BIBREF2 , BIBREF3 . Finally, topic ID is performed on the spoken document representations by supervised training of classifiers, such as Bayesian classifiers and support vector machines (SVMs). However, in the first step, training the ASR system required for tokenization itself requires transcribed speech and pronunciations. In this paper, we focus on a difficult and realistic scenario where the speech corpus of a test language is annotated only with a minimal number of topic labels, i.e., no manual transcriptions or dictionaries for building an ASR system are available. We aim to exploit approaches that enable topic ID on speech without any knowledge of that language other than the topic annotations. In this scenario, while previous work demonstrates that the cross-lingual phoneme recognizers can produce reasonable speech tokenizations BIBREF4 , BIBREF5 , the performance is highly dependent on the language and environmental condition (channel, noise, etc.) mismatch between the training and test data. Therefore, we focus on unsupervised approaches that operate directly on the speech of interest. Raw acoustic feature-based unsupervised term discovery (UTD) is one such approach that aims to identify and cluster repeating word-like units across speech based around segmental dynamic time warping (DTW) BIBREF6 , BIBREF7 . BIBREF8 shows that using the word-like units from UTD for spoken document classification can work well; however, the results in BIBREF8 are limited since the acoustic features on which UTD is performed are produced by acoustic models trained from the transcribed speech of its evaluation corpus. In this paper, we investigate UTD-based topic ID performance when UTD operates on language-independent speech representations extracted from multilingual bottleneck networks trained on languages other than the test language BIBREF9 . Another alternative to producing speech tokenizations without language dependency is the model-based approach, i.e., unsupervised learning of hidden Markov model (HMM) based phoneme-like units from untranscribed speech. We exploit the Variational Bayesian inference based acoustic unit discovery (AUD) framework in BIBREF10 that allows parallelized large-scale training. In topic ID tasks, such AUD-based systems have been shown to outperform other systems based on cross-lingual phoneme recognizers BIBREF5 , and this paper aims to further investigate how the performance compares among UTD, AUD and ASR based systems. Moreover, after the speech is tokenized, these works BIBREF0 , BIBREF1 , BIBREF4 , BIBREF5 , BIBREF8 , BIBREF9 are limited to using bag-of-words features as spoken document representations. 
While UTD only identifies relatively long (0.5 – 1 sec) repeated terms, AUD/ASR enables full-coverage segmentation of continuous speech into a sequence of units/words, and such a resulting temporal sequence enables another feature learning architecture based on convolutional neural networks (CNNs) BIBREF11 ; instead of treating the sequential tokens as a bag of acoustic units or words, the whole token sequence is encoded as concatenated continuous vectors, and followed by convolution and temporal pooling operations that capture the local and global dependencies. Such continuous space feature extraction frameworks have been used in various language processing tasks like spoken language understanding BIBREF12 , BIBREF13 and text classification BIBREF14 , BIBREF15 . However, three questions are worth investigating in our AUD-based setting: (i) if such a CNN-based framework can perform as well on noisy automatically discovered phoneme-like units as on orthographic words/characters, (ii) if pre-trained vectors of phoneme-like units from word2vec BIBREF16 provide superior performance to random initialization as evidenced by the word-based tasks, and (iii) if CNNs are still competitive in low-resource settings of hundreds to two-thousand training exemplars, rather than the large/medium sized datasets as in previous work BIBREF14 , BIBREF15 . Finally, incorporating the different tokenization and feature representation approaches noted above, we perform comprehensive topic ID evaluations on both single-label and multi-label spoken document classification tasks. Unsupervised tokenizations of speech Unsupervised term discovery (UTD) UTD aims to automatically identify and cluster repeated terms (e.g. words or phrases) from speech. To circumvent the exhaustive DTW-based search limited by INLINEFORM0 time BIBREF6 , we exploit the scalable UTD framework in the Zero Resource Toolkit (ZRTools) BIBREF7 , which permits search in INLINEFORM1 time. We briefly describe the UTD procedures in ZRTools by four steps below, and full details can be found in BIBREF7 . Construct the sparse approximate acoustic similarity matrices between pairs of speech utterances. Identify word repetitions via fast diagonal line search and segmental DTW. The resulting matches are used to construct an acoustic similarity graph, where nodes represent the matching acoustic segments and edges reflect DTW distances. Threshold the graph edges, and each connected component of the graph is a cluster of acoustic segments, which produces a corresponding term (word/phrase) category. Finally, the cluster of each discovered term category consists of a list of term occurrences. Note that in the third step above, the weight on each graph edge can be exact DTW-based similarity, or other similarity based on heuristics more than DTW distance. For example, we investigate an implementation in ZRTools, where a separate logistic regression model is used to rescore the similarity between identified matches by determining how likely the matching pair is the same underlying word/phrase and is not a filled pause (e.g. “um-hum” and “yeah uh-huh” in English). Filled pauses tend to be acoustically stationary with more phone repeats and thus would match throughout the acoustic similarity matrix, whereas a contentful word (without too many phone repeats) tend to concentrate around the main diagonal; thus, the features in logistic regression contain the numbers of matrix elements in diagonal bands in progressive steps away from the main diagonal. 
Feature weights are learned using a portion of transcribed speech with reference transcripts, and the resulting model can be used for language-independent rescoring. Acoustic unit discovery (AUD) We exploit the nonparametric Bayesian AUD framework in BIBREF10 based on variational inference, rather than the maximum likelihood training in BIBREF4 which may oversimplify the parameter estimations, nor the Gibbs Sampling training in BIBREF17 which is not amenable to large scale applications. Specifically, a phone-loop model is formulated where each phoneme-like unit is modeled as an HMM with a Gaussian mixture model of output densities (GMM-HMM). Under the Dirichlet process framework, we consider the phone loop as an infinite mixture of GMM-HMMs, and the mixture weights are based on the stick-breaking construction of Dirichlet process. The infinite number of units in the mixture is truncated in practice, giving zero mixture weight to any unit beyond some large count. We treat such mixture of GMM-HMMs as a single unified HMM and thus the segmentation of the data is performed using standard forward-backward algorithm. Training is fully unsupervised and parallelized; after a fixed number of training iterations, we use Viterbi decoding algorithm to obtain acoustic unit tokenizations of the data. Bag-of-words representation After we obtain the tokenizations of speech by either UTD or AUD, each spoken document is represented by a vector of unigram occurrence counts over discovered terms, or a vector of INLINEFORM0 -gram counts over acoustic units, respectively. Each feature vector can be further scaled by inverse document frequency (IDF), producing a TF-IDF feature. Convolutional neural network-based representation AUD enables full-coverage tokenization of continuous speech into a sequence of acoustic units, which we can exploit in a CNN-based framework to learn a vector representation for each spoken document. As shown in Figure FIGREF7 , in an acoustic unit sequence a of length INLINEFORM0 , each unit INLINEFORM1 , INLINEFORM2 , is encoded as a fixed dimensional continuous vector, and the whole sequence a is represented as a concatenated vector x. A shared convolutional feature transform INLINEFORM3 spans a fixed-sized INLINEFORM4 -gram window, INLINEFORM5 , and slides over the whole sequence. Then the hidden feature layer INLINEFORM6 with nonlinearities consists of each feature vector INLINEFORM7 extracted from the shared convolutional window centered at each acoustic unit position INLINEFORM8 . Max-pooling is performed on top of each INLINEFORM9 , INLINEFORM10 , to obtain a fixed-dimensional vector representation for the whole sequence a, i.e., a vector representation of the whole spoken document, followed by another hidden layer INLINEFORM11 and a final output layer. Note that this framework needs supervision for training; e.g., the output layer can be a softmax function for single-label classification, and the whole model is trained with categorical cross-entropy loss. Also, the vector representation of each unique acoustic unit can be randomly initialized, or pre-trained from other tasks. Specifically, we apply the skip-gram model of word2vec BIBREF18 to pre-train one embedding vector for each acoustic unit, based on the hierarchical softmax with Huffman codes. Single-label classification For the bag-of-words representation, we use a stochastic gradient descent (SGD) based linear SVM BIBREF19 , BIBREF20 with hinge loss and INLINEFORM0 / INLINEFORM1 norm regularization. 
For the CNN-based framework, we use a softmax function in the output layer for classification as described in Section SECREF9 . For our single-label topic classification experiments, we use the Switchboard Telephone Speech Corpus BIBREF21 , a collection of two-sided telephone conversations. We use the same development (dev) and evaluation (eval) data sets as in BIBREF8 , BIBREF9 . Each whole conversation has two sides and one single topic, and topic ID is performed on each individual-side speech (i.e., each side is seen as one single spoken document). In the 35.7 hour dev data, there are 360 conversation sides evenly distributed across six different topics (recycling, capital punishment, drug testing, family finance, job benefits, car buying), i.e., each topic has equal number of 60 sides. In the 61.6 hour eval data, there are another different six topics (family life, news media, public education, exercise/fitness, pets, taxes) evenly distributed across 600 conversation sides. Algorithm design choices are explored through experiments on dev data. We use manual segmentations provided by the Switchboard corpus to produce utterances with speech activity, and UTD and AUD are operating only on those utterances. For UTD, we use the ZRTools BIBREF7 implementation with the default parameters except that, we use cosine similarity threshold INLINEFORM0 , and vary the diagonal median filter duration INLINEFORM1 over INLINEFORM2 ; we try both the exact DTW-based similarity and the rescored similarity as described in Section SECREF1 , and tune the similarity threshold (used to partition the graph edges) over INLINEFORM3 . For AUD, the unsupervised training is performed only on the dev data (10 iterations); after training, we use the learned models to decode both dev and eval data set, and obtain the acoustic unit tokenizations. We use truncation level 200, which implies maximum 200 different acoustic units can be learned from the corpus. For each acoustic unit, we use a 3-state HMM with 2 Gaussians per state. For the stick-breaking construction of Dirichlet process, we vary the concentration parameter INLINEFORM4 over INLINEFORM5 , and other hyperparameters are the same as BIBREF10 . The acoustic features on which UTD and AUD operate are extracted using the same multilingual bottleneck (BN) network as described in BIBREF9 with Kaldi toolkit BIBREF22 . We conduct the multilingual BN training with 10 language collections (Assamese, Bengali, Cantonese, Haitian, Lao, Pashto, Tamil, Tagalog, Vietnamese and Zulu) – 10 hours of transcribed speech per language. Complete specifications can be found in BIBREF9 . For SVM-based classification, we use the bag of discovered term unigrams, or bag of acoustic unit trigrams. On dev data, we try using the features of raw counts or scaled by IDF, SVM regularization tuned over INLINEFORM0 / INLINEFORM1 norm, regularization constant tuned over INLINEFORM2 , and SGD epochs tuned over INLINEFORM3 . We further normalize each feature to INLINEFORM4 norm unit length. Each experiment is a run of 10-fold cross validation (CV) on the 360 conversation sides of dev data, or on the 600 sides of eval data, respectively. Note that our data size here is relatively small (only 360 or 600) and the SGD training may give high variance in the performance BIBREF23 . 
Therefore, to report classification accuracy for each configuration (when varying features or models), we repeat each CV experiment 5 times, where each experiment again is a run of 10-fold CV; then for each configuration, the mean and standard deviation of 5 experiments is reported. For CNN-based classification, we use the same strategy to report classification accuracy, i.e., repeating experiments 5 times (where each time is a 10-fold CV) for each CNN configuration. Note that the respective 10 folds of both dev and eval data sets are fixed the same for all the SVM and CNN experiments. Additionally, for each 10-fold CV experiment, instead of training on 9 folds and testing on the remaining 1 fold as in SVM, we use 8 folds for CNN training, leave another 1 fold out as validation data; after training each CNN model for up to 100 epochs, the model with the best accuracy on the validation data is used for evaluation on the test set. The acoustic unit sequence (as CNN inputs) are zero-padded to the longest length in each dataset. We implemented the CNNs in Keras BIBREF24 with Theano BIBREF25 backend. CNN architectures are determined through experiments on dev data. For SGD training we use the Adadelta optimizer BIBREF26 and mini-batch size 18. The INLINEFORM0 -gram window size of each convolutional feature transform INLINEFORM1 is 7. The size of each hidden feature vector INLINEFORM2 (extracted from the transform INLINEFORM3 ) is 1024, with rectified linear unit (ReLU) nonlinearities. Thus, after max-pooling over time, we have a 1024-dimensional vector again, which then goes through another hidden layer INLINEFORM4 (also set as 1024-dimensional with ReLU) and finally into a softmax. Dropout BIBREF27 rate 0.2 is used at each layer. When we initialize the vector representation of each acoustic unit with a set of pre-trained vectors (instead of random initializations), we apply the skip-gram model of word2vec BIBREF18 to the acoustic unit tokenizations of each data set. We use the gensim implementation BIBREF28 , which includes a vector space of embedding dimension 50 (tuned over INLINEFORM0 ), a skip-gram window of size 5, and SGD over 20 epochs. Table TABREF15 shows the topic ID results on Switchboard. For UTD-based classifications, we find that the default rescoring in ZRTools BIBREF7 which is designed to filter out the filled pauses produces comparable performance to the raw DTW similarity scores, but the rescoring can result in much faster connected-component clustering (Section SECREF1 ). Note that this rescoring model is estimated using a portion of transcribed Switchboard, but it is still a legitimate language-independent UTD approach while operating on languages other than English. While a diagonal median filter duration INLINEFORM0 of INLINEFORM1 or INLINEFORM2 gives similar results, INLINEFORM3 produces longer but fewer terms, giving more sparse feature representations. Therefore, we proceed with rescoring and INLINEFORM4 in the following UTD experiments (Section SECREF16 ). For AUD-based classifications, CNN without word2vec pre-training usually gives comparable results with SVM; however, using word2vec pre-training, CNN substantially outperforms the competing SVM in all cases. Also as the concentration parameter INLINEFORM0 in AUD increases from INLINEFORM1 to INLINEFORM2 (yielding less concentrated distributions), we have more unique acoustic units in the tokenizations of both data sets, from 184 to 199, and INLINEFORM3 usually produces better results than INLINEFORM4 . 
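The CNN configuration described above translates into a compact Keras model: 50-dimensional acoustic-unit embeddings (optionally initialized from the word2vec vectors), a convolutional transform over a 7-unit window producing 1024 ReLU feature maps, max-pooling over time, a 1024-unit hidden layer, dropout 0.2 and a softmax output trained with Adadelta. The sketch below is an approximation of that setup rather than the exact implementation.

```python
from tensorflow.keras import layers, models, optimizers

def build_unit_cnn(num_units, num_topics, max_len, unit_vectors=None):
    seq = layers.Input(shape=(max_len,), dtype="int32")
    emb = layers.Embedding(num_units, 50,
                           weights=[unit_vectors] if unit_vectors is not None else None)(seq)
    h = layers.Conv1D(1024, 7, activation="relu", padding="same")(emb)  # 7-unit window
    h = layers.Dropout(0.2)(h)
    h = layers.GlobalMaxPooling1D()(h)                                  # max-pooling over time
    h = layers.Dropout(0.2)(layers.Dense(1024, activation="relu")(h))
    out = layers.Dense(num_topics, activation="softmax")(h)
    model = models.Model(seq, out)
    model.compile(optimizer=optimizers.Adadelta(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```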
Multi-label classification In the setting where each spoken document can be associated with multiple topics/labels, we proceed to perform a multi-label classification task. The baseline approach is the binary relevance method, which independently trains one binary classifier for each label, and the spoken document is evaluated by each classifier to determine if the respective label applies to it. Specifically, we use a set of SVMs (Section SECREF10 ), one for each label, on the bag-of-words features. To adapt the CNN-based framework for multi-label classification, we replace the softmax in the output layer with a set of sigmoid output nodes, one for each label, as shown in Figure FIGREF7 . Since a sigmoid naturally provides output values between 0 and 1, we train the neural network (NN) to minimize the binary cross entropy loss defined as INLINEFORM0 , where INLINEFORM1 denotes the NN parameters, x is the feature vector of acoustic unit sequence, y is the target vector of labels, INLINEFORM2 and INLINEFORM3 are the output and the target for label INLINEFORM4 , and the number of unique labels is INLINEFORM5 . We further evaluate our topic ID performance on the speech corpora of three languages released by the DARPA LORELEI (Low Resource Languages for Emergent Incidents) Program. For each language there are a number of audio speech files, and each speech file is cut into segments of various lengths (up to 120 seconds). Each speech segment is seen as either in-domain or out-of-domain. In-domain data is defined as any speech segment relating to an incident or incidents, and in-domain data will fall into a set of domain-specific categories; these categories are known as situation types, or in-domain topics. There are 11 situation types: “Civil Unrest or Wide-spread Crime”, “Elections and Politics”, “Evacuation”, “Food Supply”, “Urgent Rescue”, “Utilities, Energy, or Sanitation”, “Infrastructure”, “Medical Assistance”, “Shelter”, “Terrorism or other Extreme Violence”, and “Water Supply”. We consider “Out-of-domain” as the 12th topic label, so each speech segment either corresponds to one or multiple in-domain topics, or is “Out-of-domain”. We use the average precision (AP, equal to the area under the precision-recall curve) as the evaluation metric, and report both the AP across the overall 12 labels, and the AP across 11 situation types, as shown in Table TABREF18 . For each configuration, only a single 10-fold CV result is reported, since we observe less variance in results here than in Switchboard. We have 16.5 hours in-domain data and 8.5 hours out-of-domain data for Turkish, 2.9 and 13.2 hours for Uzbek, and 7.7 and 7.2 hours for Mandarin. We use the same CNN architecture as on Switchboard but make the changes as described in Section SECREF11 . Also we use mini-batch size 30 and fix the training epochs as 100. All CNNs use word2vec pre-training. Additionally, we also implement another two separate topic ID baselines using the decoded word outputs from two supervised ASR systems, trained from 80 hours transcribed Babel Turkish speech BIBREF29 and about 170 hours transcribed HKUST Mandarin telephone speech (LDC2005T32 and LDC2005S15), respectively. As shown in Table TABREF18 , UTD-based SVMs are more competitive than AUD-based SVMs on the smaller corpora, i.e., Uzbek and Mandarin, while being less competitive on the larger corpus, Turkish. 
We further investigate this behavior on each individual language by varying the amount of training data; we split the data into 10 folds, and perform 10-fold CV 9 times, varying the number of folds for training from 1 to 9. As illustrated in Figure FIGREF19 for Turkish, as we use more folds for training, AUD-based system starts to be more competitive than UTD. Supervised ASR-based systems still give the best results in various cases, while UTD and AUD based systems give comparable performance. Note that CNN-based systems outperform SVMs on Turkish and Uzbek while losing on the smaller sized Mandarin, indicating more topic-labeled data is needed to enable competitive CNNs. This also indicates why CNNs on LORELEI corpora do not produce as large a gain over SVMs as on the larger sized Switchboard, since each 15-25 hour LORELEI corpus with 12 topic labels is a relatively small amount of data compared to the 35.7/61.6 hour Switchboard corpus with 6 labels. Concluding remarks We have demonstrated that both UTD and AUD are viable technologies for producing effective tokenizations of speech that enable topic ID performance comparable to using standard ASR systems, while effectively removing the dependency on transcribed speech required by the ASR alternative. We find that when training data is severely limited the UTD-based classification is superior to AUD-based classification. As the amount of training data increases, performance improves across the board. Finally, with sufficient training data AUD-based CNNs with word2vec pre-training outperform AUD-based SVMs.
[ "Zero Resource Toolkit (ZRTools) BIBREF7" ]
3,381
qasper_e
en
null
c483210fc867c15f41b696d521b983cdcf41c2062e6ac3fb
What BERT model do they test?
Introduction The ability of semantic reasoning is essential for advanced natural language understanding (NLU) systems. Many NLU tasks that take sentence pairs as input, such as natural language inference (NLI) and machine reading comprehension (MRC), heavily rely on the ability of sophisticated semantic reasoning. For instance, the NLI task aims to determine whether the hypothesis sentence (e.g., a woman is sleeping) can be inferred from the premise sentence (e.g., a woman is talking on the phone). This requires the model to read and understand sentence pairs to make the specific semantic inference. Bidirectional Encoder Representations from Transformer (BERT) BIBREF1 has shown strong ability in semantic reasoning. It was recently proposed and obtained impressive results on many tasks, ranging from text classification, natural language inference, and machine reading comprehension. BERT achieves this by employing two objectives in the pre-training, i.e., the masked language modeling (Masked LM) and the next sentence prediction (NSP). Intuitively, the Masked LM task concerns word-level knowledge, and the NSP task captures the global document-level information. The goal of NSP is to identify whether an input sentence is next to another input sentence. From the ablation study BIBREF1, the NSP task is quite useful for the downstream NLI and MRC tasks (e.g., +3.5% absolute gain on the Question NLI (QNLI) BIBREF2 task). Despite its usefulness, we suggest that BERT has not made full use of the document-level knowledge. The sentences in the negative samples used in NSP are randomly drawn from other documents. Therefore, to discriminate against these sentences, BERT is prone to aggregating the shallow semantic, e.g., topic, neglecting context clues useful for detailed reasoning. In other words, the canonical NSP task would encourage the model to recognize the correlation between sentences, rather than obtaining the ability of semantic entailment. This setting weakens the BERT model from learning specific semantic for inference. Another issue that renders NSP less effective is that BERT is order-sensitive. Performance degradation was observed on typical NLI tasks when the order of two input sentences are reversed during the BERT fine-tuning phase. It is reasonable as the NSP task can be roughly analogy to the NLI task when the input comes as (premise, hypothesis), considering the causal order among sentences. However, this identity between NSP and NLI is compromised when the sentences are swapped. Based on these considerations, we propose a simple yet effective method, i.e., introducing a IsPrev category to the classification task, which is a symmetric label of IsNext of NSP. The input of samples with IsPrev is the reverse of those with IsNext label. The advantages of using this previous sentence prediction (PSP) are three folds. (1) Learning the contrast between NSP and PSP forces the model to extract more detailed semantic, thereby the model is more capable of discriminating the correlation and entailment. (2) NSP and PSP are symmetric. This symmetric regularization alleviates the influence of the order of the input pair. (3) Empirical results indicate that our method is beneficial for all the semantic reasoning tasks that take sentence pair as input. In addition, to further incorporating the document-level knowledge, NSP and PSP are extended with non-successive sentences, where the label smoothing technique is adopted. The proposed method yields a considerable improvement in our experiments. 
We evaluate the ability of semantic reasoning on standard NLI and MRC benchmarks, including the challenging HANS dataset BIBREF0. Analytical work on the HANS dataset provides a more comprehensible perspective towards the proposed method. Furthermore, the results on the Chinese benchmarks are provided to demonstrate its generality. In summary, this work makes the following contributions: The supervision signal from the original NSP task is weak for semantic inference. Therefore, a novel method is proposed to remedy the asymmetric issue and enhance the reasoning ability. Both empirical and analytical evaluations are provided on the NLI and MRC datasets, which verifies the effectiveness of using more document-level knowledge. Related Work ::: Pair-wise semantic reasoning Many NLU tasks seek to model the relationship between two sentences. Semantic reasoning is performed on the sentence pair for the task-specific inference. Pair-wise semantic reasoning tasks have drawn a lot of attention from the NLP community as they largely require the comprehension ability of the learning systems. Recently, the significant improvement on these benchmarks comes from the pre-training models, e.g., BERT, StructBERT BIBREF3, ERNIE BIBREF4, BIBREF5, RoBERTa BIBREF6 and XLNet BIBREF7. These models learn from unsupervised/self-supervised objectives and perform excellently in the downstream tasks. Among these models, BERT adopts NSP as one of the objectives in the pre-training and shows that the NSP task has a positive effect on the NLI and MRC tasks. Although the primary study of XLNet and RoBERTa suggests that NSP is ineffective when the model is trained with a large sequence length of 512, the effect of NSP on the NLI problems should still be emphasized. The inefficiency of NSP is likely because the expected context length will be halved for Masked LM when taking a sentence pair as the input. The models derived from BERT, e.g., StructBERT and ERNIE 1.0/2.0, aim to incorporating more knowledge by elaborating pre-training objectives. This work aims to enhance the NSP task and verifies whether document-level information is helpful for the pre-training. To probe whether our method achieves a better regularization ability, our approach is also evaluated on the HANS BIBREF0 dataset, which contains hard data samples constructed by three heuristics. Previous advanced models such as BERT fail on the HANS dataset, and the test accuracy can barely exceed 0% in the subset of test examples. Related Work ::: Unsupervised learning from document In recent years, many unsupervised pre-training methods have been proposed in the NLP fields to extract knowledge among sentences DBLP:conf/nips/KirosZSZUTF15,DBLP:conf/emnlp/ConneauKSBB17,DBLP:conf/iclr/LogeswaranL18,DBLP:journals/corr/abs-1903-09424. The prediction of surrounding sentences endows the model with the ability to model the sentence-level coherence. Skip-Thought BIBREF8 consists of an encoder and two decoders. When a sentence is given and encoded into a vector by the encoder, the decoders are trained to predict the next sentence and the previous sentence. The goal is to obtain a better sentence representation that is useful for reconstructing the surrounding context. Considering that the estimation of the likelihood of sequences is computationally expensive and time-consuming, the Quick-Thought method BIBREF9 simplifies this in a manner similar to sampled softmax BIBREF10, which classifies the input sentences between surrounding sentences and the other. 
Note that Quick-Thought does not distinguish between the previous and next sentence as it is functionally rotation invariant. However, BERT is order-dependent, and the discrimination can provide more supervision signal for semantic learning. InferSent BIBREF11 instead pre-trains the model in a manner of supervised learning. It uses a large-scale NLI dataset as the pre-training task to learn the sentence representation. In our work, we focus on designing a more effective document-level objective, extended from the NSP task. The proposed method will be described in the following section and validated by providing extensive experimental results in the experiment part. Method Our method follows the same input format and the model architecture with original BERT. The proposed method solely concerns the NSP task. The NSP task is a binary classification task, which takes two sentences (A and B) as input and determines whether B is the next sentence of A. Although it has been proven to be very effective for BERT, there are two major deficiencies. (1) Discrimination between IsNext and DiffDoc (the label of the sentences drawn from different documents via negative sampling) is semantically shallow as the signal of sentence order is absent. The correlation between two successive sentences could be obvious, due to, for example, lexical overlap or the conjunction used at the beginning of the second sentence. As reported BIBREF1, the final pre-trained model is able to achieve 97%-98% accuracy on the NSP task. (2) BERT is order-sensitive, i.e., $f_{\text{BERT}}( \texttt {A}, \texttt {B}) \ne f_{\text{BERT}}(\texttt {B}, \texttt {A})$, while NSP is uni-directional. When the order of the input NLI pair is reversed, the performance will degrade. For instance, the accuracy decreases by about 0.5% on MNLI BIBREF12 and 0.4% on QNLI after swapping the sentences in our experiments . Motivated by these problems, we propose to extend the NSP task with previous sentence prediction (PSP). Despite its simplicity, empirical results show that this is beneficial for downstream tasks, including both NLI and MRC tasks. To further incorporate the document-level information, the scope is also expanded to include more surrounding sentences, not just the adjacent. The method is briefly illustrated in Fig. FIGREF6. Method ::: Previous Sentence Prediction Learning to recognize the previous sentence enables the model to capture more compact context information. One would argue that IsPrev (the label of PSP) is redundant as it plays a similar role of IsNext (the label of NSP). In fact, Quick-Thought uses the sampled softmax to approximate the sentence likelihood estimation of Skip-Thought, and it actually does not differentiate between the previous and next sentences. However, we suggest the order discrimination is essential for BERT pre-training. Quick-Thought aims at extracting sentence embedding, and it uses a rotating symmetric function, which makes IsPrev redundant in Quick-Thought. In contrast, BERT is order-sensitive, and learning the symmetric regularization is rather necessary. Another advantage of PSP is to enhance document-level supervision. In order to tell the difference between NSP and PSP, the model has to extract the detailed semantic for inference. Method ::: Gathering More Document-level Information Beyond NSP and PSP, which enable the model to learn the short-term dependency between sentences, we also propose to expand the scope of discrimination task to further incorporate the document-level information. 
Specifically, we also include the in-adjacent sentences in the sentence-pair classification task. The in-adjacent sentences next to the IsPrev and IsNext sentences are sampled, labeled as IsPrevInadj and IsNextInadj (cf. the bottom of Fig. FIGREF6). Note that these in-adjacent sentences will introduce much more training noise to the model. Therefore, the label smoothing technique is adopted to reduce the noise of these additional samples. It achieves this by relaxing our confidence on the labels, e.g., transforming the target probability from (1.0, 0.0) to (0.8, 0.2) in a binary classification problem. In summary, when A is given, the pre-training example for each label is constructed as follows: IsNext: Choosing the adjacent following sentence as B. IsPrev: Choosing the adjacent previous sentence as B. IsNextInadj: Choosing the in-adjacent following sentence as B. There is a sentence between A and B. IsPrevInadj: Choosing the in-adjacent previous sentence as B. There is a sentence between A and B. DiffDoc: Drawing B randomly from a different document. Experiment Settings This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768. To accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\beta _1$ of 0.9, a $\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14. For the Masked LM task, we followed the same masking rate and settings as in BERT. We explore three method settings for comparison. BERT-PN: The NSP task in BERT is replaced by a 3-class task with IsNext, IsPrev and DiffDoc. The label distribution is 1:1:1. BERT-PN5cls: The NSP task in BERT is replaced by a 5-class task with two additional labels IsNextInadj, IsPrevInadj. The label distribution is 1:1:1:1:1. BERT-PNsmth: It uses the same data with BERT-PN5cls, except that the IsPrevInadj (IsNextInadj) label is mapped to IsPrev (IsNext) with a label smoothing factor of 0.8. BERT-PN is used to verify the feasibility of PSP. The comparison with BERT-PN5cls illustrates whether more document-level information helps. BERT-PNsmth, which is the label-smoothed version of BERT-PN5cls, is used to compare with BERT-PN5cls to see whether the noise reduction is necessary. In the following, we first show that BERT is order-sensitive and the use of PSP remedies this problem. Then we provide experimental results on the NLI and MRC tasks to verify the effectiveness of the proposed method. At last, the proposed method is evaluated on several Chinese datasets. Order-invariant with PSP NSP in the pre-training is useful for NLI and MRC task BIBREF1. However, we suggested that BERT trained with NSP is order-sensitive, i.e., the performance of BERT depends on the order of the input sentence pair. To verify our assumption, a primary experiment was conducted. The order of the input pair of NLI samples is reversed in the fine-tuning phase, and other hyper-parameters and settings keep the same with the BERT paper. 
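As a minimal sketch of how the five kinds of sentence pairs listed above could be generated, and of one plausible reading of the label-smoothing step, consider the following; the text fixes the 0.8 factor but not how the residual probability mass is distributed, so spreading it evenly over the remaining classes is an assumption made here for illustration.

```python
import random

def make_pairs(doc_sents, other_doc_sents):
    """Builds (A, B, label) pre-training pairs following the rules above."""
    pairs = []
    for i, a in enumerate(doc_sents):
        if i + 1 < len(doc_sents):
            pairs.append((a, doc_sents[i + 1], "IsNext"))        # adjacent following
        if i - 1 >= 0:
            pairs.append((a, doc_sents[i - 1], "IsPrev"))        # adjacent previous
        if i + 2 < len(doc_sents):
            pairs.append((a, doc_sents[i + 2], "IsNextInadj"))   # one sentence in between
        if i - 2 >= 0:
            pairs.append((a, doc_sents[i - 2], "IsPrevInadj"))
        pairs.append((a, random.choice(other_doc_sents), "DiffDoc"))  # negative sample
    return pairs

def smoothed_target(label, factor=0.8):
    """3-way target (IsNext, IsPrev, DiffDoc) for the PNsmth setting.
    In-adjacent labels map to the corresponding adjacent label with confidence
    `factor`; distributing the remainder evenly is an assumption."""
    index = {"IsNext": 0, "IsPrev": 1, "DiffDoc": 2}
    target = [0.0, 0.0, 0.0]
    if label in index:
        target[index[label]] = 1.0
    else:
        base = "IsNext" if label == "IsNextInadj" else "IsPrev"
        target[index[base]] = factor
        for k in set(index.values()) - {index[base]}:
            target[k] = (1.0 - factor) / 2
    return target
```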
Table TABREF19 shows the accuracy on the validation set of the MNLI and QNLI datasets. For the BERTBase model, when the sentences are swapped, the accuracy decreases by 0.5% on the MNLI task and 0.4% on the QNLI task. These results confirm that BERT trained with NSP only is indeed affected by the input order. This phenomenon motivates us to make the NSP task symmetric. The results of BERT-PN verify that BERT-PN is order-invariant. When the input order is reversed, the performance of BERT-PN remains stable. These results indicate that our method is able to remedy the order-sensitivity problem. Results of NLI Tasks ::: GLUE A popular benchmark for evaluation of language understanding is GLUE BIBREF2, which is a collection of three NLI tasks (MNLI, QNLI and RTE), three semantic textual similarity (STS) tasks (QQP, STS-B and MRPC), two text classification (TC) tasks (SST-2 and CoLA). Although the method is motivated for pair-wise reasoning, the results of other problems are also listed. Our implementation follows the same way that BERT performs in these tasks. The fine-tuning was conducted for 3 epochs for all the tasks, with a learning rate of 2e-5. The predictions were obtained by evaluating the training checkpoint with the best validation performance. Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase. Besides NLI, our model also performs better than BERTBase in the STS task. The STS tasks are semantically similar to the NLI tasks, and hence able to take advantage of PSP as well. Actually, the proposed method has a positive effect whenever the input is a sentence pair. The improvements suggest that the PSP task encourages the model to learn more detailed semantics in the pre-training, which improves the model on the downstream learning tasks. Moreover, our method is surprisingly able to achieve slightly better results in the single-sentence problem. The improvement should be attributed to better semantic representation. When comparing between PN and PN5cls, PN5cls achieves better results than PN. This indicates that including a broader range of the context is effective for improving inference ability. Considering that the representation of IsNext and IsNextInadj should be coherent, we propose BERTBase-PNsmth to mitigate this problem. PNsmth further improves the performance and obtains an averaged score of 81.0. Results of NLI Tasks ::: HANS Although BERT has shown its effectiveness in the NLI tasks. BIBREF0 pointed out that BERT is still vulnerable in the NLI task as it is prone to adopting fallible heuristics. Therefore, they released a dataset, called The Heuristic Analysis for NLI Systems (HANS), to probe whether the model learns inappropriate inductive bias from the training set. It is constructed by three heuristics, i.e., lexical overlap heuristic, sub-sequence heuristic, and constituent heuristic. The first heuristic assumes that a premise entails all hypotheses constructed from words in the premise, the second assumes that a premise entails all of its contiguous sub-sequences and the third assumes that a premise entails all complete sub-trees in its parse tree. BERT and other advanced models fail on this dataset and barely exceeds 0% accuracy in most cases BIBREF0. Fig. FIGREF23 illustrates the accuracy of BERTBase and BERTBase-PNsmth on the HANS dataset. 
The evaluation is made upon the model trained on the MNLI dataset and the predicted neutral and contradiction labels are mapped into non-entailment. The BERTBase-PNsmth evidently outperforms the BERTBase with the non-entailment examples. For the non-entailment samples constructed using the lexical overlap heuristic, our model achieves 160% relative improvement over the BERTBase model. Some samples are constructed by swapping the entities in the sentence (e.g., The doctor saw the lawyer $\nrightarrow $ The lawyer saw the doctor) and our method outperforms BERTBase by 20% in accuracy. We suggest that the Masked LM task can hardly model the relationship between two entities and NSP only is too semantically shallow to capture the precise meaning. However, the discrimination between NSP and PSP enhances the model to realize the role of entities in a given sentence. For example, to determine that A (X is beautiful) rather than $\bar{\texttt {A}}$ (Y is beautiful) is the previous sentence of B (Y loves X), the model have to recognize the relationship between X and Y. In contrast, when PSP is absent, NSP can be probably inferred by learning the occurrence between beautiful and loves, regardless of the sentence structure. The detailed performance of the proposed method on the HANS dataset is illustrated in Fig. FIGREF24. The definition of each heuristic rules can be found in BIBREF0. Results of MRC Tasks ::: SQuAD v1.1 and v2.0 We also evaluate our method on the MRC tasks. The Stanford Question Answering Dataset (SQuAD v1.1) is a question answering (QA) dataset, which consists of 100K samples BIBREF15. Each data sample has a question and a corresponding Wikipedia passage that contains the answer. The goal is to extract the answer from the passage for the given question. In the fine-tuning procedure, we follow the exact way the BERT performed. The output vectors are used to compute the score of tokens being start and end of the answer span. The valid span that has the maximum score is selected as the prediction. And similarly, the fine-tuning training was performed for 3 epochs with a learning rate of 3e-5. Table TABREF26 demonstrates the results on the SQuAD v1.1 dataset. The comparison between BERTBase-PN and BERTBase indicates that the inclusion of the PSP subtask is beneficial (2.4% absolute improvement). When using BERTBase-PNsmth, another 0.3% increase in EM can be obtained. The experimental results on the SQuAD v2.0 BIBREF16 are also shown in Table. TABREF26. The SQuAD v2.0 differs from SQuAD v1.1 by allowing the question-paragraph pairs that have no answer. For SQuAD v2.0, our method also achieved about 4% absolute improvement in both EM and F1 against BERTBase. Results of MRC Tasks ::: RACE The ReAding Comprehension from Examinations (RACE) dataset BIBREF17 consists of 100K questions taken from English exams, and the answers are generated by human experts. This is one of the most challenging MRC datasets that require sophisticated reasoning. In our implementation, the question, document, and option are concatenated as a single sequence, separated by [SEP] token. And each part is truncated by a maximal length of 40/432/40, respectively. The model computes for a concatenation a scalar as the score, which is then used in a softmax layer for the final prediction. The fine-tuning was conducted for 5 epochs, with a batch size of 32 and a learning rate of 5e-5. As shown in Table TABREF28, the proposed method significantly improve the performance on the RACE dataset. 
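The multiple-choice setup for RACE described above can be sketched as follows; the `encode` and `score_head` callables and the id-level tokenization are assumed interfaces introduced for illustration, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def score_options(encode, score_head, question_ids, document_ids, option_ids_list,
                  sep_id, max_lens=(40, 432, 40)):
    """Sketch of the RACE scoring described above: each option is concatenated
    with the truncated question and document, mapped to a scalar score, and a
    softmax is taken across options. `encode` (token ids -> pooled vector) and
    `score_head` (e.g. torch.nn.Linear(hidden_size, 1)) are assumed interfaces."""
    q_len, d_len, o_len = max_lens
    scores = []
    for option_ids in option_ids_list:
        ids = (question_ids[:q_len] + [sep_id]
               + document_ids[:d_len] + [sep_id]
               + option_ids[:o_len])
        pooled = encode(torch.tensor(ids).unsqueeze(0))   # (1, hidden_size)
        scores.append(score_head(pooled).squeeze())       # scalar score per option
    return F.softmax(torch.stack(scores), dim=0)          # distribution over options
```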
BERTBase-PN obtains 2.6% accuracy improvement, and BERTBase-PN5cls further brings 0.4% absolute gain. The comparisons on the SQuAD v1.1, SQuAD v2.0, and RACE dataset demonstrate that the involvement of additional sentence and discourse information is not only beneficial for the NLI task but also the MRC task. This is reasonable as these tasks heavily rely on the global semantic understanding and sophisticated reasoning among sentences. And this ability can be effectively enhanced by our method. Results of Chinese NLP Tasks The experiments are also conducted on Chinese NLP tasks: XNLI BIBREF19 a multi-lingual dataset. The data sample in XNLI is a sentence pair annotated with textual entailment. The Chinese part is used. LCQMC BIBREF20 is a dataset for sequence matching. A binary label is annotated for a sentence pair in the dataset to indicate whether these two sentences have the same intention. NLPCC-DBQA BIBREF21 formulates the domain-based question answering as a binary classification task. Each data sample is a question-sentence pair. The goal is to identify whether the sentence contains the answer to the question. CMRC-2018 is the Chinese Machine Reading Comprehension dataset. Similar to SQuAD, the system needs to extract fragments from the text as the answer. DRCD BIBREF22 is also a Chinese MRC data set. The data follows the format of SQuAD. For Chinese NLP tasks, we pre-train the model using Chinese corpus. We collected textual data (10879M tokens in total) from the website, consisting of Hudong Baike data (6084M tokens) , Zhihu data(465M tokens) , Sohu News(3937M tokens) and Wikipedia data (393M tokens). For the first 3 Chinese tasks, we follow the settings as in ERNIE BIBREF4. The experimental results are given in Table TABREF29. The proposed method is compared with four models, i.e., BERTBase BIBREF1, BERTBase with whole word masking BIBREF18, ERNIE BIBREF4 and ERNIE 2.0 BIBREF5. Our method achieves comparable or even better results against ERNIE 2.0 BIBREF5. Note that the Chinese ERNIE 2.0 is equipped with 5 different objectives and it uses more training data (14988M tokens in total) than ours. The results indicate that the proposed method is quite effective for the pair-wise semantic reasoning as simply including PSP can achieve the results on par with multiple objectives. The results of CMRC-2018 and DRCD datasets are given in Table TABREF30. Since the CMRC-2018 competition does not release the test set, the comparison on the test set is absent. Our results are obtained using the open-sourced code of BERT-wwm . We keep the hyper-parameters the same with that in ERNIE BIBREF4, except that the batch size is 12 instead of 64 due to the memory limit. Under this setting, we achieved similar results of BERTBase in the BERT-wwm paper BIBREF18. However, this is worse than the results of BERTBase reported in the ERNIE 2.0 paper BIBREF5 by about 1% in F1. This suggests that our results are currently incomparable with ERNIE 2.0. Overall, the results in Table TABREF30 illustrate that our method is also effective for the Chinese QA tasks. Conclusion This paper aims to enrich the NSP task to provide more document-level information in the pre-training. Motivated by the in-symmetric property of NSP, we propose to differentiate between different sentence orders by including PSP. Despite the simplicity, extensive experiments demonstrate that the model obtains a better ability in pair-wise semantic reasoning. 
Our work suggests that the document-level objective is effective, at least for the BERTbase model. In future work, we will investigate ways to take advantage of both large-scale training and our method.
[ "BERTbase", "BERTbase" ]
3,852
qasper_e
en
null
2a51c07e65a9214ed2cd3c04303afa205e005f4e1ccb172a
what keyphrase extraction models were reassessed?
Introduction This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ Recent years have seen a surge of interest in automatic keyphrase extraction, thanks to the availability of the SemEval-2010 benchmark dataset BIBREF0 . This dataset is composed of documents (scientific articles) that were automatically converted from PDF format to plain text. As a result, most documents contain irrelevant pieces of text (e.g. muddled sentences, tables, equations, footnotes) that require special handling, so as to not hinder the performance of keyphrase extraction systems. In previous work, these are usually removed at the preprocessing step, but using a variety of techniques ranging from simple heuristics BIBREF1 , BIBREF2 , BIBREF3 to sophisticated document logical structure detection on richly-formatted documents recovered from Google Scholar BIBREF4 . Under such conditions, it may prove difficult to draw firm conclusions about which keyphrase extraction model performs best, as the impact of preprocessing on overall performance cannot be properly quantified. While previous work clearly states that efficient document preprocessing is a prerequisite for the extraction of high quality keyphrases, there is, to our best knowledge, no empirical evidence of how preprocessing affects keyphrase extraction performance. In this paper, we re-assess the performance of several state-of-the-art keyphrase extraction models at increasingly sophisticated levels of preprocessing. Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text. More precisely, our contributions are: Dataset and Preprocessing The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers). The input papers ranged from 6 to 8 pages and were converted from PDF format to plain text using an off-the-shelf tool. The only preprocessing applied is a systematic dehyphenation at line breaks and removal of author-assigned keyphrases. Scientific articles were selected from four different research areas as defined in the ACM classification, and were equally distributed into training (144 articles) and test (100 articles) sets. Gold standard keyphrases are composed of both author-assigned keyphrases collected from the original PDF files and reader-assigned keyphrases provided by student annotators. Long documents such as those in the SemEval-2010 benchmark dataset are notoriously difficult to handle due to the large number of keyphrase candidates (i.e. phrases that are eligible to be keyphrases) that the systems have to cope with BIBREF6 . Furthermore, noisy textual content, whether due to format conversion errors or to unusable elements (e.g. equations), yield many spurious keyphrase candidates that negatively affect keyphrase extraction performance. This is particularly true for systems that make use of core NLP tools to select candidates, that in turn exhibit poor performance on degraded text. Filtering out irrelevant text is therefore needed for addressing these issues. 
In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below. Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduce the number of words to less than a quarter of the original amount while interestingly still preserving high recall. Keyphrase Extraction Models We re-implemented five keyphrase extraction models : the first two are commonly used as baselines, the third is a resource-lean unsupervised graph-based ranking approach, and the last two were among the top performing systems in the SemEval-2010 keyphrase extraction task BIBREF0 . We note that two of the systems are supervised and rely on the training set to build their classification models. Document frequency counts are also computed on the training set. Stemming is applied to allow more robust matching. The different keyphrase extraction models are briefly described below: Each model uses a distinct keyphrase candidate selection method that provides a trade-off between the highest attainable recall and the size of set of candidates. Table summarizes these numbers for each model. Syntax-based selection heuristics, as used by TopicRank and WINGNUS, are better suited to prune candidates that are unlikely to be keyphrases. As for KP-miner, removing infrequent candidates may seem rather blunt, but it turns out to be a simple yet effective pruning method when dealing with long documents. For details on how candidate selection methods affect keyphrase extraction, please refer to BIBREF16 . Apart from TopicRank that groups similar candidates into topics, the other models do not have any redundancy control mechanism. Yet, recent work has shown that up to 12% of the overall error made by state-of-the-art keyphrase extraction systems were due to redundancy BIBREF6 , BIBREF17 . Therefore as a post-ranking step, we remove redundant keyphrases from the ranked lists generated by all models. A keyphrase is considered redundant if it is included in another keyphrase that is ranked higher in the list. Experimental settings We follow the evaluation procedure used in the SemEval-2010 competition and evaluate the performance of each model in terms of f-measure (F) at the top INLINEFORM0 keyphrases. We use the set of combined author- and reader-assigned keyphrases as reference keyphrases. Extracted and reference keyphrases are stemmed to reduce the number of mismatches. Results The performances of the keyphrase extraction models at each preprocessing level are shown in Table . Overall, we observe a significant increase of performance for all models at levels 3, confirming that document preprocessing plays an important role in keyphrase extraction performance. Also, the difference of f-score between the models, as measured by the standard deviation INLINEFORM0 , gradually decreases with the increasing level of preprocessing. This result strengthens the assumption made in this paper, that performance variation across models is partly a function of the effectiveness of document preprocessing. 
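The redundancy-removal post-ranking step mentioned above can be summarized in a few lines; the sketch below operates on surface strings, whereas the study matches stemmed forms.

```python
def remove_redundant(ranked_keyphrases):
    """Drops any keyphrase contained in a higher-ranked keyphrase; the input
    list is assumed to be ordered best-first."""
    kept = []
    for kp in ranked_keyphrases:
        if not any(kp in higher for higher in kept):
            kept.append(kp)
    return kept

# Example: ["automatic keyphrase extraction", "keyphrase extraction", "logical structure"]
# keeps "automatic keyphrase extraction" and "logical structure", dropping the substring.
```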
Somewhat surprisingly, the ordering of the two best models reverses at level 3. This showcases that rankings are heavily influenced by the preprocessing stage, despite the common lack of details and analysis it suffers from in explanatory papers. We also remark that the top performing model, namely KP-Miner, is unsupervised which supports the findings of BIBREF6 indicating that recent unsupervised approaches have rivaled their supervised counterparts in performance. In an attempt to quantify the performance variation across preprocessing levels, we compute the standard deviation INLINEFORM0 for each model. Here we see that unsupervised models are more sensitive to noisy input, as revealed by higher standard deviations. We found two main reasons for this. First, using multiple discriminative features to rank keyphrase candidates adds inherent robustness to the models. Second, the supervision signal helps models to disregard noise. In Table , we compare the outputs of the five models by measuring the percentage of valid keyphrases that are retrieved by all models at once for each preprocessing level. By these additional results, we aim to assess whether document preprocessing smoothes differences between models. We observe that the overlap between the outputs of the different models increases along with the level of preprocessing. This suggests that document preprocessing reduces the effect that the keyphrase extraction model in itself has on overall performance. In other words, the singularity of each model fades away gradually with increase in preprocessing effort. Reproducibility Being able to reproduce experimental results is a central aspect of the scientific method. While assessing the importance of the preprocessing stage for five approaches, we found that several results were not reproducible, as shown in Table . We note that the trends for baselines and high ranking systems are opposite: compared to the published results, our reproduction of top systems under-performs and our reproduction of baselines over-performs. We hypothesise that this stems from a difference in hyperparameter tuning, including the ones that the preprocessing stage makes implicit. Competitors have strong incentives to correctly optimize hyperparameters, to achieve a high ranking and get more publicity for their work while competition organizers might have the opposite incentive: too strong a baseline might not be considered a baseline anymore. We also observe that with this leveled preprocessing, the gap between baselines and top systems is much smaller, lessening again the importance of raw scores and rankings to interpret the shared task results and emphasizing the importance of understanding correctly the preprocessing stage. Additional experiments In the previous sections, we provided empirical evidence that document preprocessing weighs heavily on the outcome of keyphrase extraction models. This raises the question of whether further improvement might be gained from a more aggressive preprocessing. To answer this question, we take another step beyond content filtering and further abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique. More specifically, we keep the title and abstract intact, as they are the two most keyphrase dense parts within scientific articles BIBREF4 , and select only the most content bearing sentences from the remaining contents. 
To do this, sentences are ordered using TextRank BIBREF14 and the less informative ones, as determined by their TextRank scores normalized by their lengths in words, are filtered out. Finding the optimal subset of sentences from already shortened documents is however no trivial task as maximum recall linearly decreases with the number of sentences discarded. Here, we simply set the reduction ratio to 0.865 so that the average maximum recall on the training set does not lose more than 5%. Table shows the reduction in the average number of sentences and words compared to level 3 preprocessing. The performances of the keyphrase extraction models at level 4 preprocessing are shown in Table . We note that two models, namely TopicRank and TF INLINEFORM0 IDF, lose on performance. These two models mainly rely on frequency counts to rank keyphrase candidates, which in turn become less reliable at level 4 because of the very short length of the documents. Other models however have their f-scores once again increased, thus indicating that further improvement is possible from more reductive document preprocessing strategies. Conclusion In this study, we re-assessed the performance of several keyphrase extraction models and showed that performance variation across models is partly a function of the effectiveness of the document preprocessing. Our results also suggest that supervised keyphrase extraction models are more robust to noisy input. Given our findings, we recommend that future works use a common preprocessing to evaluate the interest of keyphrase extraction approaches. For this reason we make the four levels of preprocessing used in this study available for the community.
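A minimal sketch of the sentence-filtering step just described follows; it assumes the TextRank scores have already been computed (for example over a sentence-similarity graph) and reads the reduction ratio as the fraction of candidate sentences removed, which is an interpretation rather than something stated explicitly in the text.

```python
def filter_sentences(sentences, textrank_scores, reduction_ratio=0.865):
    """Keeps the most content-bearing sentences: rank by TextRank score
    normalized by length in words, drop the lowest-ranked fraction, and return
    the survivors in their original order. The title and abstract are assumed
    to be handled separately and kept intact."""
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: textrank_scores[i] / max(len(sentences[i].split()), 1),
        reverse=True,
    )
    n_keep = max(1, round(len(sentences) * (1.0 - reduction_ratio)))
    return [sentences[i] for i in sorted(ranked[:n_keep])]
```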
[ "Answer with content missing: (LVL1, LVL2, LVL3) \n- Stanford CoreNLP\n- Optical Character Recognition (OCR) system, ParsCIT \n- further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion." ]
1,822
qasper_e
en
null
867b7b94ee2ae6051131ca90ebc66df6ba3084f4bb3bdda8
Is pre-training effective in their evaluation?
Introduction Recently, neural machine translation (NMT) has gained popularity in the field of machine translation. The conventional encoder-decoder NMT proposed by Cho2014 uses two recurrent neural networks (RNN): one is an encoder, which encodes a source sequence into a fixed-length vector, and the other is a decoder, which decodes the vector into a target sequence. A newly proposed attention-based NMT by DzmitryBahdana2014 can predict output words using the weights of each hidden state of the encoder by the attention mechanism, improving the adequacy of translation. Even with the success of attention-based models, a number of open questions remain in NMT. Tu2016 argued two of the common problems are over-translation: some words are repeatedly translated unnecessary and under-translation: some words are mistakenly untranslated. This is due to the fact that NMT can not completely convert the information from the source sentence to the target sentence. Mi2016a and Feng2016 pointed out that NMT lacks the notion of coverage vector in phrase-based statistical machine translation (PBSMT), so unless otherwise specified, there is no way to prevent missing translations. Another problem in NMT is an objective function. NMT is optimized by cross-entropy; therefore, it does not directly maximize the translation accuracy. Shen2016 pointed out that optimization by cross-entropy is not appropriate and proposed a method of optimization based on a translation accuracy score, such as expected BLEU, which led to improvement of translation accuracy. However, BLEU is an evaluation metric based on n-gram precision; therefore, repetition of some words may be present in the translation even though the BLEU score is improved. To address to problem of repeating and missing words in the translation, tu2016neural introduce an encoder-decoder-reconstructor framework that optimizes NMT by back-translation from the output sentences into the original source sentences. In their method, after training the forward translation in a manner similar to the conventional attention-based NMT, they train a back-translation model from the hidden state of the decoder into the source sequence by a new decoder to enforce agreement between source and target sentences. In order to confirm the language independence of the framework, we experiment on two parallel corpora of English-Japanese and Japanese-English translation tasks using encode-decoder-reconstructor. Our experiments show that their method offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on English-Japanese translation task, though the difference is not significant on Japanese-English translation task. In addition, we jointly train a model of forward translation and back-translation without pre-training, and then evaluate this model. As a result, the encoder-decoder-reconstructor can not be trained well without pre-training, so it proves that we have to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training. The main contributions of this paper are as follows: Related Works Several studies have addressed the NMT-specific problem of missing or repeating words. Niehues2016 optimized NMT by adding the outputs of PBSMT to the input of NMT. Mi2016a and Feng2016 introduced a distributed version of coverage vector taken from PBSMT to consider which words have been already translated. 
All these methods, including ours, employ information of the source sentence to improve the quality of translation, but our method uses back-translation to ensure that there is no inconsistency. Unlike other methods, once learned, our method is identical to the conventional NMT model, so it does not need any additional parameters such as coverage vector or a PBSMT system for testing. The attention mechanism proposed by Meng2016 considers not only the hidden states of the encoder but also the hidden states of the decoder so that over-translation can be relaxed. In addition, the attention mechanism proposed by Feng2016 computes a context vector by considering the previous context vector to prevent over-translation. These works indirectly reduce repeating and missing words, while we directly penalize translation mismatch by considering back-translation. The encoder-decoder-reconstructor framework for NMT proposed by tu2016neural optimizes NMT by reconstructor using back-translation. They consider likelihood of both of forward translation and back-translation, and then this framework offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on a Chinese-English translation task. Neural Machine Translation Here, we describe the attention-based NMT proposed by DzmitryBahdana2014 as shown in Figure FIGREF1 . The input sequence ( INLINEFORM0 ) is converted into a fixed-length vector by the encoder using an RNN. At each time step INLINEFORM1 , the hidden state INLINEFORM2 of the encoder is presented as DISPLAYFORM0 using a bidirectional RNN. The forward state INLINEFORM0 and the backward state INLINEFORM1 are computed by DISPLAYFORM0 and DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are nonlinear functions. The hidden states INLINEFORM2 are converted into a fixed-length vector INLINEFORM3 as DISPLAYFORM0 where INLINEFORM0 is a nonlinear function. The fixed-length vector INLINEFORM0 generated by the encoder is converted into the target sequence ( INLINEFORM1 ) by the decoder using an RNN. At each time step INLINEFORM2 , the conditional probability of the output word INLINEFORM3 is computed by DISPLAYFORM0 where INLINEFORM0 is a nonlinear function. The hidden state INLINEFORM1 of the decoder is presented as DISPLAYFORM0 using the hidden state INLINEFORM0 and the target word INLINEFORM1 at the previous time step and the context vector INLINEFORM2 . The context vector INLINEFORM0 is a weighted sum of each hidden state INLINEFORM1 of the encoder. It is presented as DISPLAYFORM0 and its weight INLINEFORM0 is a normalized probability distribution. It is computed by DISPLAYFORM0 and DISPLAYFORM0 where INLINEFORM0 is a weight vector and INLINEFORM1 and INLINEFORM2 are weight matrices. The objective function is defined by DISPLAYFORM0 where INLINEFORM0 is the number of data and INLINEFORM1 is a model parameter. Incidentally, as a nonlinear function, the hyperbolic tangent function or the rectified linear unit are generally used. Architecture Next, we describe the encoder-decoder-reconstructor framework for NMT proposed by tu2016neural as shown in Figure FIGREF1 . The encoder-decoder-reconstructor consists of two components: the standard encoder-decoder as an attention-based NMT proposed by DzmitryBahdana2014 and the reconstructor which back-translates from the hidden states of decoder to the source sentence. 
In their method, the hidden state of the decoder is back-translated into the source sequence ( INLINEFORM0 ) by the reconstructor for the back-translation. At each time step INLINEFORM1 , the conditional probability of the output word INLINEFORM2 is computed by DISPLAYFORM0 where INLINEFORM0 is a nonlinear function. The hidden state INLINEFORM1 of the reconstructor is presented as DISPLAYFORM0 using the hidden state INLINEFORM0 and the source word INLINEFORM1 at the previous time step and the new context vector (inverse context vector) INLINEFORM2 . The inverse context vector INLINEFORM0 is a weighted sum of each hidden state INLINEFORM1 of the decoder (on forward translation). It is presented as DISPLAYFORM0 and its weight INLINEFORM0 is a normalized probability distribution. It is computed by DISPLAYFORM0 and DISPLAYFORM0 where INLINEFORM0 is a weight vector and INLINEFORM1 and INLINEFORM2 are weight matrices. The objective function is defined by DISPLAYFORM0 where INLINEFORM0 is the number of data, INLINEFORM1 and INLINEFORM2 are model parameters and INLINEFORM3 is a hyper-parameter which can consider the weight between forward translation and back-translation. This objective function consists of two parts: forward measures translation fluency, and backward measures translation adequacy. Thus, the combined objective function is more consistent with the goal of enhancing overall translation quality, and can more effectively guide the parameter training for making better translation. Training The encoder-decoder-reconstructor is trained with likelihood of both the encoder-decoder and the reconstructor on a set of training datasets. tu2016neural trained a back-translation model from the hidden state of the decoder into the source sequence by reconstructor to enforce agreement between source and target sentences using Equation EQREF23 after training the forward translation in a manner similar to the conventional attention-based NMT using Equation EQREF13 . In addition, we experiment to jointly train a model of forward translation and back-translation without pre-training. It may learn a globally optimal model compared to locally optimal model pre-trained using the forward translation. Testing tu2016neural used a beam search to predict target sentences that approximately maximizes both of forward translation and back-translation on testing. In this paper, however, we do not use a beam search for simplicity and effectiveness. Experiments We evaluated the encoder-decoder-reconstructor framework for NMT on English-Japanese and Japanese-English translation tasks. Datasets We used two parallel corpora: Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF0 and NTCIR PatentMT Parallel Corpus BIBREF1 . Regarding the training data of ASPEC, we used only the first 1 million sentences sorted by sentence-alignment similarity. Japanese sentences were segmented by the morphological analyzer MeCab (version 0.996, IPADIC), and English sentences were tokenized by tokenizer.perl of Moses. Table TABREF14 shows the numbers of the sentences in each corpus. Note that sentences with more than 40 words were excluded from the training data. Models We used the attention-based NMT BIBREF2 as a baseline-NMT, the encoder-decoder-reconstructor BIBREF3 and the encoder-decoder-reconstructor that jointly trained forward translation and back-translation without pre-training. The RNN used in the experiments had 512 hidden units, 512 embedding units, 30,000 vocabulary size and 64 batch size. 
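The equation placeholders above elide the actual formulas, so the following is only a hedged reconstruction of the combined objective, written in the standard encoder-decoder-reconstructor form that matches the surrounding description; theta and gamma denote the encoder-decoder and reconstructor parameters, s(n) the decoder hidden states for the n-th training pair, and lambda the hyper-parameter weighting forward translation against back-translation.

```latex
% Hedged reconstruction of the joint training objective described above; the
% exact equation is elided in the text, so this follows the standard
% encoder-decoder-reconstructor formulation and is an assumption rather than
% the authors' precise definition.
J(\theta, \gamma) = \sum_{n=1}^{N} \left[
    \log P\big(\mathbf{y}^{(n)} \mid \mathbf{x}^{(n)}; \theta\big)
    + \lambda \, \log P\big(\mathbf{x}^{(n)} \mid \mathbf{s}^{(n)}; \theta, \gamma\big)
\right]
```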
We used Adagrad (initial learning rate 0.01) for optimizing model parameters. We trained our models on a GeForce GTX TITAN X GPU. Note that we set the hyper-parameter INLINEFORM0 of the encoder-decoder-reconstructor to the same value as tu2016neural. Results Tables TABREF15 and TABREF16 show the translation accuracy in BLEU scores, the INLINEFORM0 -value of the significance test by bootstrap resampling BIBREF4 and training time in hours until convergence. The encoder-decoder-reconstructor BIBREF3 requires slightly more time to train than the baseline NMT, but we emphasize that decoding time remains the same for the encoder-decoder-reconstructor and the baseline-NMT. The results show that the encoder-decoder-reconstructor BIBREF3 significantly improves translation accuracy by 1.01 points on ASPEC and 1.37 points on NTCIR in English-Japanese translation ( INLINEFORM1 ). However, it does not significantly improve translation accuracy in Japanese-English translation. In addition, we find that the encoder-decoder-reconstructor without pre-training worsens rather than improves translation accuracy. Table TABREF24 shows examples of outputs of English-Japanese translations. In Example 1, “乱 流 粘性 と 数値 粘性 の 大小 関係 により ,” (on the basis of the relation between turbulent viscosity and numerical viscosity in size) is missing in the output of baseline-NMT, but “乱 流 粘性 と 数値 的 粘性 の 関係 を 基 に” (on the basis of the relation between turbulent viscosity and numerical viscosity) is present in the output of encoder-decoder-reconstructor. In Example 2, “新生児” (newborn infant) and “30歳以上の” (of 30 ‐ year ‐ old or more) are repeated in the output of baseline-NMT, but they appear only once in the output of encoder-decoder-reconstructor. In addition, Figures FIGREF25 and FIGREF25 show the attention layers of baseline-NMT and encoder-decoder-reconstructor in each example. In Figure FIGREF25 , although the attention layer of baseline-NMT attends the input word “turbulent”, the decoder does not output “乱流” (turbulent) but “検討” (examined) at the 13th word. Thus, under-translation may result from the hidden layer or the embedding layer rather than the attention layer. In Figure FIGREF25 , the attention layer of baseline-NMT repeatedly attends the input words “newborn infant” and “30 ‐ year ‐ old or more”. Consequently, the decoder repeatedly outputs “新生児” (newborn infant) and “30歳以上の” (of 30 ‐ year ‐ old or more). On the other hand, the attention layer of the encoder-decoder-reconstructor almost correctly attends the input words. Table TABREF28 shows a comparison of the number of word occurrences for each corpus and model. The columns show (i) the number of words that appear more frequently than their counterparts in the reference, and (ii) the number of words that appear more than once but are not included in the reference. Note that these numbers do not include unknown words, so (iii) shows the number of unknown words. In all cases, the number of occurrences of redundant words is reduced with the encoder-decoder-reconstructor. Thus, we confirmed that the encoder-decoder-reconstructor reduces repeating and missing words while maintaining translation quality. Conclusion In this paper, we evaluated the encoder-decoder-reconstructor on English-Japanese and Japanese-English translation tasks. In addition, we evaluated the effectiveness of pre-training by comparing it with a jointly trained model of forward translation and back-translation. 
Experimental results show that the encoder-decoder-reconstructor offers a significant improvement in BLEU scores and alleviates the problem of repeating and missing words on the English-Japanese translation task. They also show that the encoder-decoder-reconstructor cannot be trained well without pre-training, which indicates that the forward translation model must first be trained in the manner of the conventional attention-based NMT as pre-training.
[ "Yes", "Yes" ]
2,077
qasper_e
en
null
38b265f4caee01966b67dca127276d17f640322b7a11b3f4
what datasets were used?
Introduction Summarization of patient information is essential to the practice of medicine. Clinicians must synthesize information from diverse data sources to communicate with colleagues and provide coordinated care. Examples of clinical summarization are abundant in practice; patient handoff summaries facilitate provider shift change, progress notes provide a daily status update for a patient, oral case presentations enable transfer of information from overnight admission to the care team and attending, and discharge summaries provide information about a patient's hospital visit to their primary care physician and other outpatient providers BIBREF0 . Informal, unstructured, or poor quality summaries can lead to communication failures and even medical errors, yet clinical instruction on how to formulate clinical summaries is ad hoc and informal. Non-existent or limited search functionality, fragmented data sources, and limited visualizations in electronic health records (EHRs) make summarization challenging for providers BIBREF1 , BIBREF2 , BIBREF3 . Furthermore, while dictation of EHR notes allows clinicians to more efficiently document information at the point of care, the stream of consciousness-like writing can hinder the readability of notes. Kripalani et al. show that discharge summaries are often lacking key information, including treatment progression and follow-up protocols, which can hinder communication between hospital and community based clinicians BIBREF4 . Recently, St. Thomas Hospital in Nashville, TN stipulated that discharge notes be written within 48 hours of discharge following incidences where improper care was given to readmitted patients because the discharge summary for the previous admission was not completed BIBREF5 . Automated summary generation has the potential to save clinician time, avoid medical errors, and aid clinical decision making. By organizing and synthesizing a patient's salient medical history, algorithms for patient summarization can enable better communication and care, particularly for chronically ill patients, whose medical records often contain hundreds of notes. In this work, we explore the automatic summarization of discharge summary notes, which are critical to ensuring continuity of care after hospitalization. We (1) provide an upper bound on extractive summarization by assessing how much information in the discharge note can be found in the rest of the patient's EHR notes and (2) develop a classifier for labeling the topics of history of present illness notes, a narrative section in the discharge summary that describes the patient's prior history and current symptoms. Such a classifier can be used to create topic specific evaluation sets for methods that perform extractive summarization. These aims are critical steps in ultimately developing methods that can automate discharge summary creation. Related Work In the broader field of summarization, automization was meant to standardize output while also saving time and effort. Pioneering strategies in summarization started by extracting "significant" sentences in the whole corpus to build an abstract where "significant" sentences were defined by the number of frequently occurring words BIBREF6 . These initial methods did not consider word meaning or syntax at either the sentence or paragraph level, which made them crude at best. 
More advanced extractive heuristics like topic modeling BIBREF7 , cue word dictionary approaches BIBREF8 , and title methods BIBREF9 for scoring content in a sentence followed soon after. For example, topic modeling extends initial frequency methods by assigning topics scores by frequency of topic signatures, clustering sentences with similar topics, and finally extracting the centroid sentence, which is considered the most representative sentence BIBREF10 . Recently, abstractive summarization approaches using sequence-to-sequence methods have been developed to generate new text that synthesizes original text BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ; however, the field of abstractive summarization is quite young. Existing approaches within the field of electronic health record summarization have largely been extractive and indicative, meaning that summaries point to important pieces in the original text rather than replacing the original text altogether. Few approaches have been deployed in practice, and even fewer have demonstrated impact on quality of care and outcomes BIBREF15 . Summarization strategies have ranged from extraction of “relevant” sentences from the original text to form the summary BIBREF16 , topic modeling of EHR notes using Latent Dirichlet allocation (LDA) or bayesian networks BIBREF15 , and knowledge based heuristic systems BIBREF17 . To our knowledge, there is no literature to date on extractive or abstractive EHR summarization using neural networks. Data MIMIC-III is a freely available, deidentified database containing electronic health records of patients admitted to an Intensive Care Unit (ICU) at Beth Israel Deaconess Medical Center between 2001 and 2012. The database contains all of the notes associated with each patient's time spent in the ICU as well as 55,177 discharge reports and 4,475 discharge addendums for 41,127 distinct patients. Only the original discharge reports were included in our analyses. Each discharge summary was divided into sections (Date of Birth, Sex, Chief Complaint, Major Surgical or Invasive Procedure, History of Present Illness, etc.) using a regular expression. Upper Bound on Summarization Extractive summarization of discharge summaries relies on the assumption that the information in the discharge summary is documented elsewhere in the rest of the patient's notes. However, sometimes clinicians will document information in the discharge summary that may have been discussed throughout the hospital visit, but was never documented in the EHR. Thus, our first aim was to determine the upper bound of extractive summarization. For each patient, we compared the text of the discharge summary to the remaining notes for the patient's current admission as well as their entire medical record. Concept Unique Identifiers (CUIs) from the Unified Medical Language System (UMLS) were compared in order to assess whether clinically relevant concepts in the discharge summary could be located in the remaining notes BIBREF18 . CUIs were extracted using Apache cTAKES BIBREF19 and filtered by removing the CUIs that are already subsumed by a longer spanning CUI. For example, CUIs for "head" and "ache" were removed if a CUI existed for "head ache" in order to extract the most clinically relevant CUIs. 
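The CUI comparison can be illustrated with a short sketch. It assumes each extracted concept is available as a (cui, start, end) character span, which cTAKES can provide; the function names and the exact subsumption test are our simplifications rather than the authors' code.

```python
# Sketch of the CUI comparison: concepts whose text span is strictly
# contained in a longer concept's span are dropped, and recall is the
# fraction of discharge-summary CUIs found in the other notes.
def filter_subsumed(concepts):
    kept = []
    for cui, start, end in concepts:
        subsumed = any(s <= start and end <= e and (e - s) > (end - start)
                       for _, s, e in concepts)
        if not subsumed:
            kept.append(cui)
    return set(kept)

def cui_recall(discharge_concepts, other_note_concepts):
    discharge_cuis = filter_subsumed(discharge_concepts)
    other_cuis = filter_subsumed(other_note_concepts)
    if not discharge_cuis:
        return 0.0
    return len(discharge_cuis & other_cuis) / len(discharge_cuis)
```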
In order to understand which sections of the discharge summaries would be the easiest or most difficult to summarize, we performed the same CUI overlap comparison for the chief complaint, major surgical or invasive procedure, discharge medication, and history of present illness sections of the discharge note separately. We calculated which fraction of the CUIs in each section were located in the rest of the patient's note for a specific hospital stay. We also calculated what percent of the genders recorded in the discharge summary were also recorded in the structured data for the patient. For each of the 55,177 discharge summary reports in the MIMIC database, we calculated what fraction of the CUIs in the discharge summary could be found in the remaining notes for the patient's current admission ( INLINEFORM0 ) and in their entire longitudinal medical record ( INLINEFORM1 ). Table TABREF13 shows the CUI recall averaged across all discharge summaries by both subject_id and hadm_id. The low recall suggests that clinicians may incorporate information in the discharge note that had not been previously documented in the EHR. Figure FIGREF11 plots the relationship between the number of non-discharge notes for each patient and the CUI recall (top) and the number of total CUIs in non-discharge notes and the CUI recall (middle). The number of CUIs is a proxy for the length of the notes, and as expected, the CUI recall tends to be higher in patients with more and longer notes. The bottom panel in Figure FIGREF11 demonstrates that recall is not correlated with the patient's length of stay outside the ICU, which indicates that our upper bound calculation is not severely impacted by only having access to the patient's notes from their stay in the ICU. Finally, Table TABREF14 shows the recall for the sex, chief complaint, procedure, discharge medication, and HPI discharge summary sections averaged across all the discharge summaries. The procedure section has the highest recall of 0.807, which is understandable because procedures undergone during an inpatient stay are most likely to be documented in an EHR. The recall for each of these five sections is much higher than the overall recall in Table TABREF13 , suggesting that extractive summarization may be easier for some sections of the discharge note. Overall, this upper bound analysis suggests that we may not be able to recreate a discharge summary with extractive summarization alone. While CUI comparison allows for comparing medically relevant concepts, cTAKES's CUI labelling process is not perfect, and further work, perhaps through sophisticated regular expressions, is needed to define the limits of extractive summarization. Labeling History of Present Illness Notes We developed a classifier to label topics in the history of present illness (HPI) notes, including demographics, diagnosis history, and symptoms/signs, among others. A random sample of 515 history of present illness notes was taken, and each of the notes was manually annotated by one of eight annotators using the software Multi-document Annotation Environment (MAE) BIBREF20 . MAE provides an interactive GUI for annotators and exports the results of each annotation as an XML file with text spans and their associated labels for additional processing. 40% of the HPI notes were labeled by clinicians and 60% by non-clinicians. Table TABREF5 shows the instructions given to the annotators for each of the 10 labels. 
The entire HPI note was labeled with one of the labels, and instructions were given to label each clause in a sentence with the same label when possible. Our LSTM model was adopted from prior work by Dernoncourt et al BIBREF21 . Whereas the Dernoncourt model jointly classified each sentence in a medical abstract, here we jointly classify each word in the HPI summary. Our model consists of four layers: a token embedding layer, a word contextual representation layer, a label scoring layer, and a label sequence optimization layer (Figure FIGREF9 ). In the following descriptions, lowercase italics is used to denote scalars, lowercase bold is used to denote vectors, and uppercase italics is used to denote matrices. Token Embedding Layer: In the token embedding layer, pretrained word embeddings are combined with learned character embeddings to create a hybrid token embedding for each word in the HPI note. The word embeddings, which are direct mappings from word INLINEFORM0 to vector INLINEFORM1 , were pretrained using word2vec BIBREF22 , BIBREF23 , BIBREF24 on all of the notes in MIMIC (v30) and only the discharge notes. Both the continuous bag of words (CBOW) and skip gram models were explored. Let INLINEFORM0 be the sequence of characters comprising the word INLINEFORM1 . Each character is mapped to its embedding INLINEFORM2 , and all embeddings are input into a bidirectional LSTM, which ultimately outputs INLINEFORM3 , the character embedding of the word INLINEFORM4 . The output of the token embedding layer is the vector e, which is the result of concatenation of the word embedding, t, and the character embedding, c. Contextual Representation Layer: The contextual representation layer takes as input the sequence of word embeddings, INLINEFORM0 , and outputs an embedding of the contextual representation for each word in the HPI note. The word embeddings are fed into a bi-directional LSTM, which outputs INLINEFORM1 , a concatenation of the hidden states of the two LSTMs for each word. Label Scoring Layer: At this point, each word INLINEFORM0 is associated with a hidden representation of the word, INLINEFORM1 . In the label scoring layer, we use a fully connected neural network with one hidden layer to output a score associated with each of the 10 categories for each word. Let INLINEFORM2 and INLINEFORM3 . We can compute a vector of scores s = INLINEFORM4 where the ith component of s is the score of class i for a given word. Label Sequence Optimization Layer: The Label Sequence Optimization Layer computes the probability of a labeling sequence and finds the sequence with the highest probability. In order to condition the label for each word on the labels of its neighbors, we employ a linear chain conditional random field (CRF) to define a global score, INLINEFORM0 , for a sequence of words and their associated scores INLINEFORM1 and labels, INLINEFORM2 : DISPLAYFORM0 where T is a transition matrix INLINEFORM0 INLINEFORM1 and INLINEFORM2 are vectors of scores that describe the cost of beginning or ending with a label. The probability of a sequence of labels is calculated by applying a softmax layer to obtain a probability of a sequence of labels: DISPLAYFORM0 Cross-entropy loss, INLINEFORM0 , is used as the objective function where INLINEFORM1 is the correct sequence of labels and the probability INLINEFORM2 is calculated according to the CRF. We evaluated our model on the 515 annotated history of present illness notes, which were split in a 70% train set, 15% development set, and a 15% test set. 
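The four layers described above can be summarized in a compact PyTorch sketch. Layer sizes are illustrative, pretrained embedding loading is omitted, and the CRF is reduced to the global sequence score from the label sequence optimization layer (the begin/end terms and the partition function needed for the softmax over label sequences are left out), so this is a sketch of the architecture rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class HPITagger(nn.Module):
    """Sketch of the four-layer word tagger: hybrid token embeddings,
    a contextual BiLSTM, a per-word label scorer, and a linear-chain
    CRF score over label sequences. Sizes are illustrative."""
    def __init__(self, n_chars, char_dim=25, word_dim=300, hidden=100, n_labels=10):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(word_dim + 2 * char_dim, hidden,
                                 bidirectional=True, batch_first=True)
        self.scorer = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(),
                                    nn.Linear(hidden, n_labels))
        self.transitions = nn.Parameter(torch.randn(n_labels, n_labels))

    def token_embeddings(self, word_vecs, char_ids):
        # word_vecs: (n_words, word_dim) pretrained or learned word vectors
        # char_ids:  (n_words, max_chars) character indices per word
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        char_vecs = torch.cat([h[0], h[1]], dim=-1)        # (n_words, 2*char_dim)
        return torch.cat([word_vecs, char_vecs], dim=-1)   # hybrid token embedding

    def label_scores(self, word_vecs, char_ids):
        tokens = self.token_embeddings(word_vecs, char_ids).unsqueeze(0)
        context, _ = self.word_lstm(tokens)                # contextual representation
        return self.scorer(context).squeeze(0)             # (n_words, n_labels)

    def sequence_score(self, scores, labels):
        # Global CRF score: per-word emission scores plus label-to-label
        # transitions; labels is a list or LongTensor of label indices.
        emit = scores[torch.arange(len(labels)), labels].sum()
        trans = self.transitions[labels[:-1], labels[1:]].sum()
        return emit + trans
```

Training would then minimize the cross-entropy of the CRF sequence probability, as described in the label sequence optimization layer.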
The model is trained using the Adam algorithm for gradient-based optimization BIBREF25 with an initial learning rate of 0.001 and decay of 0.9. A dropout rate of 0.5 was applied for regularization, and the batch size was 20. The model ran for 20 epochs and was halted early if there was no improvement after 3 epochs. We evaluated the impact of character embeddings, the choice of pretrained w2v embeddings, and the addition of learned word embeddings on model performance on the dev set. We report performance of the best performing model on the test set. Table TABREF16 compares dev set performance of the model using various pretrained word embeddings, with and without character embeddings, and with pretrained versus learned word embeddings. The first row in each section is the performance of the model architecture described in the methods section for comparison. Models using word embeddings trained on the discharge summaries performed better than word embeddings trained on all MIMIC notes, likely because the discharge summary word embeddings better captured word use in discharge summaries alone. Interestingly, the continuous bag of words embeddings outperformed skip gram embeddings, which is surprising because the skip gram architecture typically works better for infrequent words BIBREF26 . As expected, inclusion of character embeddings increases performance by approximately 3%. The model with word embeddings learned during training achieves the highest performance on the dev set (0.886), which may be because the pretrained word embeddings were trained on a previous version of MIMIC. As a result, some words in the discharge summaries, such as misspelled words or rarer diseases and medications, did not have associated word embeddings. Performing a simple spell correction on out-of-vocabulary words may improve performance with pretrained word embeddings. We evaluated the best performing model on the test set. The Learned Word Embeddings model achieved an accuracy of 0.88 and an F1-score of 0.876 on the test set. Table TABREF17 shows the precision, recall, F1 score, and support for each of the ten labels, and Figure FIGREF18 shows the confusion matrix illustrating which labels were frequently misclassified. The demographics and patient movement labels achieved the highest F1 scores (0.96 and 0.93 respectively) while the vitals/labs and medication history labels had the lowest F1 scores (0.40 and 0.66 respectively). The demographics section consistently occurs at the beginning of the HPI note, and the patient movement section uses a limited vocabulary (transferred, admitted, etc.), which may explain their high F1 scores. On the other hand, the vitals/labs and medication history sections had the lowest support, which may explain why they were more challenging to label. Words that belonged to the diagnosis history, patient movement, and procedure/results sections were frequently labeled as symptoms/signs (Figure FIGREF18 ). Diagnosis history sections may be labeled frequently as symptoms/signs because symptoms/diseases can either be described as part of the patient's diagnosis history or as current symptoms depending on when the symptom/disease occurred. However, many of the misclassification errors may be due to inconsistency in manual labelling among annotators. For example, sentences describing both patient movement and patient symptoms (e.g.
"the patient was transferred to the hospital for his hypertension") were labeled entirely as 'patient movement' by some annotators while other annotators labeled the different clauses of the sentence separately as 'patient movement' and 'symptoms/signs.' Further standardization among annotators is needed to avoid these misclassifications. Future work is needed to obtain additional manual annotations where each HPI note is annotated by multiple annotators. This will allow for calculation of Cohen's kappa, which measures inter-annotator agreement, and comparison of clinician and non-clinician annotator reliability. Future work is also needed to better understand commonly mislabeled categories and explore alternative model architectures. Here we perform word level label prediction, which can result in phrases that contain multiple labels. For example, the phrase "history of neck pain" can be labeled with both 'diagnosis history' and 'symptoms/signs' labels. Post-processing is needed to create a final label prediction for each phrase. While phrase level prediction may resolve these challenges, it is difficult to segment the HPI note into phrases for prediction, as a single phrase may truly contain multiple labels. Segmentation of sentences by punctuation, conjunctions, and prepositions may yield the best phrase chunker for discharge summary text. Finally, supplementing the word embeddings in our LSTM model with CUIs may further improve performance. While word embeddings do well in learning the contextual context of words, CUIs allow for more explicit incorporation of medical domain expertise. By concatenating the CUI for each word with its hybrid token embedding, we may be able to leverage both data driven and ontology driven approaches. Conclusion In this paper we developed a CUI-based upper bound on extractive summarization of discharge summaries and presented a NN architecture that jointly classifies words in history of present illness notes. We demonstrate that our model can achieve excellent performance on a small dataset with known heterogeneity among annotators. This model can be applied to the 55,000 discharge summaries in MIMIC to create a dataset for evaluation of extractive summarization methods. Acknowledgments We would like to thank our annotators, Andrew Goldberg, Laurie Alsentzer, Elaine Goldberg, Andy Alsentzer, Grace Lo, and Josh Donis. We would also like to acknowledge Pete Szolovits for his guidance and for providing the pretrained word embeddings and Tristan Naumann for providing the MIMIC CUIs.
[ "MIMIC-III", "MIMIC-III" ]
2,992
qasper_e
en
null
c21ac70b9fefd1be0058d56b0eaa81d065271558b901364a
How long is the dataset for each step of hierarchy?
Introduction Offensive content has become pervasive in social media and a reason of concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which then can be deleted or set aside for human moderation. In the last few years, there have been several studies published on the application of computational methods to deal with this problem. Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 . Recently, Waseem et. al. ( BIBREF12 ) acknowledged the similarities among prior work and discussed the need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity or towards a generalized group and whether the abusive content is explicit or implicit. Wiegand et al. ( BIBREF11 ) followed this trend as well on German tweets. In their evaluation, they have a task to detect offensive vs not offensive tweets and a second task for distinguishing between the offensive tweets as profanity, insult, or abuse. However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target. Therefore, we expand on these ideas by proposing a a hierarchical three-level annotation model that encompasses: Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows: Related Work Different abusive and offense language identification sub-tasks have been explored in the past few years including aggression identification, bullying detection, hate speech, toxic comments, and offensive language. Aggression identification: The TRAC shared task on Aggression Identification BIBREF2 provided participants with a dataset containing 15,000 annotated Facebook posts and comments in English and Hindi for training and validation. For testing, two different sets, one from Facebook and one from Twitter were provided. Systems were trained to discriminate between three classes: non-aggressive, covertly aggressive, and overtly aggressive. Bullying detection: Several studies have been published on bullying detection. One of them is the one by xu2012learning which apply sentiment analysis to detect bullying in tweets. xu2012learning use topic models to to identify relevant topics in bullying. Another related study is the one by dadvar2013improving which use user-related features such as the frequency of profanity in previous messages to improve bullying detection. Hate speech identification: It is perhaps the most widespread abusive language detection sub-task. There have been several studies published on this sub-task such as kwok2013locate and djuric2015hate who build a binary classifier to distinguish between `clean' comments and comments containing hate speech and profanity. More recently, Davidson et al. 
davidson2017automated presented the hate speech detection dataset containing over 24,000 English tweets labeled as non-offensive, hate speech, and profanity. Offensive language: The GermEval BIBREF11 shared task focused on offensive language identification in German tweets. A dataset of over 8,500 annotated tweets was provided for a coarse-grained binary classification task in which systems were trained to discriminate between offensive and non-offensive tweets, and a second task where the organizers broke down the offensive class into three classes: profanity, insult, and abuse. Toxic comments: The Toxic Comment Classification Challenge was an open competition at Kaggle which provided participants with comments from Wikipedia labeled in six classes: toxic, severe toxic, obscene, threat, insult, identity hate. While each of these sub-tasks tackles a particular type of abuse or offense, they share similar properties and the hierarchical annotation model proposed in this paper aims to capture this. Considering that, for example, an insult targeted at an individual is commonly known as cyberbullying and that insults targeted at a group are known as hate speech, we argue that OLID's hierarchical annotation model makes it a useful resource for various offensive language identification sub-tasks. Hierarchically Modelling Offensive Content In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and the type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 . Level A: Offensive Language Detection Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets. Not Offensive (NOT): Posts that do not contain offense or profanity; Offensive (OFF): We label a post as offensive if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct. This category includes insults, threats, and posts containing profane language or swear words. Level B: Categorization of Offensive Language Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (UNT) insults and threats. Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer); Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language. Level C: Offensive Language Target Identification Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH). Individual (IND): Posts targeting an individual. It can be a famous person, a named individual, or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbullying. Group (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech. Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue). Data Collection The data included in OLID has been collected from Twitter.
We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation, which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and a far-right (@BreitBartNews) news account because there tends to be political offense in the comments. One of the best keywords for retrieving offensive content was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive, so we tried different strategies to keep a reasonable number of tweets in the offensive class, amounting to around 30% of the dataset, including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content, we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords, which tend to be higher in offensive content, and sampled our dataset such that 50% of the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fleiss' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT), indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted with placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data via crowdsourcing on the platform Figure Eight. We ensured data quality by 1) only accepting annotations from individuals who were experienced on the platform, and 2) using test questions to discard annotations from individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 . Experiments and Evaluation We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead.
It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM. Our models are trained on the training data, and evaluated by predicting the labels for the held-out test set. The distribution is described in Table TABREF15 . We evaluate and compare the models using the macro-averaged F1-score as the label distribution is highly imbalanced. Per-class Precision (P), Recall (R), and F1-score (F1), also with other averaged metrics are also reported. The models are compared against baselines of predicting all labels as the majority or minority classes. Offensive Language Detection The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80. Categorization of Offensive Language In this experiment, the two systems were trained to discriminate between insults and threats (TIN) and untargeted (UNT) offenses, which generally refer to profanity. The results are shown in Table TABREF19 . The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying target and threats (TIN) than untargeted offenses (UNT). Offensive Language Target Identification The results of the offensive target identification experiment are reported in Table TABREF20 . Here the systems were trained to distinguish between three targets: a group (GRP), an individual (IND), or others (OTH). All three models achieved similar results far surpassing the random baselines, with a slight performance edge for the neural models. The performance of all systems for the OTH class is 0. This poor performances can be explained by two main factors. First, unlike the two other classes, OTH is a heterogeneous collection of targets. It includes offensive tweets targeted at organizations, situations, events, etc. making it more challenging for systems to learn discriminative properties of this class. Second, this class contains fewer training instances than the other two. There are only 395 instances in OTH, and 1,075 in GRP, and 2,407 in IND. Conclusion and Future Work This paper presents OLID, a new dataset with annotation of type and target of offensive language. OLID is the official dataset of the shared task SemEval 2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval) BIBREF16 . In OffensEval, each annotation level in OLID is an independent sub-task. The dataset contains 14,100 tweets and is released freely to the research community. To the best of our knowledge, this is the first dataset to contain annotation of type and target of offenses in social media and it opens several new avenues for research in this area. 
We present baseline experiments using SVMs and neural networks to identify the offensive tweets, discriminate between insults, threats, and profanity, and finally to identify the target of the offensive messages. The results show that this is a challenging task. A CNN-based sentence classifier achieved the best results in all three sub-tasks. In future work, we would like to make a cross-corpus comparison of OLID and datasets annotated for similar tasks such as aggression identification BIBREF2 and hate speech detection BIBREF8 . This comparison is, however, far from trivial as the annotation of OLID is different. Acknowledgments The research presented in this paper was partially supported by an ERAS fellowship awarded by the University of Wolverhampton.
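As a sketch of the evaluation protocol described in the experiments section above (macro-averaged F1 with all-majority and all-minority baselines), the following uses scikit-learn; the label names follow level A and the helper names are ours rather than the authors' code.

```python
# Sketch of the evaluation protocol: macro-averaged F1 on the held-out test
# set, with all-majority and all-minority label baselines for comparison.
from collections import Counter
from sklearn.metrics import f1_score, classification_report

def evaluate(y_true, y_pred):
    print(classification_report(y_true, y_pred, digits=3))  # per-class P/R/F1
    return f1_score(y_true, y_pred, average="macro")

def baseline_predictions(y_true):
    counts = Counter(y_true).most_common()
    majority, minority = counts[0][0], counts[-1][0]
    return [majority] * len(y_true), [minority] * len(y_true)

# Example with level-A labels:
# y_true = ["NOT", "NOT", "OFF", "NOT", "OFF"]
# maj, mino = baseline_predictions(y_true)
# print(evaluate(y_true, maj), evaluate(y_true, mino))
```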
[ "Level A: 14100 Tweets\nLevel B: 4640 Tweets\nLevel C: 4089 Tweets" ]
2,255
qasper_e
en
null
45104af27b1eaf72961c861eeae1013869612e3d5f258927
What useful information does attention capture?
Introduction Neural machine translation (NMT) has gained a lot of attention recently due to its substantial improvements in machine translation quality achieving state-of-the-art performance for several languages BIBREF0 , BIBREF1 , BIBREF2 . The core architecture of neural machine translation models is based on the general encoder-decoder approach BIBREF3 . Neural machine translation is an end-to-end approach that learns to encode source sentences into distributed representations and decode these representations into sentences in the target language. Among the different neural MT models, attentional NMT BIBREF4 , BIBREF5 has become popular due to its capability to use the most relevant parts of the source sentence at each translation step. This capability also makes the attentional model superior in translating longer sentences BIBREF4 , BIBREF5 . Figure FIGREF1 shows an example of how attention uses the most relevant source words to generate a target word at each step of the translation. In this paper we focus on studying the relevance of the attended parts, especially cases where attention is `smeared out' over multiple source words where their relevance is not entirely obvious, see, e.g., “would" and “like" in Figure FIGREF1 . Here, we ask whether these are due to errors of the attention mechanism or are a desired behavior of the model. Since the introduction of attention models in neural machine translation BIBREF4 various modifications have been proposed BIBREF5 , BIBREF6 , BIBREF7 . However, to the best of our knowledge there is no study that provides an analysis of what kind of phenomena is being captured by attention. There are some works that have looked to attention as being similar to traditional word alignment BIBREF8 , BIBREF6 , BIBREF7 , BIBREF9 . Some of these approaches also experimented with training the attention model using traditional alignments BIBREF8 , BIBREF7 , BIBREF9 . liu-EtAl:2016:COLING have shown that attention could be seen as a reordering model as well as an alignment model. In this paper, we focus on investigating the differences between attention and alignment and what is being captured by the attention mechanism in general. The questions that we are aiming to answer include: Is the attention model only capable of modelling alignment? And how similar is attention to alignment in different syntactic phenomena? Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. However, it captures other information rather than only the translational equivalent in the case of verbs. This paper makes the following contributions: 1) We provide a detailed comparison of attention in NMT and word alignment. 2) We show that while different attention mechanisms can lead to different degrees of compliance with respect to word alignments, global compliance is not always helpful for word prediction. 3) We show that attention follows different patterns depending on the type of the word being generated. 4) We demonstrate that attention does not always comply with alignment. We provide evidence showing that the difference between attention and alignment is due to attention model capability to attend the context words influencing the current word translation. 
Related Work liu-EtAl:2016:COLING investigate how training the attention model in a supervised manner can benefit machine translation quality. To this end they use traditional alignments obtained by running automatic alignment tools (GIZA++ BIBREF10 and fast_align BIBREF11 ) on the training data and feed it as ground truth to the attention network. They report some improvements in translation quality arguing that the attention model has learned to better align source and target words. The approach of training attention using traditional alignments has also been proposed by others BIBREF9 , BIBREF8 . chen2016guided show that guided attention with traditional alignment helps in the domain of e-commerce data which includes lots of out of vocabulary (OOV) product names and placeholders, but not much in the other domains. alkhouli-EtAl:2016:WMT have separated the alignment model and translation model, reasoning that this avoids propagation of errors from one model to the other as well as providing more flexibility in the model types and training of the models. They use a feed-forward neural network as their alignment model that learns to model jumps in the source side using HMM/IBM alignments obtained by using GIZA++. shi-padhi-knight:2016:EMNLP2016 show that various kinds of syntactic information are being learned and encoded in the output hidden states of the encoder. The neural system for their experimental analysis is not an attentional model and they argue that attention does not have any impact for learning syntactic information. However, performing the same analysis for morphological information, belinkov2017neural show that attention has also some effect on the information that the encoder of neural machine translation system encodes in its output hidden states. As part of their analysis they show that a neural machine translation system that has an attention model can learn the POS tags of the source side more efficiently than a system without attention. Recently, koehn2017six carried out a brief analysis of how much attention and alignment match in different languages by measuring the probability mass that attention gives to alignments obtained from an automatic alignment tool. They also report differences based on the most attended words. The mixed results reported by chen2016guided, alkhouli-EtAl:2016:WMT, liu-EtAl:2016:COLING on optimizing attention with respect to alignments motivates a more thorough analysis of attention models in NMT. Attention Models This section provides a short background on attention and discusses two most popular attention models which are also used in this paper. The first model is a non-recurrent attention model which is equivalent to the “global attention" method proposed by DBLPjournalscorrLuongPM15. The second attention model that we use in our investigation is an input-feeding model similar to the attention model first proposed by bahdanau-EtAl:2015:ICLR and turned to a more general one and called input-feeding by DBLPjournalscorrLuongPM15. Below we describe the details of both models. Both non-recurrent and input-feeding models compute a context vector INLINEFORM0 at each time step. Subsequently, they concatenate the context vector to the hidden state of decoder and pass it through a non-linearity before it is fed into the softmax output layer of the translation network. DISPLAYFORM0 The difference of the two models lays in the way they compute the context vector. 
In the non-recurrent model, the hidden state of the decoder is compared to each hidden state of the encoder. Often, this comparison is realized as the dot product of vectors. Then the comparison result is fed to a softmax layer to compute the attention weight. DISPLAYFORM0 DISPLAYFORM1 Here INLINEFORM0 is the hidden state of the decoder at time INLINEFORM1 , INLINEFORM2 is INLINEFORM3 th hidden state of the encoder and INLINEFORM4 is the length of the source sentence. Then the computed alignment weights are used to compute a weighted sum over the encoder hidden states which results in the context vector mentioned above: DISPLAYFORM0 The input-feeding model changes the context vector computation in a way that at each step INLINEFORM0 the context vector is aware of the previously computed context INLINEFORM1 . To this end, the input-feeding model feeds back its own INLINEFORM2 to the network and uses the resulting hidden state instead of the context-independent INLINEFORM3 , to compare to the hidden states of the encoder. This is defined in the following equations: DISPLAYFORM0 DISPLAYFORM1 Here, INLINEFORM0 is the function that the stacked LSTM applies to the input, INLINEFORM1 is the last generated target word, and INLINEFORM2 is the output of previous time step of the input-feeding network itself, meaning the output of Equation EQREF2 in the case that context vector has been computed using INLINEFORM3 from Equation EQREF7 . Comparing Attention with Alignment As mentioned above, it is a commonly held assumption that attention corresponds to word alignments. To verify this, we investigate whether higher consistency between attention and alignment leads to better translations. Measuring Attention-Alignment Accuracy In order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function. For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments. The statistics of the data are given in Table TABREF8 . We convert the hard alignments to soft alignments using Equation EQREF10 . For unaligned words, we first assume that they have been aligned to all the words in the source side and then do the conversion. DISPLAYFORM0 Here INLINEFORM0 is the set of source words aligned to target word INLINEFORM1 and INLINEFORM2 is the number of source words in the set. After conversion of the hard alignments to soft ones, we compute the attention loss as follows: DISPLAYFORM0 Here INLINEFORM0 is the source sentence and INLINEFORM1 is the weight of the alignment link between source word INLINEFORM2 and the target word (see Equation EQREF10 ). INLINEFORM3 is the attention weight INLINEFORM4 (see Equation EQREF4 ) of the source word INLINEFORM5 , when generating the target word INLINEFORM6 . In our analysis, we also look into the relation between translation quality and the quality of the attention with respect to the alignments. For measuring the quality of attention, we use the attention loss defined in Equation EQREF11 . As a measure of translation quality, we choose the loss between the output of our NMT system and the reference translation at each translation step, which we call word prediction loss. The word prediction loss for word INLINEFORM0 is logarithm of the probability given in Equation EQREF12 . 
DISPLAYFORM0 Here INLINEFORM0 is the source sentence, INLINEFORM1 is target word at time step INLINEFORM2 , INLINEFORM3 is the target history given by the reference translation and INLINEFORM4 is given by Equation EQREF2 for either non-recurrent or input-feeding attention models. Spearman's rank correlation is used to compute the correlation between attention loss and word prediction loss: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the ranks of the attention losses and word prediction losses, respectively, INLINEFORM2 is the covariance between two input variables, and INLINEFORM3 and INLINEFORM4 are the standard deviations of INLINEFORM5 and INLINEFORM6 . If there is a close relationship between word prediction quality and consistency of attention versus alignment, then there should be high correlation between word prediction loss and attention loss. Figure FIGREF13 shows an example with different levels of consistency between attention and word alignments. For the target words “will" and “come" the attention is not focused on the manually aligned word but distributed between the aligned word and other words. The focus of this paper is examining cases where attention does not follow alignment, answering the questions whether those cases represent errors or desirable behavior of the attention model. Measuring Attention Concentration As another informative variable in our analysis, we look into the attention concentration. While most word alignments only involve one or a few words, attention can be distributed more freely. We measure the concentration of attention by computing the entropy of the attention distribution: DISPLAYFORM0 Empirical Analysis of Attention Behaviour We conduct our analysis using the two different attention models described in Section SECREF3 . Our first attention model is the global model without input-feeding as introduced by DBLPjournalscorrLuongPM15. The second model is the input-feeding model BIBREF5 , which uses recurrent attention. Our NMT system is a unidirectional encoder-decoder system as described in BIBREF5 , using 4 recurrent layers. We trained the systems with dimension size of 1,000 and batch size of 80 for 20 epochs. The vocabulary for both source and target side is set to be the 30K most common words. The learning rate is set to be 1 and a maximum gradient norm of 5 has been used. We also use a dropout rate of 0.3 to avoid overfitting. Impact of Attention Mechanism We train both of the systems on the WMT15 German-to-English training data, see Table TABREF18 for some statistics. Table TABREF17 shows the BLEU scores BIBREF12 for both systems on different test sets. Since we use POS tags and dependency roles in our analysis, both of which are based on words, we chose not to use BPE BIBREF13 which operates at the sub-word level. We report alignment error rate (AER) BIBREF14 , which is commonly used to measure alignment quality, in Table TABREF20 to show the difference between attentions and human alignments provided by RWTH German-English dataset. To compute AER over attentions, we follow DBLPjournalscorrLuongPM15 to produce hard alignments from attentions by choosing the most attended source word for each target word. We also use GIZA++ BIBREF10 to produce automatic alignments over the data set to allow for a comparison between automatically generated alignments and the attentions generated by our systems. GIZA++ is run in both directions and alignments are symmetrized using the grow-diag-final-and refined alignment heuristic. 
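The analysis quantities defined above can be sketched in a few lines of NumPy: the hard-to-soft alignment conversion, the cross-entropy attention loss, and the attention entropy. The epsilon and function names are our additions; scipy.stats.spearmanr can then be used to correlate the resulting attention losses with word prediction losses, as described above.

```python
import numpy as np

def soft_alignment(aligned_source_indices, source_len):
    """Convert a hard alignment (set of source positions aligned to one
    target word) into a probability vector; unaligned target words are
    treated as aligned to every source word."""
    idx = list(aligned_source_indices) or list(range(source_len))
    weights = np.zeros(source_len)
    weights[idx] = 1.0 / len(idx)
    return weights

def attention_loss(soft_align, attention, eps=1e-12):
    """Cross entropy between the soft alignment and the attention
    distribution for one target word."""
    return -np.sum(soft_align * np.log(attention + eps))

def attention_entropy(attention, eps=1e-12):
    """Concentration of the attention distribution for one target word."""
    return -np.sum(attention * np.log(attention + eps))

# Example: target word aligned to source position 2 in a 4-word sentence.
# align = soft_alignment([2], 4)
# att = np.array([0.05, 0.15, 0.7, 0.1])
# print(attention_loss(align, att), attention_entropy(att))
```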
As shown in Table TABREF20 , the input-feeding system not only achieves a higher BLEU score, but also uses attentions that are closer to the human alignments. Table TABREF21 compares input-feeding and non-recurrent attention in terms of attention loss computed using Equation EQREF11 . Here the losses between the attention produced by each system and the human alignments is reported. As expected, the difference in attention losses are in line with AER. The difference between these comparisons is that AER only takes the most attended word into account while attention loss considers the entire attention distribution. Alignment Quality Impact on Translation Based on the results in Section SECREF19 , one might be inclined to conclude that the closer the attention is to the word alignments the better the translation. However, chen2016guided, liu-EtAl:2016:COLING, alkhouli-EtAl:2016:WMT report mixed results by optimizing their NMT system with respect to word prediction and alignment quality. These findings warrant a more fine-grained analysis of attention. To this end, we include POS tags in our analysis and study the patterns of attention based on POS tags of the target words. We choose POS tags because they exhibit some simple syntactic characteristics. We use the coarse grained universal POS tags BIBREF15 given in Table TABREF25 . To better understand how attention accuracy affects translation quality, we analyse the relationship between attention loss and word prediction loss for individual part-of-speech classes. Figure FIGREF22 shows how attention loss differs when generating different POS tags. One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN. Considering this difference and the observations in Section SECREF19 , a natural follow-up would be to focus on getting the attention of verbs to be closer to alignments. However, Figure FIGREF22 shows that the average word prediction loss for verbs is actually smaller compared to the loss for nouns. In other words, although the attention for verbs is substantially more inconsistent with the word alignments than for nouns, the NMT system translates verbs more accurately than nouns on average. To formalize this relationship we compute Spearman's rank correlation between word prediction loss and attention loss, based on the POS tags of the target side, for the input-feeding model, see Figure FIGREF27 . The low correlation for verbs confirms that attention to other parts of source sentence rather than the aligned word is necessary for translating verbs and that attention does not necessarily have to follow alignments. However, the higher correlation for nouns means that consistency of attention with alignments is more desirable. This could, in a way, explain the mixed result reported for training attention using alignments BIBREF9 , BIBREF7 , BIBREF8 . Especially the results by chen2016guided in which large improvements are achieved for the e-commerce domain which contains many OOV product names and placeholders, but no or very weak improvements were achieved over common domains. Attention Concentration In word alignment, most target words are aligned to one source word. 
The average number of source words aligned to nouns and verbs is 1.1 and 1.2 respectively. To investigate to what extent this also holds for attention we measure the attention concentration by computing the entropy of the attention distribution, see Equation EQREF16 . Figure FIGREF28 shows the average entropy of attention based on POS tags. As shown, nouns have one of the lowest entropies meaning that on average the attention for nouns tends to be concentrated. This also explains the closeness of the attention to alignments for nouns. In addition, the correlation between attention entropy and attention loss in case of nouns is high as shown in Figure FIGREF28 . This means that attention entropy can be used as a measure of closeness of attention to alignment in the case of nouns. The higher attention entropy for verbs, in Figure FIGREF28 , shows that the attention is more distributed compared to nouns. The low correlation between attention entropy and word prediction loss (see Figure FIGREF32 ) shows that attention concentration is not required when translating into verbs. This also confirms that the correct translation of verbs requires the systems to pay attention to different parts of the source sentence. Another interesting observation here is the low correlation for pronouns (PRON) and particles (PRT), see Figure FIGREF32 . As can be seen in Figure FIGREF28 , these tags have more distributed attention comparing to nouns, for example. This could either mean that the attention model does not know where to focus or it deliberately pays attention to multiple, somehow relevant, places to be able to produce a better translation. The latter is supported by the relatively low word prediction losses, shown in the Figure FIGREF22 . Attention Distribution To further understand under which conditions attention is paid to words other than the aligned words, we study the distribution of attention over the source words. First, we measure how much attention is paid to the aligned words for each POS tag, on average. To this end, we compute the percentage of the probability mass that the attention model has assigned to aligned words for each POS tag, see Table TABREF35 . One can notice that less than half of the attention is paid to alignment points for most of the POS tags. To examine how the rest of attention in each case has been distributed over the source sentence we measure the attention distribution over dependency roles in the source side. We first parse the source side of RWTH data using the ParZu parser BIBREF16 . Then we compute how the attention probability mass given to the words other than the alignment points, is distributed over dependency roles. Table TABREF33 gives the most attended roles for each POS tag. Here, we focus on POS tags discussed earlier. One can see that the most attended roles when translating to nouns include adjectives and determiners and in the case of translating to verbs, it includes auxiliary verbs, adverbs (including negation), subjects, and objects. Conclusion In this paper, we have studied attention in neural machine translation and provided an analysis of the relation between attention and word alignment. We have shown that attention agrees with traditional alignment to a certain extent. However, this differs substantially by attention mechanism and the type of the word being generated. We have shown that attention has different patterns based on the POS tag of the target word. 
The concentrated pattern of attention and the relatively high correlations for nouns show that training the attention with explicit alignment labels is useful for generating nouns. However, this is not the case for verbs, since the large portion of attention being paid to words other than alignment points, is already capturing other relevant information. Training attention with alignments in this case will force the attention model to forget these useful information. This explains the mixed results reported when guiding attention to comply with alignments BIBREF9 , BIBREF7 , BIBREF8 . Acknowledgments This research was funded in part by the Netherlands Organization for Scientific Research (NWO) under project numbers 639.022.213 and 612.001.218.
[ "it captures other information rather than only the translational equivalent in the case of verbs", "Alignment points of the POS tags." ]
3,372
qasper_e
en
null
7de339432271d1324e9ea25f6321364528c6fd2f8ea2e3d1
what were the baselines?
Introduction Emotion detection has long been a topic of interest to scholars in natural language processing (NLP) domain. Researchers aim to recognize the emotion behind the text and distribute similar ones into the same group. Establishing an emotion classifier can not only understand each user's feeling but also be extended to various application, for example, the motivation behind a user's interests BIBREF0. Based on releasing of large text corpus on social media and the emotion categories proposed by BIBREF1, BIBREF2, numerous models have provided and achieved fabulous precision so far. For example, DeepMoji BIBREF3 which utilized transfer learning concept to enhance emotions and sarcasm understanding behind the target sentence. CARER BIBREF4 learned contextualized affect representations to make itself more sensitive to rare words and the scenario behind the texts. As methods become mature, text-based emotion detecting applications can be extended from a single utterance to a dialogue contributed by a series of utterances. Table TABREF2 illustrates the difference between single utterance and dialogue emotion recognition. The same utterances in Table TABREF2, even the same person said the same sentence, the emotion it convey may be various, which may depend on different background of the conversation, tone of speaking or personality. Therefore, for emotion detection, the information from preceding utterances in a conversation is relatively critical. In SocialNLP 2019 EmotionX, the challenge is to recognize emotions for all utterances in EmotionLines dataset, a dataset consists of dialogues. According to the needs for considering context at the same time, we develop two classification models, inspired by bidirectional encoder representations from transformers (BERT) BIBREF5, FriendsBERT and ChatBERT. In this paper, we introduce our approaches including causal utterance modeling, model pre-training, and fine-turning. Dataset EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral. If none of the emotion gets more than three votes, the utterance will be marked as “non-neutral”. For the datasets, there are properties worth additional mentioning. Although Friends and EmotionPush share the same data format, they are quite different in nature. Friends is a speech-based dataset which is annotated dialogues from the TV sitcom. It means most of the utterances are generated by the a few main characters. The personality of a character often affects the way of speaking, and therefore “who is the speaker" might provide extra clues for emotion prediction. In contrast, EmotionPush does not have this trait due to the anonymous mechanism. In addition, features such as typo, hyperlink, and emoji that only appear in chat-based data will need some domain-specific techniques to process. Incidentally, the objective of the challenge is to predict the emotion for each utterance. 
According to the EmotionX 2019 specification, only four emotions are selected as our label candidates: Joy, Sadness, Anger, and Neutral. These emotions will be considered during performance evaluation. The technical details will also be introduced and discussed in the following Section SECREF13 and Section SECREF26. Model Description For this challenge, we adapt BERT, which was proposed by BIBREF5, to help the model take the conversational context into account. Technically, BERT, designed as an end-to-end architecture, is a deep pre-trained transformer encoder that dynamically provides language representations, and it has already achieved multiple state-of-the-art results on the GLUE benchmark BIBREF7 and many other tasks. A quick recap of BERT's architecture and its pre-training tasks is given in the following subsections. Model Description ::: Model Architecture BERT, the Bidirectional Encoder Representations from Transformers, consists of several transformer encoder layers that enable the model to extract very deep language features at both the token level and the sentence level. Each transformer encoder contains multi-head self-attention layers that provide the ability to learn multiple attention features for each word from its bidirectional context. The transformer and its self-attention mechanism were proposed by BIBREF8. This self-attention mechanism can be interpreted as a key-value mapping given a query. Given the embedding vectors of the input tokens, the query ($Q$), key ($K$), and value ($V$) are produced by projections with three parameter matrices, where $W^Q \in \mathbb {R}^{d_{{\rm model}} \times d_{k}}, W^K \in \mathbb {R}^{d_{\rm model} \times d_{k}}$ and $W^V \in \mathbb {R}^{d_{\rm model} \times d_{v}}$. The self-attention BIBREF8 is formally represented as ${\rm Attention}(Q, K, V) = {\rm softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$. Here $ d_k = d_v = d_{\rm model} = 1024$ in the BERT large version and 768 in the BERT base version. Once the model can extract attention features, a single self-attention can be extended into multi-head self-attention; this extension allows sub-space features to be extracted at the same time by the multiple heads. Overall, the multi-head attention mechanism is adopted in each transformer encoder, and several encoder layers are stacked together to form a deep transformer encoder. For the model input, BERT allows either one sentence or two sentences together as one input sequence, and the maximum length of the input sequence is 512 tokens. BERT was designed this way to give the model both sentence-level and token-level understanding. In the two-sentence case, a special token ([SEP]) is inserted between the two sentences. In addition, the first input token is a special token ([CLS]), and its corresponding output serves as the classification vector during fine-tuning. The outputs of the last encoder layer corresponding to each input token can be treated as word representations, and the representation of the first token ([CLS]) is taken as the classification (output) representation for further fine-tuning tasks. In BERT, this vector is denoted as $C \in \mathbb {R}^{d_{\rm model}} $, and a classification layer is denoted as $ W \in \mathbb {R}^{K \times d_{\rm model}}$, where $K$ is the number of classification labels. Finally, the prediction $P$ of BERT is represented as $P = {\rm softmax}(CW^T)$. Model Description ::: Pre-training Tasks In pre-training, instead of using unidirectional language models, BERT developed two pre-training tasks: (1) Masked LM (cloze test) and (2) Next Sentence Prediction.
In the first pre-training task, bidirectional language modeling is carried out through cloze-style pre-training. In detail, 15% of the tokens in the input sequence are masked at random and the model must predict those masked tokens. Because the masking is random, the encoder has to learn contextual representations from every given token: the model does not know which part of the input will be masked, so the information of each masked token must be inferred from the remaining tokens. In Next Sentence Prediction, two sentences concatenated together are taken as the model input. Knowing the relationship between sentences is an important ability for good natural language understanding. When generating input sequences, 50% of the time sentence B actually follows sentence A, and the remaining 50% of the time sentence B is picked randomly from the dataset; the model must predict whether sentence B is the true next sentence of sentence A. In this way, attention information is shared between sentences. Such sentence-level understanding is difficult to learn from the first pre-training task (Masked LM) alone, so the NSP task is used as a second training objective to capture cross-sentence relationships. In this competition, given the limited dataset size and the challenge of contextual emotion recognition, we consider BERT with both pre-training tasks a good starting point for capturing how emotion changes during dialogue-like conversation. The second pre-training task in particular may be important for dialogues, where the emotion may vary with the context of consecutive utterances. That is, given a sequence of utterances, the emotion of the current utterance might be influenced by the previous one. Under this assumption, and supported by the experimental results of BERT, we can take sentence A as a one-sentence context and treat sentence B as the target sentence for emotion prediction. The details are described in Section SECREF4. Methodology The main goal of the present work is to predict the emotion of each utterance within a dialogue. We are concerned with four major difficulties: The emotion of an utterance depends not only on its text but also on the interaction that happened earlier. The sources of the two datasets are different: Friends consists of speech-based dialogues and EmotionPush of chat-based dialogues, which gives the datasets different characteristics. There are only $1,000$ dialogues in each training dataset, which is not large enough to train a complex neural model stably. The prediction targets (emotion labels) are highly unbalanced. The proposed approach, which aims to overcome these challenges, is summarized in Figure FIGREF3. The framework can be separated into three steps, described as follows: Methodology ::: Causal Utterance Modeling Given a dialogue $D^{(i)}$ consisting of a sequence of utterances $D^{(i)}=(u^{(i)}_{1}, u^{(i)}_{2}, ..., u^{(i)}_{n})$, where $i$ is the index in the dataset and $n$ is the number of utterances in the dialogue, and in order to preserve the emotional information of both the utterance and the conversation, we rearrange each pair of consecutive utterances $u_{t-1}, u_{t}$ into a single sentence representation $x_{t}$, with the previous utterance serving as context for the target utterance. The corresponding sentence representation corpus is denoted as $X^{(i)}=(x^{(i)}_{1}, x^{(i)}_{2}, ..., x^{(i)}_{n})$. 
Note that the first utterance within a conversation does not have a causal utterance (a previous sentence); its causal utterance is therefore set to [None]. A practical example of the sentence representation is shown in Table TABREF11. Since the characteristics of the two datasets are not identical, we customize the causal utterance modeling strategy for each to refine the information in the text. Friends has two specific properties. First, most dialogues revolve around the six main characters, Rachel, Monica, Phoebe, Joey, Chandler, and Ross; the proportion of utterances spoken by these six roles is as high as $83.4\%$. Second, the personal characteristics of the six characters are very distinct, and each leading role has its own pattern of emotional fluctuation. To make use of these features, we introduce personality tokenization, which helps the model learn the personalities of the six characters. Personality tokenization concatenates the speaker and a says token before the input utterance whenever the speaker is one of the six characters. An example is shown in Table TABREF12. For EmotionPush, the text consists of informal chats that include slang, acronyms, typos, hyperlinks, and emojis. Another characteristic is that specific named entities are tokenized with random indices (e.g., “organization_80”, “person_01”, and “time_12”). We consider some of this informal text to be related to expressing emotion, such as repeated characters, deliberate capitalization, and emoticons (e.g., “:D”, “:(”, and “<3”). Therefore, we keep most informal expressions and only process hyperlinks, empty utterances, and named entities by unifying their tokens. Methodology ::: Model Pre-training Since neither dataset is large enough to train a complex neural model, and since BERT is pre-trained only on formal text, overfitting and domain bias are important considerations when designing the pre-training process. To avoid overfitting on the training data and to increase the understanding of informal text, we adapt BERT and derive two models, FriendsBERT and ChatBERT, with different pre-training tasks carried out before the formal training process for the Friends and EmotionPush datasets, respectively. The pre-training strategies are described below. For pre-training FriendsBERT, we collect the complete scripts of all ten seasons of the Friends TV show from emorynlp, which include 3,107 scenes with 61,309 utterances. All utterances follow the preprocessing methods mentioned above to compose the corpus for the masked language model pre-training task. Consecutive utterances in the same scene are treated as consecutive sentences to pre-train the Next Sentence Prediction task. In the pre-training process, the training loss is the sum of the mean likelihoods of the two pre-training tasks. For pre-training ChatBERT, we pre-train our model on a Twitter dataset, since the text and writing style on Twitter are close to chat text, with both involving many informal words and emoticons. The Twitter emotion dataset, covering the 8 basic emotions of the emotion wheel BIBREF1, was collected through the Twitter streaming API with specific emotion-related hashtags such as #anger, #joy, #cry, and #sad. The hashtags in tweets are treated as emotion labels for model fine-tuning. The tweets were processed following the rules in BIBREF9, BIBREF4, including removing duplicate tweets and requiring the emotion hashtag to appear at the end of the tweet. 
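Pulling together the causal utterance modeling, the [None] placeholder, and the personality tokens described in this section, a minimal sketch of the input construction is shown below. It is written for illustration only: the exact separator and placeholder strings ([SEP], [None], the "says" token) and the function names are assumptions, not the authors' implementation.

```python
FRIENDS_MAIN_CAST = {"Rachel", "Monica", "Phoebe", "Joey", "Chandler", "Ross"}

def personality_prefix(speaker, text):
    # Prepend "<speaker> says:" only for the six main characters (assumed token format).
    if speaker in FRIENDS_MAIN_CAST:
        return f"{speaker} says: {text}"
    return text

def causal_pairs(dialogue, use_personality=False):
    """Turn a dialogue [(speaker, utterance), ...] into (context, target) pairs,
    where the context is the previous utterance and the first context is [None]."""
    pairs = []
    previous = "[None]"
    for speaker, text in dialogue:
        target = personality_prefix(speaker, text) if use_personality else text
        pairs.append((previous, target))
        previous = target
    return pairs

dialogue = [
    ("Joey", "How you doin'?"),
    ("Rachel", "I'm fine."),
    ("Waiter", "Anything else?"),
]
for context, target in causal_pairs(dialogue, use_personality=True):
    print(f"{context}  [SEP]  {target}")
```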
The statistics of the tweets are summarized in Table TABREF17. Each tweet and its corresponding emotion label together form an emotion classification dataset for pre-training. Methodology ::: Fine-tuning Since our emotion recognition task is treated as a sequence-level classification task, the model is fine-tuned on the processed training data. Following the BERT construction, we take the embedding vector that corresponds to the special token [CLS] from the final hidden state of the Transformer encoder. This vector represents the embedding of the corresponding conversation utterances and is denoted as $\mathbf {C} \in \mathbb {R}^{H}$, where $H$ is the embedding size. A dense neural layer serves as the classification layer, with parameters $\mathbf {W} \in \mathbb {R}^{K\times H}$ and $\mathbf {b} \in \mathbb {R}^{K}$, where $K$ is the number of emotion classes. The emotion prediction probabilities $\mathbf {P} \in \mathbb {R}^{K}$ are computed with a softmax activation as $\mathbf {P} = {\rm softmax}(\mathbf {W}\mathbf {C} + \mathbf {b})$. All parameters of BERT and the classification layer are fine-tuned together to minimize the Negative Log Likelihood (NLL) loss, as in Equation (DISPLAY_FORM22), based on the ground-truth emotion label $c$. To tackle the problem of highly unbalanced emotion labels, we apply weighted balanced warming on the NLL loss, as in Equation (DISPLAY_FORM23), during the first epoch of the fine-tuning procedure, where $\mathbf {w}$ are the weights of the corresponding emotion labels $c$, computed from and normalized by the label frequencies. By adding weighted balanced warming to the NLL loss, the model learns to predict the minority emotions (e.g., anger and sadness) earlier, which makes the training process more stable. Since the main evaluation metric, micro F1-score, is affected by the number of samples per label, we apply the weighted balanced warming only in the first epoch to optimize performance. Experiments Since the EmotionX challenge provides gold labels only for the training data, we pick the best-performing model (weights) to predict the test data. In this section, we present the experiments and evaluation results. Experiments ::: Experimental Setup The EmotionX challenge provides $1,000$ dialogues for each of Friends and EmotionPush. In all of our experiments, each dataset is split into the first 800 dialogues for training and the last 200 dialogues for validation. Since the EmotionX challenge considers only four emotions (anger, joy, neutral, and sadness) in the evaluation stage, we simply discard all data points corresponding to other emotions. The details of the emotion distribution are shown in Table TABREF18. The hyperparameters and training setup of our models (FriendsBERT and ChatBERT) are shown in Table TABREF25. Several common and easily implemented methods are selected as baseline embedding methods and classification models. The baseline embedding methods include bag-of-words (BOW), term frequency–inverse document frequency (TFIDF), and neural word embeddings. The classification models include Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with word embeddings initialized from GloVe BIBREF11, and our proposed model. All experimental results are based on the best validation performance. Experiments ::: Performance The experimental results of validation on Friends are shown in Table TABREF19. The proposed model and the baselines are evaluated in terms of Precision (P.), Recall (R.), and F1-measure (F1). 
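Before walking through these results, the class-weighted warm-up of the NLL loss described in the fine-tuning subsection can be sketched as follows. The sketch uses PyTorch, with inverse-frequency weights normalized to sum to one; the exact normalization and scheduling of the weights are assumptions made only for illustration, since the paper states just that the weights are computed and normalized from the label frequencies.

```python
import torch
import torch.nn as nn
from collections import Counter

def inverse_frequency_weights(labels, num_classes):
    # Assumed form: weight each class by the inverse of its frequency,
    # normalized so the weights sum to 1.
    counts = Counter(labels)
    inv = torch.tensor([1.0 / counts.get(c, 1) for c in range(num_classes)])
    return inv / inv.sum()

num_classes = 4                                  # joy, sadness, anger, neutral
train_labels = [3, 3, 3, 0, 0, 1, 2, 3, 3, 0]    # toy label ids

weights = inverse_frequency_weights(train_labels, num_classes)

# CrossEntropyLoss combines log-softmax with NLL, matching the softmax + NLL setup above.
weighted_nll = nn.CrossEntropyLoss(weight=weights)   # first (warm-up) epoch
plain_nll = nn.CrossEntropyLoss()                    # remaining epochs

logits = torch.randn(8, num_classes, requires_grad=True)
targets = torch.randint(0, num_classes, (8,))

def loss_for_epoch(epoch):
    # Weighted balanced warming is applied only in the first epoch.
    return (weighted_nll if epoch == 0 else plain_nll)(logits, targets)

print(loss_for_epoch(0).item(), loss_for_epoch(1).item())
```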
For the traditional baselines, namely BOW and TFIDF, we observe surprisingly high overall F1 scores of around $0.81$; however, the scores for Anger and Sadness are lower. This indicates that traditional approaches tend to predict the labels with large sample sizes, such as Joy and Neutral, but fail to take care of the scarce labels even when an ensemble random forest classifier is adopted. To counter this unbalanced learning, we use the weighted loss mechanism for both TextCNN and causal-modeling TextCNN (C-TextCNN); these models suffer less than the traditional baselines and achieve a somewhat more balanced performance, with improvements of around 15% on Anger and 7% on Sadness. We then apply causal utterance modeling to the original TextCNN, feeding the previous utterance as well as the target utterance into the model. Causal utterance modeling improves C-TextCNN over TextCNN by 6%, 2%, and 1% on Anger, Joy, and overall F1, respectively. Motivated by these preliminary experiments, the proposed FriendsBERT also adopts both the weighted loss and causal utterance modeling. Compared to the original single-sentence BERT (FriendsBERT-base-s), the proposed FriendsBERT-base improves by 1% on Joy and overall F1, and by 2% on Sadness. For the final validation performance, our proposed approach achieves the highest scores, $0.85$ and $0.86$ for FriendsBERT-base and FriendsBERT-large, respectively. Overall, the proposed FriendsBERT successfully captures sentence-level, context-aware information and outperforms all the baselines, achieving high performance not only on labels with many samples but also on labels with few samples. Similar settings are also applied to the EmotionPush dataset for the final evaluation. Experiments ::: Evaluation Results The test set consists of 240 dialogues containing $3,296$ and $3,536$ utterances for Friends and EmotionPush, respectively. We re-train FriendsBERT and ChatBERT on the first 920 training dialogues and predict the evaluation results using the model with the best validation performance. The results are shown in Table TABREF29 and Table TABREF30. The present method achieves $81.5\%$ and $88.5\%$ micro F1-score on the test sets of Friends and EmotionPush, respectively. Conclusion and Future work In the present work, we propose FriendsBERT and ChatBERT for the multi-utterance emotion recognition task on the EmotionLines dataset. The proposed models are adapted from BERT BIBREF5 with three main improvements to the training procedure: the causal utterance modeling mechanism, domain-specific model pre-training, and a weighted loss. Causal utterance modeling takes advantage of sentence-level context information during inference. Domain-specific pre-training helps counteract the bias of different text domains. The weighted loss prevents the model from predicting only the labels with large sample sizes. The effectiveness and generalizability of the proposed methods are demonstrated in the experiments. In future work, we consider including the conditional probabilistic constraint $P ({\rm Emo}_{B} | \hat{\rm Emo}_{A})$, so that the model predicts the emotion of the target sentence based on an understanding of the context emotions. This might guide the model better than predicting the emotion of ${\rm Sentence}_B$ directly. In addition, because of the limitations of the BERT input format, supporting an arbitrary number of input sentences is an important design requirement for our future work. 
Developing personality embeddings is another direction for future work on emotion recognition. A personality embedding would act as a sentence-level embedding injected into the word embeddings, and this additional information could potentially contribute further improvements.
[ "BOW-LR, BOW-RF. TFIDF-RF, TextCNN, C-TextCNN", "bag-of-words (BOW), term frequency–inverse document frequency (TFIDF), neural-based word embedding, Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe" ]
3,181
qasper_e
en
null
eca9924a93084bb987fc705af683806564fa2d04e9dba909
how many tags do they look at?
Introduction When people shop for books online in e-book stores such as, e.g., the Amazon Kindle store, they enter search terms with the goal to find e-books that meet their preferences. Such e-books have a variety of metadata such as, e.g., title, author or keywords, which can be used to retrieve e-books that are relevant to the query. As a consequence, from the perspective of e-book publishers and editors, annotating e-books with tags that best describe the content and which meet the vocabulary of users (e.g., when searching and reviewing e-books) is an essential task BIBREF0 . Problem and aim of this work. Annotating e-books with suitable tags is, however, a complex task as users' vocabulary may differ from the one of editors. Such a vocabulary mismatch yet hinders effective organization and retrieval BIBREF1 of e-books. For example, while editors mostly annotate e-books with descriptive tags that reflect the book's content, Amazon users often search for parts of the book title. In the data we use for the present study (see Section SECREF2 ), we find that around 30% of the Amazon search terms contain parts of e-book titles. In this paper, we present our work to support editors in the e-book annotation process with tag recommendations BIBREF2 , BIBREF3 . Our idea is to exploit user-generated search query terms in Amazon to mimic the vocabulary of users in Amazon, who search for e-books. We combine these search terms with tags assigned by editors in a hybrid tag recommendation approach. Thus, our aim is to show that we can improve the performance of tag recommender systems for e-books both concerning recommendation accuracy as well as semantic similarity and tag recommendation diversity. Related work. In tag recommender systems, mostly content-based algorithms (e.g., BIBREF4 , BIBREF5 ) are used to recommend tags to annotate resources such as e-books. In our work, we incorporate both content features of e-books (i.e., title and description text) as well as Amazon search terms to account for the vocabulary of e-book readers. Concerning the evaluation of tag recommendation systems, most studies focus on measuring the accuracy of tag recommendations (e.g., BIBREF2 ). However, the authors of BIBREF6 suggest also to use beyond-accuracy metrics such as diversity to evaluate the quality of tag recommendations. In our work, we measure recommendation diversity in addition to recommendation accuracy and propose a novel metric termed semantic similarity to validate semantic matches of tag recommendations. Approach and findings. We exploit editor tags and user-generated search terms as input for tag recommendation approaches. Our evaluation comprises of a rich set of 19 different algorithms to recommend tags for e-books, which we group into (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches. We evaluate our approaches in terms of accuracy, semantic similarity and diversity on the review content of Amazon users, which reflects the readers' vocabulary. With semantic similarity, we measure how semantically similar (based on learned Doc2Vec BIBREF7 embeddings) the list of recommended tags is to the list of relevant tags. We use this additional metric to measure not only exact “hits” of our recommendations but also semantic matches. Our evaluation results show that combining both data sources enhances the quality of tag recommendations for annotating e-books. 
Furthermore, approaches that solely train on Amazon search terms provide poor performance in terms of accuracy but deliver good results in terms of semantic similarity and recommendation diversity. Method In this section, we describe our dataset as well as our tag recommendation approaches we propose to annotate e-books. Dataset Our dataset contains two sources of data, one to generate tag recommendations and another one to evaluate tag recommendations. HGV GmbH has collected all data sources and we provide the dataset statistics in Table TABREF3 . Data used to generate recommendations. We employ two sources of e-book annotation data: (i) editor tags, and (ii) Amazon search terms. For editor tags, we collect data of 48,705 e-books from 13 publishers, namely Kunstmann, Delius-Klasnig, VUR, HJR, Diogenes, Campus, Kiwi, Beltz, Chbeck, Rowohlt, Droemer, Fischer and Neopubli. Apart from the editor tags, this data contains metadata fields of e-books such as the ISBN, the title, a description text, the author and a list of BISACs, which are identifiers for book categories. For the Amazon search terms, we collect search query logs of 21,243 e-books for 12 months (i.e., November 2017 to October 2018). Apart from the search terms, this data contains the e-books' ISBNs, titles and description texts. Table TABREF3 shows that the overlap of e-books that have editor tags and Amazon search terms is small (i.e., only 497). Furthermore, author and BISAC (i.e., the book category identifier) information are primarily available for e-books that contain editor tags. Consequently, both data sources provide complementary information, which underpins the intention of this work, i.e., to evaluate tag recommendation approaches using annotation sources from different contexts. Data used to evaluate recommendations. For evaluation, we use a third set of e-book annotations, namely Amazon review keywords. These review keywords are extracted from the Amazon review texts and are typically provided in the review section of books on Amazon. Our idea is to not favor one or the other data source (i.e., editor tags and Amazon search terms) when evaluating our approaches against expected tags. At the same time, we consider Amazon review keywords to be a good mixture of editor tags and search terms as they describe both the content and the users' opinions on the e-books (i.e., the readers' vocabulary). As shown in Table TABREF3 , we collect Amazon review keywords for 2,896 e-books (publishers: Kiwi, Rowohlt, Fischer, and Droemer), which leads to 33,663 distinct review keywords and on average 30 keyword assignments per e-book. Tag Recommendation Approaches We implement three types of tag recommendation approaches, i.e., (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches. Due to the lack of personalized tags (i.e., we do not know which user has assigned a tag), we do not implement other types of algorithms such as collaborative filtering BIBREF8 . In total, we evaluate 19 different algorithms to recommend tags for annotating e-books. Popularity-based approaches. We recommend the most frequently used tags in the dataset, which is a common strategy for tag recommendations BIBREF9 . That is, a most popular INLINEFORM0 approach for editor tags and a most popular INLINEFORM1 approach for Amazon search terms. 
For e-books, for which we also have author (= INLINEFORM2 and INLINEFORM3 ) or BISAC (= INLINEFORM4 and INLINEFORM5 ) information, we use these features to further filter the recommended tags, i.e., to only recommend tags that were used to annotate e-books of a specific author or a specific BISAC. We combine both data sources (i.e., editor tags and Amazon search terms) using a round-robin combination strategy, which ensures an equal weight for both sources. This gives us three additional popularity-based algorithms (= INLINEFORM0 , INLINEFORM1 and INLINEFORM2 ). Similarity-based approaches. We exploit the textual content of e-books (i.e., description or title) to recommend relevant tags BIBREF10 . For this, we first employ a content-based filtering approach BIBREF11 based on TF-IDF BIBREF12 to find top- INLINEFORM0 similar e-books. For each of the similar e-books, we then either extract the assigned editor tags (= INLINEFORM2 and INLINEFORM3 ) or the Amazon search terms (= INLINEFORM4 and INLINEFORM5 ). To combine the tags of the top- INLINEFORM6 similar e-books, we use the cross-source algorithm BIBREF13 , which favors tags that were used to annotate more than one similar e-book (i.e., tags that come from multiple recommendation sources). The final tag relevancy is calculated as: DISPLAYFORM0 where INLINEFORM0 denotes the number of distinct e-books, which yielded the recommendation of tag INLINEFORM1 , to favor tags that come from multiple sources and INLINEFORM2 is the similarity score of the corresponding e-book. We again use a round-robin strategy to combine both data sources (= INLINEFORM3 and INLINEFORM4 ). Hybrid approaches. We use the previously mentioned cross-source algorithm BIBREF13 to construct four hybrid recommendation approaches. In this case, tags are favored that are recommended by more than one algorithm. Hence, to create a popularity-based hybrid (= INLINEFORM0 ), we combine the best three performing popularity-based approaches from the ones (i) without any contextual signal, (ii) with the author as context, and (iii) with BISAC as context. In the case of the similarity-based hybrid (= INLINEFORM1 ), we utilize the two best performing similarity-based approaches from the ones (i) which use the title, and (ii) which use the description text. We further define INLINEFORM2 , a hybrid approach that combines the three popularity-based methods of INLINEFORM3 and the two similarity-based approaches of INLINEFORM4 . Finally, we define INLINEFORM5 as a hybrid approach that uses the best performing popularity-based and the best performing similarity-based approach (see Figure FIGREF11 in Section SECREF4 for more details about the particular algorithm combinations). Experimental Setup In this section, we describe our evaluation protocol as well as the measures we use to evaluate and compare our tag recommendation approaches. Evaluation Protocol For evaluation, we use the third set of e-book annotations, namely Amazon review keywords. As described in Section SECREF1 , these review keywords are extracted from the Amazon review texts and thus, reflect the users' vocabulary. We evaluate our approaches for the 2,896 e-books, for whom we got review keywords. To follow common practice for tag recommendation evaluation BIBREF14 , we predict the assigned review keywords (= our test set) for respective e-books. 
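As a concrete illustration of the cross-source aggregation and round-robin combination described in the previous section, the following sketch scores candidate tags from the top-n similar e-books. The exact scoring formula is not reproduced in this excerpt, so the code assumes the shape suggested by the text: each tag is scored by the summed similarity of the neighboring e-books that propose it, multiplied by the number of distinct e-books it came from. Function and variable names are illustrative, not taken from the authors' implementation.

```python
from collections import defaultdict
from itertools import zip_longest

def cross_source_scores(similar_books, book_tags):
    """similar_books: list of (book_id, similarity) for the top-n neighbors.
    book_tags: dict book_id -> list of tags (editor tags or search terms)."""
    sim_sum = defaultdict(float)
    sources = defaultdict(set)
    for book_id, sim in similar_books:
        for tag in book_tags.get(book_id, []):
            sim_sum[tag] += sim
            sources[tag].add(book_id)
    # Assumed form: favor tags proposed by multiple neighboring e-books.
    return {tag: len(sources[tag]) * sim_sum[tag] for tag in sim_sum}

def round_robin(list_a, list_b, k):
    # Interleave two ranked tag lists with equal weight, dropping duplicates.
    merged, seen = [], set()
    for a, b in zip_longest(list_a, list_b):
        for tag in (a, b):
            if tag is not None and tag not in seen:
                seen.add(tag)
                merged.append(tag)
    return merged[:k]

neighbors = [("b1", 0.9), ("b2", 0.7), ("b3", 0.4)]
editor_tags = {"b1": ["thriller", "crime"], "b2": ["crime"], "b3": ["noir"]}
search_terms = {"b1": ["serial killer"], "b2": ["crime novel"], "b3": ["crime"]}

scores_editor = cross_source_scores(neighbors, editor_tags)
scores_search = cross_source_scores(neighbors, search_terms)
ranked_editor = sorted(scores_editor, key=scores_editor.get, reverse=True)
ranked_search = sorted(scores_search, key=scores_search.get, reverse=True)
print(round_robin(ranked_editor, ranked_search, k=5))
```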
Evaluation Metrics In this work, we measure (i) recommendation accuracy, (ii) semantic similarity, and (iii) recommendation diversity to evaluate the quality of our approaches from different perspectives. Recommendation accuracy. We use Normalized Discounted Cumulative Gain (nDCG) BIBREF15 to measure the accuracy of the tag recommendation approaches. The nDCG measure is a standard ranking-dependent metric that not only measures how many tags can be correctly predicted but also takes into account their position in the recommendation list with length of INLINEFORM0 . It is based on the Discounted Cummulative Gain, which is given by: DISPLAYFORM0 where INLINEFORM0 is a function that returns 1 if the recommended tag at position INLINEFORM1 in the recommended list is relevant. We then calculate DCG@ INLINEFORM2 for every evaluated e-book by dividing DCG@ INLINEFORM3 with the ideal DCG value iDCG@ INLINEFORM4 , which is the highest possible DCG value that can be achieved if all the relevant tags would be recommended in the correct order. It is given by the following formula BIBREF15 : DISPLAYFORM0 Semantic similarity. One precondition of standard recommendation accuracy measures is that to generate a “hit”, the recommended tag needs to be an exact syntactical match to the one from the test set. When tags are recommended from one data source and compared to tags from another source, this can be problematic. For example, if we recommend the tag “victim” but expect the tag “prey”, we would mark this as a mismatch, therefore being a bad recommendation. But if we know that the corresponding e-book is a crime novel, the recommended tag would be (semantically) descriptive to reflect the book's content. Hence, in this paper, we propose to additionally measure the semantic similarity between recommended tags and tags from the test set (i.e., the Amazon review keywords). Over the last four years, there have been several notable publications in the area of applying deep learning to uncover semantic relationships between textual content (e.g., by learning word embeddings with Word2Vec BIBREF16 , BIBREF17 ). Based on this, we propose an alternative measure of recommendation quality by learning the semantic relationships from both vocabularies and then using it to compare how semantically similar the recommended tags are to the expected review keywords. For this, we first extract the textual content in the form of the description text, title, editor tags and Amazon search terms of e-books from our dataset. We then train a Doc2Vec BIBREF7 model on the content. Then, we use the model to infer the latent representation for both the complete list of recommended tags as well as the list of expected tags from the test set. Finally, we use the cosine similarity measure to calculate how semantically similar these two lists are. Recommendation diversity. As defined in BIBREF18 , we calculate recommendation diversity as the average dissimilarity of all pairs of tags in the list of recommended tags. Thus, given a distance function INLINEFORM0 that corresponds to the dissimilarity between two tags INLINEFORM1 and INLINEFORM2 in the list of recommended tags, INLINEFORM3 is given as the average dissimilarity of all pairs of tags: DISPLAYFORM0 where INLINEFORM0 is the number of evaluated e-books and the dissimilarity function is defined as INLINEFORM1 . In our experiments, we use the previously trained Doc2Vec model to extract the latent representation of a specific tag. 
The similarity of two tags INLINEFORM2 is then calculated with the Cosine similarity measure using the latent vector representations of respective tags INLINEFORM3 and INLINEFORM4 . Results Concerning tag recommendation accuracy, in this section, we report results for different values of INLINEFORM0 (i.e., number of recommended tags). For the beyond-accuracy experiment, we use the full list of recommended tags (i.e., INLINEFORM1 ). Recommendation Accuracy Evaluation Figure FIGREF11 shows the results of the accuracy experiment for the (i) popularity-based, (ii) similarity-based, and (iii) hybrid tag recommendation approaches. Popularity-based approaches. In Figure FIGREF11 , we see that popularity-based approaches based on editor tags tend to perform better than if trained on Amazon search terms. If we take into account contextual information like BISAC or author, we can further improve accuracy in terms of INLINEFORM0 . That is, we find that using popular tags from e-books of a specific author leads to the best accuracy of the popularity-based approaches. This suggests that editors and readers do seem to reuse tags for e-books of same authors. If we use both editor tags and Amazon search terms, we can further increase accuracy, especially for higher values of INLINEFORM1 like in the case of INLINEFORM2 . This is, however, not the case for INLINEFORM3 as the accuracy of the integrated INLINEFORM4 approach is low. The reason for this is the limited amount of e-books from within the Amazon search query logs that have BISAC information (i.e., only INLINEFORM5 ). Similarity-based approaches. We further improve accuracy if we first find similar e-books and then extract their top- INLINEFORM0 tags in a cross-source manner as described in Section SECREF4 . As shown in Figure FIGREF11 , using the description text to find similar e-books results in more accurate tag recommendations than using the title (i.e., INLINEFORM0 for INLINEFORM1 ). This is somehow expected as the description text consists of a bigger corpus of words (i.e., multiple sentences) than the title. Concerning the collected Amazon search query logs, extracting and then recommending tags from this source results in a much lower accuracy performance. Thus, these results also suggest to investigate beyond-accuracy metrics as done in Section SECREF17 . Hybrid approaches. Figure FIGREF11 shows the accuracy results of the four hybrid approaches. By combining the best three popularity-based approaches, we outperform all of the initially evaluated popularity algorithms (i.e., INLINEFORM0 for INLINEFORM1 ). On the contrary, the combination of the two best performing similarity-based approaches INLINEFORM2 and INLINEFORM3 does not yield better accuracy. The negative impact of using a lower-performing approach such as INLINEFORM4 within a hybrid combination can also be observed in INLINEFORM5 for lower values of INLINEFORM6 . Overall, this confirms our initial intuition that combining the best performing popularity-based approach with the best similarity-based approach should result in the highest accuracy (i.e., INLINEFORM7 for INLINEFORM8 ). Moreover, our goal, namely to exploit editor tags in combination with search terms used by readers to increase the metadata quality of e-books, is shown to be best supported by applying hybrid approaches as they provide the best prediction results. Beyond-Accuracy Evaluation Figure FIGREF16 illustrates the results of the experiments, which measure the recommendation impact beyond-accuracy. 
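Before examining these results, the three measures defined above (nDCG, semantic similarity, and diversity) can be summarized in a short sketch. The nDCG follows the standard formulation referenced in the text; the semantic-similarity and diversity computations assume that tag lists and individual tags have already been embedded (e.g., with a trained Doc2Vec model), so plain vectors are used here as stand-ins.

```python
import numpy as np

def dcg_at_k(recommended, relevant, k):
    # DCG@k with binary relevance and a log2(position + 1) discount.
    rel = [1.0 if tag in relevant else 0.0 for tag in recommended[:k]]
    return sum(r / np.log2(i + 2) for i, r in enumerate(rel))

def ndcg_at_k(recommended, relevant, k):
    ideal = dcg_at_k(list(relevant), relevant, k)   # all relevant tags ranked first
    return dcg_at_k(recommended, relevant, k) / ideal if ideal > 0 else 0.0

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_similarity(rec_list_vec, relevant_list_vec):
    # Cosine similarity between the embeddings of the two tag *lists*.
    return cosine(rec_list_vec, relevant_list_vec)

def diversity(tag_vectors):
    # Average pairwise dissimilarity (1 - cosine) over all recommended tag pairs.
    dissims = [1.0 - cosine(u, v)
               for i, u in enumerate(tag_vectors)
               for v in tag_vectors[i + 1:]]
    return float(np.mean(dissims)) if dissims else 0.0

recommended = ["crime", "thriller", "victim", "romance"]
relevant = {"crime", "prey", "suspense"}
print(round(ndcg_at_k(recommended, relevant, k=4), 3))

rng = np.random.default_rng(1)
vecs = [rng.normal(size=8) for _ in recommended]
print(round(diversity(vecs), 3))
```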
Semantic similarity. Figure FIGREF16 illustrates the results of our proposed semantic similarity measure. To compare our proposed measure to standard accuracy measures such as INLINEFORM0 , we use Kendall's Tau rank correlation BIBREF19 as suggested by BIBREF20 for automatic evaluation of information-ordering tasks. From that, we rank our recommendation approaches according to both accuracy and semantic similarity and calculate the relation between both rankings. This results in INLINEFORM1 with a p-value < INLINEFORM2 , which suggests a high correlation between the semantic similarity and the standard accuracy measure. Therefore, the semantic similarity measure helps us interpret the recommendation quality. For instance, we achieve the lowest INLINEFORM0 values with the similarity-based approaches that recommend Amazon search terms (i.e., INLINEFORM1 and INLINEFORM2 ). When comparing these results with others from Figure FIGREF11 , a conclusion could be quickly drawn that the recommended tags are merely unusable. However, by looking at Figure FIGREF16 , we see that, although these approaches do not provide the highest recommendation accuracy, they still result in tag recommendations that are semantically related at a high degree to the expected annotations from the test set. Overall, this suggests that approaches, which provide a poor accuracy performance concerning INLINEFORM4 but provide a good performance regarding semantic similarity could still be helpful for annotating e-books. Recommendation diversity. Figure FIGREF16 shows the diversity of the tag recommendation approaches. We achieve the highest diversity with the similarity-based approaches, which extract Amazon search terms. Their accuracy is, however, very low. Thus, the combination of the two vocabularies can provide a good trade-off between recommendation accuracy and diversity. Conclusion and Future Work In this paper, we present our work to support editors in the e-book annotation process. Specifically, we aim to provide tag recommendations that incorporate both the vocabulary of the editors and e-book readers. Therefore, we train various configurations of tag recommender approaches on editors' tags and Amazon search terms and evaluate them on a dataset containing Amazon review keywords. We find that combining both data sources enhances the quality of tag recommendations for annotating e-books. Furthermore, while approaches that train only on Amazon search terms provide poor performance concerning recommendation accuracy, we show that they still offer helpful annotations concerning recommendation diversity as well as our novel semantic similarity metric. Future work. For future work, we plan to validate our findings using another dataset, e.g., by recommending tags for scientific articles and books in BibSonomy. With this, we aim to demonstrate the usefulness of the proposed approach in a similar domain and to enhance the reproducibility of our results by using an open dataset. Moreover, we plan to evaluate our tag recommendation approaches in a study with domain users. Also, we want to improve our similarity-based approaches by integrating novel embedding approaches BIBREF16 , BIBREF17 as we did, for example, with our proposed semantic similarity evaluation metric. Finally, we aim to incorporate explanations for recommended tags so that editors of e-book annotations receive additional support in annotating e-books BIBREF21 . 
By making the underlying (semantic) reasoning visible to the editor who is in charge of tailoring annotations, we aim to support two goals: (i) allowing readers to discover e-books more efficiently, and (ii) enabling publishers to leverage semi-automatic categorization processes for e-books. In turn, providing explanations fosters control over which vocabulary to choose when tagging e-books for different application contexts. Acknowledgments. The authors would like to thank Peter Langs, Jan-Philipp Wolf and Alyona Schraa from HGV GmbH for providing the e-book annotation data. This work was funded by the Know-Center GmbH (FFG COMET Program), the FFG Data Market Austria project and the AI4EU project (EU grant 825619). The Know-Center GmbH is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labor and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency (FFG).
[ "Unanswerable", "48,705" ]
3,307
qasper_e
en
null
d1aa1132439bd292965634095bf1c9943e062bb6645ff78c
What is the architecture of their model?
Introduction End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets. In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost. Data Collection and Processing ::: Common Voice (CoVo) Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents. Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. 
As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese. Validated transcripts were sent to professional translators. Note that the translators had access to the transcripts but not the corresponding voice clips since clips would not carry additional information. Since transcripts were duplicated due to multiple speakers, we deduplicated the transcripts before sending them to translators. As a result, different voice clips of the same content (transcript) will have identical translations in CoVoST for train, development and test splits. In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed. We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint. Data Collection and Processing ::: Tatoeba (TT) Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses. We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. This makes the resulting evaluation set closer to real-world scenarios and more challenging. We run the same quality checks for TT as for CoVoST but we do not find poor quality translations according to our criteria. 
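Two of the automatic sanity checks described above, the sentence-level BLEU comparison against a reference system and the English-character ratio, are simple enough to sketch directly. The thresholds below are made-up placeholders rather than the values used by the authors, and the machine translations would come from the systems mentioned in the text.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def sentence_bleu_score(hypothesis, reference):
    # NLTK sentence-level BLEU, as used for the German/French/Russian checks.
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.split()], hypothesis.split(),
                         smoothing_function=smooth)

def english_char_ratio(text):
    # Share of ASCII letters among the non-space characters.
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum(c.isascii() and c.isalpha() for c in chars) / len(chars)

def flag_translation(human, machine, bleu_threshold=0.15, ratio_threshold=0.5):
    # Returns True if the human translation should be sent back for manual review
    # (both thresholds are illustrative placeholders).
    too_far_from_mt = sentence_bleu_score(human, machine) < bleu_threshold
    too_few_english_chars = english_char_ratio(human) < ratio_threshold
    return too_far_from_mt or too_few_english_chars

print(flag_translation("the cat sits on the mat", "the cat is sitting on the mat"))
```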
Finally, we report the overlap between CoVo transcripts and TT sentences in Table TABREF5. We found a minimal overlap, which makes the TT evaluation set a suitable additional test set when training on CoVoST. Data Analysis ::: Basic Statistics Basic statistics for CoVoST and TT are listed in Table TABREF2 including (unique) sentence counts, speech durations, speaker demographics (partially available) as well as vocabulary and token statistics (based on Moses-tokenized sentences by sacreMoses) on both transcripts and translations. We see that CoVoST has over 327 hours of German speeches and over 171 hours of French speeches, which, to our knowledge, corresponds to the largest corpus among existing public ST corpora (the second largest is 110 hours BIBREF18 for German and 38 hours BIBREF19 for French). Moreover, CoVoST has a total of 18 hours of Dutch speeches, to our knowledge, contributing the first public Dutch ST resource. CoVoST also has around 27-hour Russian speeches, 37-hour Italian speeches and 67-hour Persian speeches, which is 1.8 times, 2.5 times and 13.3 times of the previous largest public one BIBREF7. Most of the sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in a rich diversity in the speeches. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variations in both model training and evaluation. Data Analysis ::: Speaker Diversity As we can see from Table TABREF2, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, which is shown in Figure FIGREF6, FIGREF7 and FIGREF8. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers. And for German sentences, even over 90% of them have at least 5 speakers. Similarly, we see that a large portion of sentences are spoken in multiple accents for French, German, Dutch and Spanish. Speakers of each language also spread widely across different age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s). Baseline Results We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST). Baseline Results ::: Experimental Settings ::: Data Preprocessing We convert raw MP3 audio files from CoVo and TT into mono-channel waveforms, and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, we tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except for apostrophes. We use character vocabularies on all the tasks, with 100% coverage of all the characters. Preliminary experimentation showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on both transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. The features are normalized to 0 mean and 1.0 standard deviation. We remove samples having more than 3,000 frames or more than 256 characters for GPU memory efficiency (less than 25 samples are removed for all languages). 
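A rough sketch of this preprocessing pipeline is shown below, using torchaudio's Kaldi-compatible filterbank computation. The call follows torchaudio's documented API to the best of our knowledge, but the surrounding file handling and the per-utterance normalization are assumptions made for illustration.

```python
import torch
import torchaudio

def extract_features(wav_path):
    # Load audio, downmix to mono, and resample to 16 kHz.
    waveform, sample_rate = torchaudio.load(wav_path)
    waveform = waveform.mean(dim=0, keepdim=True)
    if sample_rate != 16000:
        waveform = torchaudio.transforms.Resample(sample_rate, 16000)(waveform)
    # 80-channel log-mel filterbank, 25 ms window, 10 ms shift.
    fbank = torchaudio.compliance.kaldi.fbank(
        waveform, num_mel_bins=80, frame_length=25.0, frame_shift=10.0,
        sample_frequency=16000)
    # Normalize to zero mean and unit variance (assumed to be per utterance here).
    return (fbank - fbank.mean(dim=0)) / (fbank.std(dim=0) + 1e-8)

def keep_sample(features, transcript, max_frames=3000, max_chars=256):
    # Drop overly long samples for GPU memory efficiency, as described above.
    return features.size(0) <= max_frames and len(transcript) <= max_chars

# feats = extract_features("clip.wav")   # "clip.wav" is a placeholder path
# print(feats.shape, keep_sample(feats, "some transcript"))
```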
Baseline Results ::: Experimental Settings ::: Model Training Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting. Baseline Results ::: Experimental Settings ::: Inference and Evaluation We use a beam size of 5 for all models. We use the best checkpoint by validation loss for MT, and average the last 5 checkpoints for ASR and ST. For MT and ST, we report case-insensitive tokenized BLEU BIBREF22 using sacreBLEU BIBREF23. For ASR, we report word error rate (WER) and character error rate (CER) using VizSeq. Baseline Results ::: Automatic Speech Recognition (ASR) For simplicity, we use the same model architecture for ASR and ST, although we do not leverage ASR models to pretrain ST model encoders later. Table TABREF18 shows the word error rate (WER) and character error rate (CER) for ASR models. We see that French and German perform the best given they are the two highest resource languages in CoVoST. The other languages are relatively low resource (especially Turkish and Swedish) and the ASR models are having difficulties to learn from this data. Baseline Results ::: Machine Translation (MT) MT models take transcripts (without punctuation) as inputs and outputs translations (with punctuation). For simplicity, we do not change the text preprocessing methods for MT to correct this mismatch. Moreover, this mismatch also exists in cascading ST systems, where MT model inputs are the outputs of an ASR model. Table TABREF20 shows the BLEU scores of MT models. We notice that the results are consistent with what we see from ASR models. For example thanks to abundant training data, French has a decent BLEU score of 29.8/25.4. German doesn't perform well, because of less richness of content (transcripts). The other languages are low resource in CoVoST and it is difficult to train decent models without additional data or pre-training techniques. Baseline Results ::: Speech Translation (ST) CoVoST is a many-to-one multilingual ST corpus. While end-to-end one-to-many and many-to-many multilingual ST models have been explored very recently BIBREF8, BIBREF9, many-to-one multilingual models, to our knowledge, have not. We hence use CoVoST to examine this setting. Table TABREF22 and TABREF23 show the BLEU scores for both bilingual and multilingual end-to-end ST models trained on CoVoST. We observe that combining speeches from multiple languages is consistently bringing gains to low-resource languages (all besides French and German). This includes combinations of distant languages, such as Ru+Fr, Tr+Fr and Zh+Fr. Moreover, some combinations do bring gains to high-resource language (French) as well: Es+Fr, Tr+Fr and Mn+Fr. We simply provide the most basic many-to-one multilingual baselines here, and leave the full exploration of the best configurations to future work. Finally, we note that for some language pairs, absolute BLEU numbers are relatively low as we restrict model training to the supervised data. We encourage the community to improve upon those baselines, for example by leveraging semi-supervised training. 
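For completeness, a sketch of how the reported metrics can be computed is given below. The sacreBLEU call mirrors its documented corpus_bleu interface for case-insensitive scoring (to the best of our knowledge), while WER is computed with a small hand-rolled word-level edit distance rather than the VizSeq implementation used by the authors.

```python
import sacrebleu

def corpus_bleu_lowercase(hypotheses, references):
    # Case-insensitive corpus BLEU; sacreBLEU expects one list per reference set.
    return sacrebleu.corpus_bleu(hypotheses, [references], lowercase=True).score

def word_error_rate(hypothesis, reference):
    # Levenshtein distance over words, normalized by the reference length.
    h, r = hypothesis.split(), reference.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

hyps = ["das ist ein test", "hello world"]
refs = ["das ist ein test", "hello there world"]
print(corpus_bleu_lowercase(hyps, refs))
print(word_error_rate(hyps[1], refs[1]))
```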
Baseline Results ::: Multi-Speaker Evaluation In CoVoST, a large portion of the transcripts are covered by multiple speakers with different genders, accents and age groups. Besides the standard corpus-level BLEU scores, we also want to evaluate the variance of model outputs on the same content (transcript) spoken by different speakers. We hence propose to group samples (and their sentence BLEU scores) by transcript, and then calculate the average per-group mean and the average coefficient of variation, defined as $\textrm {BLEU}_{MS} = \frac{1}{|G^{\prime }|}\sum _{g \in G^{\prime }} \textrm {Mean}(g)$ and $\textrm {CoefVar}_{MS} = \frac{1}{|G^{\prime }|}\sum _{g \in G^{\prime }} \frac{\textrm {StdDev}(g)}{\textrm {Mean}(g)}$, where $G$ is the set of sentence BLEU scores grouped by transcript and $G^{\prime } = \lbrace g | g\in G, |g|>1, \textrm {Mean}(g) > 0 \rbrace $. $\textrm {BLEU}_{MS}$ provides a normalized quality score, as opposed to corpus-level BLEU or an unnormalized average of sentence BLEU. $\textrm {CoefVar}_{MS}$ is a standardized measure of model stability against different speakers (the lower the better). Table TABREF24 shows the $\textrm {BLEU}_{MS}$ and $\textrm {CoefVar}_{MS}$ of our ST models on the CoVoST test set. We see that German and Persian have the worst $\textrm {CoefVar}_{MS}$ (least stable) given their rich speaker diversity in the test set and relatively small train set (see also Figure FIGREF6 and Table TABREF2). Dutch also has poor $\textrm {CoefVar}_{MS}$ because of the lack of training data. Multilingual models are consistently more stable on low-resource languages. Ru+Fr, Tr+Fr, Fa+Fr and Zh+Fr even have better $\textrm {CoefVar}_{MS}$ than all individual languages. Conclusion We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed.
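A compact sketch of this grouping procedure follows. The sentence-level BLEU values are taken as given (in practice they would come from sacreBLEU), and the two statistics are computed as described above: the mean of per-transcript means and the mean of per-transcript coefficients of variation over groups with more than one sample and a positive mean. Whether the standard deviation is the population or sample version is not specified in the text; the population version is assumed here.

```python
from collections import defaultdict
from statistics import mean, pstdev

def multi_speaker_stats(samples):
    """samples: list of (transcript, sentence_bleu) pairs, one per voice clip."""
    groups = defaultdict(list)
    for transcript, bleu in samples:
        groups[transcript].append(bleu)
    # Keep groups with more than one clip and a positive mean, as in the definition of G'.
    kept = [g for g in groups.values() if len(g) > 1 and mean(g) > 0]
    bleu_ms = mean(mean(g) for g in kept)
    coefvar_ms = mean(pstdev(g) / mean(g) for g in kept)
    return bleu_ms, coefvar_ms

samples = [
    ("bonjour tout le monde", 32.0),
    ("bonjour tout le monde", 28.5),
    ("bonjour tout le monde", 30.1),
    ("merci beaucoup", 41.0),
    ("merci beaucoup", 12.0),
    ("phrase isolée", 25.0),           # single-speaker group, excluded from G'
]
print(multi_speaker_stats(samples))
```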
[ "follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing" ]
2,424
qasper_e
en
null
54797939fdd2089aeb0c4242e660b2ff1914538839e9a21e
what data did they use?
Introduction Long short term memory (LSTM) units BIBREF1 are popular for many sequence modeling tasks and are used extensively in language modeling. A key to their success is their articulated gating structure, which allows for more control over the information passed along the recurrence. However, despite the sophistication of the gating mechanisms employed in LSTMs and similar recurrent units, the input and context vectors are treated with simple linear transformations prior to gating. Non-linear transformations such as convolutions BIBREF2 have been used, but these have not achieved the performance of well regularized LSTMs for language modeling BIBREF3 . A natural way to improve the expressiveness of linear transformations is to increase the number of dimensions of the input and context vectors, but this comes with a significant increase in the number of parameters which may limit generalizability. An example is shown in Figure FIGREF1 , where LSTMs performance decreases with the increase in dimensions of the input and context vectors. Moreover, the semantics of the input and context vectors are different, suggesting that each may benefit from specialized treatment. Guided by these insights, we introduce a new recurrent unit, the Pyramidal Recurrent Unit (PRU), which is based on the LSTM gating structure. Figure FIGREF2 provides an overview of the PRU. At the heart of the PRU is the pyramidal transformation (PT), which uses subsampling to effect multiple views of the input vector. The subsampled representations are combined in a pyramidal fusion structure, resulting in richer interactions between the individual dimensions of the input vector than is possible with a linear transformation. Context vectors, which have already undergone this transformation in the previous cell, are modified with a grouped linear transformation (GLT) which allows the network to learn latent representations in high dimensional space with fewer parameters and better generalizability (see Figure FIGREF1 ). We show that PRUs can better model contextual information and demonstrate performance gains on the task of language modeling. The PRU improves the perplexity of the current state-of-the-art language model BIBREF0 by up to 1.3 points, reaching perplexities of 56.56 and 64.53 on the Penn Treebank and WikiText2 datasets while learning 15-20% fewer parameters. Replacing an LSTM with a PRU results in improvements in perplexity across a variety of experimental settings. We provide detailed ablations which motivate the design of the PRU architecture, as well as detailed analysis of the effect of the PRU on other components of the language model. Related work Multiple methods, including a variety of gating structures and transformations, have been proposed to improve the performance of recurrent neural networks (RNNs). We first describe these approaches and then provide an overview of recent work in language modeling. Pyramidal Recurrent Units We introduce Pyramidal Recurrent Units (PRUs), a new RNN architecture which improves modeling of context by allowing for higher dimensional vector representations while learning fewer parameters. Figure FIGREF2 provides an overview of PRU. We first elaborate on the details of the pyramidal transformation and the grouped linear transformation. We then describe our recurrent unit, PRU. 
Pyramidal transformation for input The basic transformation in many recurrent units is a linear transformation INLINEFORM0 defined as: DISPLAYFORM0 where INLINEFORM0 are learned weights that linearly map INLINEFORM1 to INLINEFORM2 . To simplify notation, we omit the biases. Motivated by successful applications of sub-sampling in computer vision (e.g., BIBREF22 , BIBREF23 , BIBREF9 , BIBREF24 ), we subsample input vector INLINEFORM0 into INLINEFORM1 pyramidal levels to achieve representation of the input vector at multiple scales. This sub-sampling operation produces INLINEFORM2 vectors, represented as INLINEFORM3 , where INLINEFORM4 is the sampling rate and INLINEFORM5 . We learn scale-specific transformations INLINEFORM6 for each INLINEFORM7 . The transformed subsamples are concatenated to produce the pyramidal analog to INLINEFORM8 , here denoted as INLINEFORM9 : DISPLAYFORM0 where INLINEFORM0 indicates concatenation. We note that pyramidal transformation with INLINEFORM1 is the same as the linear transformation. To improve gradient flow inside the recurrent unit, we combine the input and output using an element-wise sum (when dimension matches) to produce residual analog of pyramidal transformation, as shown in Figure FIGREF2 BIBREF25 . We sub-sample the input vector INLINEFORM0 into INLINEFORM1 pyramidal levels using the kernel-based approach BIBREF8 , BIBREF9 . Let us assume that we have a kernel INLINEFORM2 with INLINEFORM3 elements. Then, the input vector INLINEFORM4 can be sub-sampled as: DISPLAYFORM0 where INLINEFORM0 represents the stride and INLINEFORM1 . The number of parameters learned by the linear transformation and the pyramidal transformation with INLINEFORM0 pyramidal levels to map INLINEFORM1 to INLINEFORM2 are INLINEFORM3 and INLINEFORM4 respectively. Thus, pyramidal transformation reduces the parameters of a linear transformation by a factor of INLINEFORM5 . For example, the pyramidal transformation (with INLINEFORM6 and INLINEFORM7 ) learns INLINEFORM8 fewer parameters than the linear transformation. Grouped linear transformation for context Many RNN architectures apply linear transformations to both the input and context vector. However, this may not be ideal due to the differing semantics of each vector. In many NLP applications including language modeling, the input vector is a dense word embedding which is shared across all contexts for a given word in a dataset. In contrast, the context vector is highly contextualized by the current sequence. The differences between the input and context vector motivate their separate treatment in the PRU architecture. The weights learned using the linear transformation (Eq. EQREF9 ) are reused over multiple time steps, which makes them prone to over-fitting BIBREF26 . To combat over-fitting, various methods, such as variational dropout BIBREF26 and weight dropout BIBREF0 , have been proposed to regularize these recurrent connections. To further improve generalization abilities while simultaneously enabling the recurrent unit to learn representations at very high dimensional space, we propose to use grouped linear transformation (GLT) instead of standard linear transformation for recurrent connections BIBREF27 . While pyramidal and linear transformations can be applied to transform context vectors, our experimental results in Section SECREF39 suggests that GLTs are more effective. The linear transformation INLINEFORM0 maps INLINEFORM1 linearly to INLINEFORM2 . 
Grouped linear transformations break the linear interactions by factoring the linear transformation into two steps. First, a GLT splits the input vector INLINEFORM3 into INLINEFORM4 smaller groups such that INLINEFORM5. Second, a linear transformation INLINEFORM6 is applied to map INLINEFORM7 linearly to INLINEFORM8, for each INLINEFORM9. The INLINEFORM10 resultant output vectors INLINEFORM11 are concatenated to produce the final output vector INLINEFORM12. DISPLAYFORM0 GLTs learn representations at low dimensionality. Therefore, a GLT requires INLINEFORM0 fewer parameters than the linear transformation. We note that GLTs are a subset of linear transformations: in a linear transformation, each neuron receives an input from every element in the input vector, while in a GLT, each neuron receives an input from only a subset of the input vector. Therefore, a GLT is the same as a linear transformation when INLINEFORM1. Pyramidal Recurrent Unit We extend the basic gating architecture of the LSTM with the pyramidal and grouped linear transformations outlined above to produce the Pyramidal Recurrent Unit (PRU), whose improved sequence modeling capacity is evidenced in Section SECREF4. At time INLINEFORM0, the PRU combines the input vector INLINEFORM1 and the previous context vector (or previous hidden state vector) INLINEFORM2 using the following transformation function: DISPLAYFORM0 where INLINEFORM0 indexes the various gates in the LSTM model, and INLINEFORM1 and INLINEFORM2 represent the pyramidal and grouped linear transformations defined in Eqns. EQREF10 and EQREF15, respectively. We now incorporate INLINEFORM0 into the LSTM gating architecture to produce the PRU. At time INLINEFORM1, a PRU cell takes INLINEFORM2, INLINEFORM3, and INLINEFORM4 as inputs to produce the forget INLINEFORM5, input INLINEFORM6, output INLINEFORM7, and content INLINEFORM8 gate signals. The inputs are combined with these gate signals to produce the context vector INLINEFORM9 and cell state INLINEFORM10. Mathematically, the PRU with the LSTM gating architecture can be defined as: DISPLAYFORM0 where INLINEFORM0 represents the element-wise multiplication operation, and INLINEFORM1 and INLINEFORM2 are the sigmoid and hyperbolic tangent activation functions. We note that the LSTM is a special case of the PRU when INLINEFORM3 = INLINEFORM4 = 1. Experiments To showcase the effectiveness of the PRU, we evaluate its performance on two standard datasets for word-level language modeling and compare with state-of-the-art methods. Additionally, we provide a detailed examination of the PRU and its behavior on the language modeling task. Set-up Following recent work, we evaluate on two widely used datasets, the Penn Treebank (PTB) BIBREF28 as prepared by BIBREF29 and WikiText2 (WT-2) BIBREF20. For both datasets, we follow the same training, validation, and test splits as in BIBREF0. We extend the language model AWD-LSTM BIBREF0 by replacing its LSTM layers with PRUs. Our model uses 3 layers of PRUs with an embedding size of 400. The number of parameters learned by state-of-the-art methods varies from 18M to 66M, with the majority of methods learning about 22M to 24M parameters on the PTB dataset. For a fair comparison with state-of-the-art methods, we fix the model size to 19M and vary the value of INLINEFORM0 and the hidden layer sizes so that the total number of learned parameters is similar across different configurations. We use 1000, 1200, and 1400 as hidden layer sizes for INLINEFORM1 = 1, 2, and 4, respectively. 
We use the same settings for the WT-2 dataset. We set the number of pyramidal levels INLINEFORM2 to two in our experiments and use average pooling for sub-sampling. These values are selected based on our ablation experiments on the validation set (Section SECREF39). We measure the performance of our models in terms of word-level perplexity. We follow the same training strategy as in BIBREF0. To understand the effect of regularization methods on the performance of PRUs, we perform experiments under two different settings: (1) Standard dropout: we use standard dropout BIBREF12 with a probability of 0.5 after the embedding layer, on the output between LSTM layers, and on the output of the final LSTM layer. (2) Advanced dropout: we use the same dropout techniques with the same dropout values as in BIBREF0. We call this model AWD-PRU. Results Table TABREF23 compares the performance of the PRU with state-of-the-art methods. We can see that the PRU achieves the best performance with fewer parameters. PRUs achieve either the same or better performance than LSTMs. In particular, the performance of PRUs improves with increasing values of INLINEFORM0. At INLINEFORM1, PRUs outperform LSTMs by about 4 points on the PTB dataset and by about 3 points on the WT-2 dataset. This is explained in part by the regularization effect of the grouped linear transformation (Figure FIGREF1). With grouped linear and pyramidal transformations, PRUs learn rich representations in a very high-dimensional space while learning fewer parameters. On the other hand, LSTMs overfit to the training data at such high dimensions and learn INLINEFORM2 to INLINEFORM3 more parameters than PRUs. With the advanced dropouts, the performance of PRUs improves by about 4 points on the PTB dataset and 7 points on the WT-2 dataset. This further improves with finetuning on the PTB (about 2 points) and WT-2 (about 1 point) datasets. For a similar number of parameters, the PRU with standard dropout outperforms most of the state-of-the-art methods by a large margin on the PTB dataset (e.g., RAN BIBREF7 by 16 points with 4M fewer parameters, QRNN BIBREF33 by 16 points with 1M more parameters, and NAS BIBREF31 by 1.58 points with 6M fewer parameters). With advanced dropouts, the PRU delivers the best performance. On both datasets, the PRU improves the perplexity by about 1 point while learning 15-20% fewer parameters. The PRU is a drop-in replacement for the LSTM; therefore, it can improve language models that use modern inference techniques such as dynamic evaluation BIBREF21. When we evaluate PRU-based language models (only with standard dropout) with dynamic evaluation on the PTB test set, the perplexity of the PRU (INLINEFORM0) improves from 62.42 to 55.23, while the perplexity of an LSTM (INLINEFORM1) with similar settings improves from 66.29 to 58.79, suggesting that modern inference techniques are equally applicable to PRU-based language models. Analysis It is shown above that the PRU can learn representations at higher dimensionality with more generalization power, resulting in performance gains for language modeling. A closer analysis of the impact of the PRU in a language modeling system reveals several factors that help explain how the PRU achieves these gains. As exemplified in Table TABREF34, the PRU tends toward more confident decisions, placing more of the probability mass on the top next-word prediction than the LSTM. 
To quantify this effect, we calculate the entropy of the next-token distribution for both the PRU and the LSTM using 3687 contexts from the PTB validation set. Figure FIGREF32 shows a histogram of the entropies of the distributions, where bins of size 0.23 are used to form the categories. We see that the PRU more often produces lower-entropy distributions, corresponding to higher confidence in next-token choices. This is evidenced by the mass of the red PRU curve lying in the lower entropy ranges compared to the blue LSTM curve. The PRU can produce confident decisions in part because more information is encoded in its higher-dimensional context vectors. The PRU has the ability to model individual words at different resolutions through the pyramidal transform, which provides multiple paths for the gradient to the embedding layer (similar to multi-task learning) and improves the flow of information. When considering the embeddings by part of speech, we find that the pyramid level 1 embeddings exhibit higher variance than the LSTM embeddings across all POS categories (Figure FIGREF33), and that pyramid level 2 embeddings show extremely low variance. We hypothesize that the LSTM must encode both coarse group similarities and individual word differences into the same vector space, reducing the space between individual words of the same category. The PRU can rely on the subsampled embeddings to account for coarse-grained group similarities, allowing for finer individual word distinctions in the embedding layer. This hypothesis is strengthened by the entropy results described above: a model which can make finer distinctions between individual words can more confidently assign probability mass. A model that cannot make these distinctions, such as the LSTM, must spread its probability mass across a larger class of similar words. Saliency analysis using gradients helps identify relevant words in a test sequence that contribute to the prediction BIBREF34, BIBREF35, BIBREF36. These approaches compute the relevance as the squared norm of the gradients obtained through back-propagation. Table TABREF34 visualizes the heatmaps for different sequences. PRUs, in general, give more relevance to contextual words than LSTMs, such as southeast (sample 1), cost (sample 2), face (sample 4), and introduced (sample 5), which helps in making more confident decisions. Furthermore, when gradients during back-propagation are visualized BIBREF37 (Table TABREF34), we find that PRUs have better gradient coverage than LSTMs, suggesting that PRUs make use of more of the features that contribute to the decision. This also suggests that PRUs update more parameters at each iteration, which results in faster training. The language model in BIBREF0 takes 500 and 750 epochs to converge with the PRU and the LSTM as the recurrent unit, respectively. Ablation studies In this section, we provide a systematic analysis of our design choices. Our training methodology is the same as described in Section SECREF19 with standard dropout. For a thorough understanding of our design choices, we use a language model with a single PRU layer and fix the size of the embedding and hidden layers to 600. The word-level perplexities are reported on the validation sets of the PTB and WT-2 datasets. The two hyper-parameters that control the trade-off between performance and the number of parameters in PRUs are the number of pyramidal levels INLINEFORM0 and the number of groups INLINEFORM1. Figure FIGREF35 provides a trade-off between perplexity and recurrent unit (RU) parameters. 
Variable INLINEFORM0 and fixed INLINEFORM1: When we increase the number of pyramidal levels INLINEFORM2 at a fixed value of INLINEFORM3, the performance of the PRU drops by about 1 to 4 points while the total number of recurrent unit parameters is reduced by up to 15%. We note that the PRU with INLINEFORM4 at INLINEFORM5 delivers performance similar to the LSTM while learning about 15% fewer recurrent unit parameters. Fixed INLINEFORM0 and variable INLINEFORM1: When we vary the value of INLINEFORM2 at a fixed number of pyramidal levels INLINEFORM3, the total number of recurrent unit parameters decreases significantly with minimal impact on perplexity. For example, a PRU with INLINEFORM4 and INLINEFORM5 learns 77% fewer recurrent unit parameters while its perplexity (lower is better) increases by about 12% in comparison to an LSTM. Moreover, the decrease in the number of parameters at higher values of INLINEFORM6 enables PRUs to learn representations in a high-dimensional space with better generalizability (Table TABREF23). Table TABREF43 shows the impact of different transformations of the input vector INLINEFORM0 and the context vector INLINEFORM1. We make the following observations: (1) Using the pyramidal transformation for the input vectors improves the perplexity by about 1 point on both the PTB and WT-2 datasets while reducing the number of recurrent unit parameters by about 14% (see R1 and R4). We note that the performance of the PRU drops by up to 1 point when residual connections are not used (R4 and R6). (2) Using the grouped linear transformation for context vectors reduces the total number of recurrent unit parameters by about 75% while the performance drops by about 11% (see R3 and R4). When we use the pyramidal transformation instead of the linear transformation, the performance drops by up to 2% while there is no significant drop in the number of parameters (R4 and R5). We set the sub-sampling kernel INLINEFORM0 (Eq. EQREF12) with stride INLINEFORM1 and a size of 3 (INLINEFORM2) in four different ways: (1) Skip: we skip every other element in the input vector. (2) Convolution: we initialize the elements of INLINEFORM3 randomly from a normal distribution and learn them while training the model. We limit the output values between -1 and 1 using the INLINEFORM4 activation function to make training stable. (3) Avg. pool: we initialize the elements of INLINEFORM5 to INLINEFORM6. (4) Max pool: we select the maximum value in the kernel window INLINEFORM7. Table TABREF45 compares the performance of the PRU with these different sampling methods. Average pooling performs the best, while skipping gives comparable performance. Both of these methods enable the network to learn richer word representations while representing the input vector in different forms, thus delivering higher performance. Surprisingly, the convolution-based sub-sampling method does not perform as well as the averaging method. The INLINEFORM0 function used after convolution limits the range of output values, which are further limited by the LSTM gating structure, thereby impeding the flow of information inside the cell. Max pooling forces the network to learn representations from high-magnitude elements, so the distinguishing features between elements vanish, resulting in poor performance. Conclusion We introduce the Pyramidal Recurrent Unit, which better models contextual information by admitting higher-dimensional representations with good generalizability. 
When applied to the task of language modeling, PRUs improve perplexity across several settings, including recent state-of-the-art systems. Our analysis shows that the PRU improves gradient flow and expands the word embedding subspace, resulting in more confident decisions. Here we have shown improvements for language modeling. In the future, we plan to study the performance of PRUs on different tasks, including machine translation and question answering. In addition, we will study the performance of the PRU on language modeling with more recent inference techniques, such as dynamic evaluation and mixture of softmaxes. Acknowledgments This research was supported by NSF (IIS 1616112, III 1703166), an Allen Distinguished Investigator Award, and gifts from the Allen Institute for AI, Google, Amazon, and Bloomberg. We are grateful to Aaron Jaech, Hannah Rashkin, Mandar Joshi, Aniruddha Kembhavi, and the anonymous reviewers for their helpful comments.
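To make the pyramidal transformation, the grouped linear transformation, and the PRU gating described above more concrete, the sketch below shows one possible NumPy reading of a single PRU step. It is a minimal illustration, not the authors' implementation: the average-pooling sub-sampling with stride 2, the dictionary-based parameter layout, the omitted residual connection, and the assumption that per-level and per-group output sizes add up to the hidden size are all ours.

```python
# Minimal, hypothetical NumPy sketch of one PRU step (not the released code).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pyramidal_transform(x, level_weights):
    """Transform the input at multiple scales and concatenate the results.

    level_weights[k] maps the k-th sub-sampled view of x to a slice of the gate
    pre-activation; slice sizes are assumed to sum to the hidden size, and the
    vector length is assumed to stay even at each level. The residual connection
    described in the paper is omitted for brevity.
    """
    outputs, view = [], x
    for W in level_weights:
        outputs.append(view @ W)                  # scale-specific linear map
        view = 0.5 * (view[0::2] + view[1::2])    # average-pool with stride 2
    return np.concatenate(outputs)

def grouped_linear_transform(h, group_weights):
    """Split h into len(group_weights) groups, transform each, and concatenate."""
    groups = np.split(h, len(group_weights))      # assumes len(h) % g == 0
    return np.concatenate([part @ W for part, W in zip(groups, group_weights)])

def pru_step(x, h_prev, c_prev, params):
    """One PRU cell update using the standard LSTM gating structure."""
    gates = {}
    for name in ("f", "i", "o", "g"):             # forget, input, output, content
        z = (pyramidal_transform(x, params["pyramid"][name]) +
             grouped_linear_transform(h_prev, params["groups"][name]))
        gates[name] = np.tanh(z) if name == "g" else sigmoid(z)
    c = gates["f"] * c_prev + gates["i"] * gates["g"]   # new cell state
    h = gates["o"] * np.tanh(c)                         # new context vector
    return h, c
```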
[ " Penn Treebank, WikiText2", "Penn Treebank (PTB) , WikiText2 (WT-2)" ]
3,302
qasper_e
en
null
e2a3009e3e15b0e45552a4ab0c593e027e64acf190b2903b
Do they use graphical models?
Introduction Following developing news stories is imperative to making real-time decisions on important political and public safety matters. Given the abundance of media providers and languages, this endeavor is an extremely difficult task. As such, there is a strong demand for automatic clustering of news streams, so that they can be organized into stories or themes for further processing. Performing this task in an online and efficient manner is a challenging problem, not only for newswire, but also for scientific articles, online reviews, forum posts, blogs, and microblogs. A key challenge in handling document streams is that the story clusters must be generated on the fly in an online fashion: this requires handling documents one-by-one as they appear in the document stream. In this paper, we provide a treatment to the problem of online document clustering, i.e. the task of clustering a stream of documents into themes. For example, for news articles, we would want to cluster them into related news stories. To this end, we introduce a system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. Our clustering approach is part of a larger media monitoring project to solve the problem of monitoring massive text and TV/Radio streams (speech-to-text). In particular, media monitors write intelligence reports about the most relevant events, and being able to search, visualize and explore news clusters assists in gathering more insight about a particular story. Since relevant events may be spawned from any part of the world (and from many multilingual sources), it becomes imperative to cluster news across different languages. In terms of granularity, the type of story clusters we are interested in are the group of articles which, for example : (i) Narrate recent air-strikes in Eastern Ghouta (Syria); (ii) Describe the recent launch of Space X's Falcon Heavy rocket. Problem Formulation We focus on clustering of a stream of documents, where the number of clusters is not fixed and learned automatically. We denote by INLINEFORM0 a (potentially infinite) space of multilingual documents. Each document INLINEFORM1 is associated with a language in which it is written through a function INLINEFORM2 where INLINEFORM3 is a set of languages. For example, INLINEFORM4 could return English, Spanish or German. (In the rest of the paper, for an integer INLINEFORM5 , we denote by INLINEFORM6 the set INLINEFORM7 .) We are interested in associating each document with a monolingual cluster via the function INLINEFORM0 , which returns the cluster label given a document. This is done independently for each language, such that the space of indices we use for each language is separate. Furthermore, we interlace the problem of monolingual clustering with crosslingual clustering. This means that as part of our problem formulation we are also interested in a function INLINEFORM0 that associates each monolingual cluster with a crosslingual cluster, such that each crosslingual cluster only groups one monolingual cluster per different language, at a given time. The crosslingual cluster for a document INLINEFORM1 is INLINEFORM2 . As such, a crosslingual cluster groups together monolingual clusters, at most one for each different language. 
Intuitively, building both monolingual and crosslingual clusters allows the system to leverage high-precision monolingual features (e.g., words, named entities) to cluster documents of the same language, while simplifying the task of crosslingual clustering to the computation of similarity scores across monolingual clusters - which is a smaller problem space, since there are (by definition) less clusters than articles. We validate this choice in § SECREF5 . The Clustering Algorithm Each document INLINEFORM0 is represented by two vectors in INLINEFORM1 and INLINEFORM2 . The first vector exists in a “monolingual space” (of dimensionality INLINEFORM3 ) and is based on a bag-of-words representation of the document. The second vector exists in a “crosslingual space” (of dimensionality INLINEFORM4 ) which is common to all languages. More details about these representations are discussed in § SECREF4 . Document Representation In this section, we give more details about the way we construct the document representations in the monolingual and crosslingual spaces. In particular, we introduce the definition of the similarity functions INLINEFORM0 and INLINEFORM1 that were referred in § SECREF3 . Similarity Metrics Our similarity metric computes weighted cosine similarity on the different subvectors, both in the case of monolingual clustering and crosslingual clustering. Formally, for the monolingual case, the similarity is given by a function defined as: DISPLAYFORM0 and is computed on the TF-IDF subvectors where INLINEFORM0 is the number of subvectors for the relevant document representation. For the crosslingual case, we discuss below the function INLINEFORM1 , which has a similar structure. Here, INLINEFORM0 is the INLINEFORM1 th document in the stream and INLINEFORM2 is a monolingual cluster. The function INLINEFORM3 returns the cosine similarity between the document representation of the INLINEFORM4 th document and the centroid for cluster INLINEFORM5 . The vector INLINEFORM6 denotes the weights through which each of the cosine similarity values for each subvectors are weighted, whereas INLINEFORM7 denotes the weights for the timestamp features, as detailed further. Details on learning the weights INLINEFORM8 and INLINEFORM9 are discussed in § SECREF26 . The function INLINEFORM0 that maps a pair of document and cluster to INLINEFORM1 is defined as follows. Let DISPLAYFORM0 for a given INLINEFORM0 and INLINEFORM1 . For each document INLINEFORM2 and cluster INLINEFORM3 , we generate the following three-dimensional vector INLINEFORM4 : INLINEFORM0 where INLINEFORM1 is the timestamp for document INLINEFORM2 and INLINEFORM3 is the timestamp for the newest document in cluster INLINEFORM4 . INLINEFORM0 where INLINEFORM1 is the average timestamp for all documents in cluster INLINEFORM2 . INLINEFORM0 where INLINEFORM1 is the timestamp for the oldest document in cluster INLINEFORM2 . These three timestamp features model the time aspect of the online stream of news data and help disambiguate clustering decisions, since time is a valuable indicator that a news story has changed, even if a cluster representation has a reasonable match in the textual features with the incoming document. The same way a news story becomes popular and fades over time BIBREF2 , we model the probability of a document belonging to a cluster (in terms of timestamp difference) with a probability distribution. 
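As an illustration of the weighted similarity just described, the sketch below combines per-subvector cosine similarities (weighted by the learned text weights, here called mu) with the three timestamp features computed against the newest, average, and oldest document timestamps of a cluster (weighted by nu). The Gaussian time decay, its sigma of three days, and the dictionary-based cluster representation are assumptions for illustration; the paper only states that the timestamp difference is modelled with a probability distribution.

```python
# Hypothetical sketch of the document-cluster similarity with time features.
import numpy as np

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def time_feature(doc_time, cluster_time, sigma=3 * 24 * 3600.0):
    """Decay with the timestamp gap; the Gaussian form and sigma are assumptions."""
    gap = doc_time - cluster_time
    return float(np.exp(-(gap ** 2) / (2 * sigma ** 2)))

def doc_cluster_similarity(doc_subvecs, doc_time, cluster, mu, nu):
    """Weighted cosine similarities over subvectors plus weighted time features.

    cluster: {"centroids": [np.ndarray, ...], "timestamps": [float, ...]}
    mu, nu:  learned (or unit) weights for the text and time features.
    """
    text_score = sum(w * cosine(v, c)
                     for w, v, c in zip(mu, doc_subvecs, cluster["centroids"]))
    times = cluster["timestamps"]
    time_feats = (time_feature(doc_time, max(times)),               # newest doc
                  time_feature(doc_time, sum(times) / len(times)),  # average
                  time_feature(doc_time, min(times)))               # oldest doc
    return text_score + sum(w * f for w, f in zip(nu, time_feats))
```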
For the case of crosslingual clustering, we introduce INLINEFORM0 , which has a similar definition to INLINEFORM1 , only instead of passing document/cluster similarity feature vectors, we pass cluster/cluster similarities, across all language pairs. Furthermore, the features are the crosslingual embedding vectors of the sections title, body and both combined (similarly to the monolingual case) and the timestamp features. For denoting the cluster timestamp, we use the average timestamps of all articles in it. Learning to Rank Candidates In § SECREF19 we introduced INLINEFORM0 and INLINEFORM1 as the weight vectors for the several document representation features. We experiment with both setting these weights to just 1 ( INLINEFORM2 and INLINEFORM3 ) and also learning these weights using support vector machines (SVMs). To generate the SVM training data, we simulate the execution of the algorithm on a training data partition (which we do not get evaluated on) and in which the gold standard labels are given. We run the algorithm using only the first subvector INLINEFORM4 , which is the TF-IDF vector with the words of the document in the body and title. For each incoming document, we create a collection of positive examples, for the document and the clusters which share at least one document in the gold labeling. We then generate 20 negative examples for the document from the 20 best-matching clusters which are not correct. To find out the best-matching clusters, we rank them according to their similarity to the input document using only the first subvector INLINEFORM5 . Using this scheme we generate a collection of ranking examples (one for each document in the dataset, with the ranking of the best cluster matches), which are then trained using the SVMRank algorithm BIBREF3 . We run 5-fold cross-validation on this data to select the best model, and train both a separate model for each language according to INLINEFORM0 and a crosslingual model according to INLINEFORM1 . Experiments Our system was designed to cluster documents from a (potentially infinite) real-word data stream. The datasets typically used in the literature (TDT, Reuters) have a small number of clusters ( INLINEFORM0 20) with coarse topics (economy, society, etc.), and therefore are not relevant to the use case of media monitoring we treat - as it requires much more fine-grained story clusters about particular events. To evaluate our approach, we adapted a dataset constructed for the different purpose of binary classification of joining cluster pairs. We processed it to become a collection of articles annotated with monolingual and crosslingual cluster labels. Statistics about this dataset are given in Table TABREF30 . As described further, we tune the hyper-parameter INLINEFORM0 on the development set. As for the hyper-parameters related to the timestamp features, we fixed INLINEFORM1 and tuned INLINEFORM2 on the development set, yielding INLINEFORM3 . To compute IDF scores (which are global numbers computed across a corpus), we used a different and much larger dataset that we collected from Deutsche Welle's news website (http://www.dw.com/). The dataset consists of 77,268, 118,045 and 134,243 documents for Spanish, English and German, respectively. 
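The following sketch illustrates how the ranking examples described above could be assembled for a single incoming document: clusters sharing a gold-standard label with the document become positives, and the 20 best-matching remaining clusters (ranked by the first TF-IDF subvector) become negatives. The dictionary-based document and cluster structures and the relevance values are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the SVM-rank training-example generation.
import numpy as np

def tfidf_cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def ranking_examples_for_doc(doc, clusters, n_negatives=20):
    """Build (relevance, cluster_id) pairs for one document, SVM-rank style.

    doc:      {"tfidf": np.ndarray, "gold": gold-standard cluster label}
    clusters: [{"id": int, "centroid": np.ndarray, "gold_labels": set}, ...]
    """
    ranked = sorted(clusters,
                    key=lambda c: tfidf_cosine(doc["tfidf"], c["centroid"]),
                    reverse=True)
    positives = [c for c in ranked if doc["gold"] in c["gold_labels"]]
    negatives = [c for c in ranked
                 if doc["gold"] not in c["gold_labels"]][:n_negatives]
    # Higher relevance value means the cluster should be ranked higher.
    return ([(2, c["id"]) for c in positives] +
            [(1, c["id"]) for c in negatives])
```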
The conclusions from our experiments are: (a) the weighting of the similarity metric features using SVM significantly outperforms unsupervised baselines such as CluStream (Table TABREF35 ); (b) the SVM approach significantly helps to learn when to create a new cluster, compared to simple grid search for the optimal INLINEFORM0 (Table TABREF39 ); (c) separating the feature space into one for monolingual clusters in the form of keywords and the other for crosslingual clusters based on crosslingual embeddings significantly helps performance. Monolingual Results In our first set of experiments, we report results on monolingual clustering for each language separately. Monolingual clustering of a stream of documents is an important problem that has been inspected by others, such as by ahmed2011unified and by aggarwal2006framework. We compare our results to our own implementation of the online micro-clustering routine presented by aggarwal2006framework, which shall be referred to as CluStream. We note that CluStream of aggarwal2006framework has been a widely used state-of-the-art system in media monitoring companies as well as academia, and serves as a strong baseline to this day. In our preliminary experiments, we also evaluated an online latent semantic analysis method, in which the centroids we keep for the function INLINEFORM0 (see § SECREF3 ) are the average of reduced dimensional vectors of the incoming documents as generated by an incremental singular value decomposition (SVD) of a document-term matrix that is updated after each incoming document. However, we discovered that online LSA performs significantly worse than representing the documents the way is described in § SECREF4 . Furthermore, it was also significantly slower than our algorithm due to the time it took to perform singular value decomposition. Table TABREF35 gives the final monolingual results on the three datasets. For English, we see that the significant improvement we get using our algorithm over the algorithm of aggarwal2006framework is due to an increased recall score. We also note that the trained models surpass the baseline for all languages, and that the timestamp feature (denoted by TS), while not required to beat the baseline, has a very relevant contribution in all cases. Although the results for both the baseline and our models seem to differ across languages, one can verify a consistent improvement from the latter to the former, suggesting that the score differences should be mostly tied to the different difficulty found across the datasets for each language. The presented scores show that our learning framework generalizes well to different languages and enables high quality clustering results. To investigate the impact of the timestamp features, we ran an additional experiment using only the same three timestamp features as used in the best model on the English dataset. This experiment yielded scores of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 , which lead us to conclude that while these features are not competitive when used alone (hence temporal information by itself is not sufficient to predict the clusters), they contribute significantly to recall with the final feature ensemble. We note that as described in § SECREF3 , the optimization of the INLINEFORM0 parameter is part of the development process. The parameter INLINEFORM1 is a similarity threshold used to decide when an incoming document should merge to the best cluster or create a new one. 
We tune INLINEFORM2 on the development set for each language, and the sensitivity to it is demonstrated in Figure FIGREF36 (this process is further referred to as INLINEFORM3 ). Although applying grid-search on this parameter is the most immediate approach to this problem, we experimented with a different method which yielded superior results: as described further, we discuss how to do this process with an additional classifier (denoted SVM-merge), which captures more information about the incoming documents and the existing clusters. Additionally, we also experimented with computing the monolingual clusters with the same embeddings as used in the crosslingual clustering phase, which yielded poor results. In particular, this system achieved INLINEFORM0 score of INLINEFORM1 for English, which is below the bag-of-words baseline presented in Table TABREF35 . This result supports the approach we then followed of having two separate feature spaces for the monolingual and crosslingual clustering systems, where the monolingual space is discrete and the crosslingual space is based on embeddings. To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . We note that adding features increases the accuracy of the SVM ranker, especially the timestamp features. However, the timestamp feature actually interferes with our optimization of INLINEFORM0 to identify when new clusters are needed, although they improve the SVM reranking accuracy. We speculate this is true because high accuracy in the reranking problem does not necessarily help with identifying when new clusters need to be opened. To investigate this issue, we experimented with a different technique to learn when to create a new cluster. To this end, we trained another SVM classifier just to learn this decision, this time a binary classifier using LIBLINEAR BIBREF4 , by passing the max of the similarity of each feature between the incoming document and the current clustering pool as the input feature vector. This way, the classifier learns when the current clusters, as a whole, are of a different news story than the incoming document. As presented in Table TABREF39 , this method, which we refer to as SVM-merge, solved the issue of searching for the optimal INLINEFORM0 parameter for the SVM-rank model with timestamps, by greatly improving the F INLINEFORM1 score in respect to the original grid-search approach ( INLINEFORM2 ). Crosslingual Results As mentioned in § SECREF3 , crosslingual embeddings are used for crosslingual clustering. We experimented with the crosslingual embeddings of gardner2015translation and ammar2016massively. In our preliminary experiments we found that the former worked better for our use-case than the latter. We test two different scenarios for optimizing the similarity threshold INLINEFORM0 for the crosslingual case. Table TABREF41 shows the results for these experiments. First, we consider the simpler case of adjusting a global INLINEFORM1 parameter for the crosslingual distances, as also described for the monolingual case. As shown, this method works poorly, since the INLINEFORM2 grid-search could not find a reasonable INLINEFORM3 which worked well for every possible language pair. Subsequently, we also consider the case of using English as a pivot language (see § SECREF3 ), where distances for every other language are only compared to English, and crosslingual clustering decisions are made only based on this distance. 
This yielded our best crosslingual score of INLINEFORM0 , confirming that crosslingual similarity is of higher quality between each language and English, for the embeddings we used. This score represents only a small degradation in respect to the monolingual results, since clustering across different languages is a harder problem. Related Work Early research efforts, such as the TDT program BIBREF5 , have studied news clustering for some time. The problem of online monolingual clustering algorithms (for English) has also received a fair amount of attention in the literature. One of the earlier papers by aggarwal2006framework introduced a two-step clustering system with both offline and online components, where the online model is based on a streaming implementation of INLINEFORM0 -means and a bag-of-words document representation. Other authors have experimented with distributed representations, such as ahmed2011unified, who cluster news into storylines using Markov chain Monte Carlo methods, rehureklrec who used incremental Singular Value Decomposition (SVD) to find relevant topics from streaming data, and sato2017distributed who used the paragraph vector model BIBREF6 in an offline clustering setting. More recently, crosslingual linking of clusters has been discussed by rupnik2016news in the context of linking existing clusters from the Event Registry BIBREF7 in a batch fashion, and by steinberger2016mediagist who also present a batch clustering linking system. However, these are not “truly” online crosslingual clustering systems since they only decide on the linking of already-built monolingual clusters. In particular, rupnik2016news compute distances of document pairs across clusters using nearest neighbors, which might not scale well in an online setting. As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach. Preliminary work makes use of deep learning techniques BIBREF8 , BIBREF9 to cluster documents while learning their representations, but not in an online or multilingual fashion, and with a very small number of cluster labels (4, in the case of the text benchmark). In our work, we studied the problem of monolingual and crosslingual clustering, having experimented several directions and methods and the impact they have on the final clustering quality. We described the first system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. Conclusion We described a method for monolingual and crosslingual clustering of an incoming stream of documents. The method works by maintaining centroids for the monolingual and crosslingual clusters, where a monolingual cluster groups a set of documents and a crosslingual cluster groups a set of monolingual clusters. We presented an online crosslingual clustering method which auto-corrects past decisions in an efficient way. We showed that our method gives state-of-the-art results on a multilingual news article dataset for English, Spanish and German. Finally, we discussed how to leverage different SVM training procedures for ranking and classification to improve monolingual and crosslingual clustering decisions. Our system is integrated in a larger media monitoring project BIBREF10 , BIBREF11 and solving the use-cases of monitors and journalists, having been validated with qualitative user testing. 
Acknowledgments We would like to thank Esma Balkır, Nikos Papasarantopoulos, Afonso Mendes, Shashi Narayan and the anonymous reviewers for their feedback. This project was supported by the European H2020 project SUMMA, grant agreement 688139 (see http://www.summa-project.eu) and by a grant from Bloomberg.
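As a compact summary of the online decision rule described in this paper, the sketch below shows how an incoming document is either merged into its best-scoring cluster or used to start a new one, governed by the threshold tau (the SVM-merge classifier discussed earlier can replace this threshold test). The running-mean centroid update and the data structures are assumptions made for illustration.

```python
# Hypothetical sketch of the online merge-or-create decision.
def assign_document(doc, clusters, similarity, tau):
    """Merge doc into its best cluster if similarity >= tau, else open a new one.

    clusters: list of {"docs": [...], "centroid": np.ndarray}; `similarity` is
    the (learned) document-cluster scoring function; `tau` is the threshold.
    """
    if clusters:
        best = max(clusters, key=lambda c: similarity(doc, c))
        if similarity(doc, best) >= tau:
            best["docs"].append(doc)
            n = len(best["docs"])
            # Running-mean centroid update (an assumption for this sketch).
            best["centroid"] += (doc["tfidf"] - best["centroid"]) / n
            return best
    new_cluster = {"docs": [doc], "centroid": doc["tfidf"].astype(float)}
    clusters.append(new_cluster)
    return new_cluster
```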
[ "No", "No" ]
3,158
qasper_e
en
null
a98bc410d113f968de393f3da14435b31b34a8ce2ccec169
Why are prior knowledge distillation techniques ineffective at producing student models with vocabularies different from those of the original teacher models?
Introduction Recently, contextual-aware language models such as ELMo BIBREF0, GPT BIBREF1, BERT BIBREF2 and XLNet BIBREF3 have shown to greatly outperform traditional word embedding models including Word2Vec BIBREF4 and GloVe BIBREF5 in a variety of NLP tasks. These pre-trained language models, when fine-tuned on downstream language understanding tasks such as sentiment classification BIBREF6, natural language inference BIBREF7 and reading comprehension BIBREF8, BIBREF9, have achieved state-of-the-art performance. However, the large number of parameters in these models, often above hundreds of millions, makes it impossible to host them on resource-constrained tasks such as doing real-time inference on mobile and edge devices. Besides utilizing model quantization techniques BIBREF10, BIBREF11 which aim to reduce the floating-point accuracy of the parameters, significant recent research has focused on knowledge distillation BIBREF12, BIBREF13 techniques. Here, the goal is to train a small-footprint student model by borrowing knowledge, such as through a soft predicted label distribution, from a larger pre-trained teacher model. However, a significant bottleneck that has been overlooked by previous efforts is the input vocabulary size and its corresponding word embedding matrix, often accounting for a significant proportion of all model parameters. For instance, the embedding table of the BERTBASE model, comprising over 30K WordPiece tokens BIBREF14, accounts for over $21\%$ of the model size. While there has been existing work on reducing NLP model vocabulary sizes BIBREF15, distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes. We present two novel ideas to improve the effectiveness of knowledge distillation, in particular for BERT, with the focus on bringing down model sizes to as much as a few mega-bytes. Our model is among the first to propose to use a significantly smaller vocabulary for the student model learned during distillation. In addition, instead of distilling solely on the teacher model's final-layer outputs, our model leverages layer-wise teacher model parameters to directly optimize the parameters of the corresponding layers in the student model. Specifically, our contributions are: [leftmargin=*] Dual Training: Our teacher and student models have different vocabularies and incompatible tokenizations for the same sequence. To address this during distillation, we feed the teacher model a mix of teacher vocabulary-tokenized and student vocabulary-tokenized words within a single sequence. Coupled with the masked language modeling task, this encourages an implicit alignment of the teacher and student WordPiece embeddings, since the student vocabulary embedding may be used as context to predict a word tokenized by the teacher vocabulary and vice versa. Shared Variable Projections: To minimize the loss of information from reducing the hidden state dimension, we introduce a separate loss to align the teacher and student models' trainable variables. This allows for more direct layer-wise transfer of knowledge to the student model. Using the combination of dual training and shared variable projections, we train a 12-layer highly-compressed student BERT model, achieving a maximum compression ratio of $\sim $61.94x (with 48 dimension size) compared to the teacher BERTBASE model. 
We conduct experiments for measuring both generalized language modeling perspective and for downstream tasks, demonstrating competitive performance with high compression ratios for both families of tasks. Related Work Research in neural network model compression has been concomitant with the rise in popularity of neural networks themselves, since these models have often been memory-intensive for the hardware of their time. Work in model compression for NLP applications falls broadly into four categories: matrix approximation, parameter pruning/sharing, weight quantization and knowledge distillation. A family of approaches BIBREF16, BIBREF17 seeks to compress the matrix parameters of the models by low-rank approximation i.e. the full-rank matrix parameter is approximated using multiple low-rank matrices, thereby reducing the effective number of model parameters. Another line of work explores parameter pruning and sharing-based methods BIBREF18, BIBREF19, BIBREF20, BIBREF21, which explore the redundancy in model parameters and try to remove redundant weights as well as neurons, for a variety of neural network architectures. Model weight quantization techniques BIBREF22, BIBREF23, BIBREF24, BIBREF25 focus on mapping model weights to lower-precision integers and floating-point numbers. These can be especially effective with hardware supporting efficient low-precision calculations. More recently, BIBREF26 apply quantization to BERT-based transformer models. Knowledge distillation BIBREF12, BIBREF13 differs from the other discussed approaches: the smaller student model may be parametrized differently from the bigger teacher model, affording more modeling freedom. Teaching a student model to match the soft output label distributions from a larger model alongside the hard ground-truth distribution works well for many tasks, such as machine translation BIBREF27 and language modeling BIBREF28. Not limited to the teacher model outputs, some approaches perform knowledge distillation via attention transfer BIBREF29, or via feature maps or intermediate model outputs BIBREF30, BIBREF31, BIBREF32. More relevant to current work, BIBREF33 and BIBREF34 employ variants of these techniques to BERT model compression by reducing the number of transformer layers. However, as explained before, these approaches are not immediately applicable to our setting due to incompatible teacher and student model vocabularies, and do not focus sufficiently on the embedding matrix size BIBREF35. Methodology Our knowledge distillation approach is centered around reducing the number of WordPiece tokens in the model vocabulary. In this section, we first discuss the rationale behind this reduction and the challenges it introduces, followed by our techniques, namely dual training and shared projection. Methodology ::: Optimal Subword Embeddings via Knowledge Distillation We follow the general knowledge distillation paradigm of training a small student model from a large teacher model. Our teacher model is a 12-layer uncased BERTBASE, trained with 30522 WordPiece tokens and 768-dimensional embeddings and hidden states. We denote the teacher model parameters by $\theta _{t}$. Our student model consists of an equal number of transformer layers with parameters denoted by $\theta _{s}$, but with a smaller vocabulary as well as embedding/hidden dimensions, illustrated in Figure FIGREF4. Using the same WordPiece algorithm and training corpus as BERT, we obtain a vocabulary of 4928 WordPieces, which we use for the student model. 
WordPiece tokens BIBREF14 are sub-word units obtained by applying a greedy segmentation algorithm to the training corpus: a desired number (say, $D$) of WordPieces are chosen such that the segmented corpus is minimal in the number of WordPieces used. A cursory look at both vocabularies reveals that $93.9\%$ of the WordPieces in the student vocabulary also exist in the teacher vocabulary, suggesting room for a reduction in the WordPiece vocabulary size from 30K tokens. Since we seek to train a general-purpose student language model, we elect to reuse the teacher model's original training objective to optimize the student model, i.e., masked language modeling and next sentence prediction, before any fine-tuning. In the former task, words in context are randomly masked, and the language model needs to predict those words given the masked context. In the latter task, given a pair of sentences, the language model predicts whether the pair is consistent. However, since the student vocabulary is not a complete subset of the teacher vocabulary, the two vocabularies may tokenize the same words differently. As a result, the outputs of the teacher and student model for the masked language modeling task may not align. Even with the high overlap between the two vocabularies, the need to train the student embedding from scratch, and the change in embedding dimension precludes existing knowledge distillation techniques, which rely on the alignment of both models' output spaces. As a result, we explore two alternative approaches that enable implicit transfer of knowledge to the student model, which we describe below. Methodology ::: Dual Training During distillation, for a given training sequence input to the teacher model, we propose to mix the teacher and student vocabularies by randomly selecting (with a probability $p_{DT}$, a hyperparameter) tokens from the sequence to segment using the student vocabulary, with the other tokens segmented using the teacher vocabulary. As illustrated in Figure FIGREF4, given the input context [ `I', `like', `machine', `learning'], the words `I' and `machine' are segmented using the teacher vocabulary (in green), while `like' and `learning' are segmented using the student vocabulary (in blue). Similar to cross-lingual training in BIBREF36, this encourages alignment of the representations for the same word as per the teacher and student vocabularies. This is effected through the masked language modeling task: the model now needs to learn to predict words from the student vocabulary using context words segmented using the teacher vocabulary, and vice versa. The expectation is that the student embeddings can be learned effectively this way from the teacher embeddings as well as model parameters $\theta _{t}$. Note that we perform dual training only for the teacher model inputs: the student model receives words segmented exclusively using the student vocabulary. Also, during masked language modeling, the model uses different softmax layers for the teacher and the student vocabularies depending on which one was used to segment the word in question. Methodology ::: Shared Projections Relying solely on teacher model outputs to train the student model may not generalize well BIBREF34. Therefore, some approaches utilize and try to align the student model's intermediate predictions to those of the teacher BIBREF30. In our setting, however, since the student and teacher model output spaces are not identical, intermediate model outputs may prove hard to align. 
Instead, we seek to directly minimize the loss of information from the teacher model parameters $\theta _{t}$ to the student parameters $\theta _{s}$ with smaller dimensions. We achieve this by projecting the model parameters into the same space, to encourage alignment. More specifically, as in Figure FIGREF4, we project each trainable variable in $\theta _{t}$ to the same shape as the corresponding variable in $\theta _{s}$. For example, for all the trainable variables $\theta _{t}$ with shape $768\times 768$, we learn two projection matrices $\mathbf {U}\in \mathbb {R}^{d\times 768}$ and $\mathbf {V}\in \mathbb {R}^{768\times d}$ to project them into the corresponding space of the student model variable $\theta _{t}^\prime $, where $d$ is the student model's hidden dimension. $\mathbf {U}$ and $\mathbf {V}$ are common to all BERT model parameters of that dimensionality; in addition, $\mathbf {U}$ and $\mathbf {V}$ are not needed for fine-tuning or inference after distillation. In order to align the student variable and the teacher variable's projection, we introduce a separate mean square error loss defined in Equation DISPLAY_FORM7, where $\downarrow $ stands for down projection (since the projection is to a lower dimension). The above loss function aligns the trainable variables in the student space. Alternatively, we can project trainable variables in $\theta _{s}$ to the same shape as in $\theta _{t}$. This way, the loss function in Equation DISPLAY_FORM8, ($\uparrow $ denotes up projection) can compare the trainable variables in the teacher space. Methodology ::: Optimization Objective Our final loss function includes, in addition to an optional projection loss, masked language modeling cross-entropy losses for the student as well as the teacher models, since the teacher model is trained with dual-vocabulary inputs and is not static. $P(y_i=c|\theta _{s})$ and $P(y_i=c|\theta _{t})$ denote the student and teacher model prediction probabilities for class c respectively, and $\mathbb {1}$ denotes an indicator function. Equations DISPLAY_FORM10 and below define the final loss $L_{final}$, where $\epsilon $ is a hyperparameter. Experiments To evaluate our knowledge distillation approach, we design two classes of experiments. First, we evaluate the distilled student language models using the masked word prediction task on an unseen evaluation corpus, for an explicit evaluation of the language model. Second, we fine-tune the language model by adding a task-specific affine layer on top of the student language model outputs, on a suite of downstream sentence and sentence pair classification tasks. This is meant to be an implicit evaluation of the quality of the representations learned by the student language model. We describe these experiments, along with details on training, implementation and our baselines below. Experiments ::: Language Model Training During the distillation of the teacher BERT model to train the student BERT language model, we utilize the same corpus as was used to train the teacher i.e. BooksCorpus BIBREF37 and English Wikipedia, with whitespaces used tokenize the text into words. We only use the masked language modeling task to calculate the overall distillation loss from Section SECREF6, since the next sentence prediction loss hurt performance slightly. Dual training is enabled for teacher model inputs, with $p_{DT}$, the probability of segmenting a teacher model input word using the student vocabulary, set to 0.5. 
For experiments including shared projection, the projection matrices $\mathbf {U}$ and $\mathbf {V}$ utilized Xavier initialization BIBREF38. The loss weight coefficient $\epsilon $ is set to 1 after tuning. It is worth noting that in contrast to a number of existing approaches, we directly distill the teacher BERT language model, not yet fine-tuned on a downstream task, to obtain a student language model that is task-agnostic. For downstream tasks, we fine-tune this distilled student language model. Distillation is carried out on Cloud TPUs in a 4x4 pod configuration (32 TPU cores overall). We optimized the loss using LAMB BIBREF39 for 250K steps, with a learning rate of 0.00125 and batch size of 4096. Depending on the student model dimension, training took between 2-4 days. Experiments ::: Models and Baselines We evaluate three variants of our distilled student models: with only dual training of the teacher and student vocabularies (DualTrain) and with dual training along with down-projection (DualTrain + SharedProjDown) or up-projection (DualTrain + SharedProjUp) of the teacher model parameters. For each of these configurations, we train student models with embedding and hidden dimensions 48, 96 and 192, for 9 total variants, each using a compact 5K-WordPiece vocabulary. Table TABREF14 presents some statistics on these models' sizes: our smallest model contains two orders of magnitude fewer parameters, and requires only 1% floating-point operations when compared to the BERTBASE model. For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states. Experiments ::: Evaluation Tasks and Datasets For explicit evaluation of the generalized language perspective of the distilled student language models, we use the Reddit dataset BIBREF40 to measure word mask prediction accuracy of the student models, since the language used on Reddit is different from that in the training corpora. The dataset is preprocessed similarly to the training corpora, except we do not need to tokenize it using the teacher vocabulary, since we only run and evaluate the student models. For implicit evaluation on downstream language understanding tasks, we fine-tune and evaluate the distilled student models on three tasks from the GLUE benchmark BIBREF41: [leftmargin=*,topsep=0pt] Stanford Sentiment Treebank (SST-2) BIBREF6, a two-way sentence sentiment classification task with 67K training instances, Microsoft Research Paraphrase Corpus (MRPC) BIBREF42, a two-way sentence pair classification task to identify paraphrases, with 3.7K training instances, and Multi-Genre Natural Language Inference (MNLI) BIBREF7, a three-way sentence pair classification task with 393K training instances, to identify premise-hypothesis relations. There are separate development and test sets for genre-matched and genre-mismatched premise-hypothesis pairs; we tune our models solely on the genre-matched development set. For all downstream task evaluations, we fine-tune for 10 epochs using LAMB with a learning rate of 0.0002 and batch size of 32. 
Since our language models are trained with a maximum sequence length of 128 tokens, we do not evaluate on reading comprehension datasets such as SQuAD BIBREF8 or RACE BIBREF9, which require models supporting longer sequences. Results Table TABREF20 contains masked word prediction accuracy figures for the different models and the NoKD baseline. We observe that dual training significantly improves over the baseline for all model dimensions, and that both shared projection losses added to dual training further improve the word prediction accuracy. It is interesting to note that for all model dimensions, SharedProjUp projecting into the teacher space outperforms SharedProjDown, significantly so for dimension 48. Expectedly, there is a noticeable performance drop going from 192 to 96 to 48-dimensional hidden state models. Note that because of the differing teacher and student model vocabularies, masked word prediction accuracy for the teacher BERTBASE model is not directly comparable with the student models. Table TABREF21 shows results on the downstream language understanding tasks, as well as model sizes, for our approaches, the BERTBASE teacher model, and the PKD and NoKD baselines. We note that models trained with our proposed approaches perform strongly and consistently improve upon the identically parametrized NoKD baselines, indicating that the dual training and shared projection techniques are effective, without incurring significant losses against the BERTBASE teacher model. Comparing with the PKD baseline, our 192-dimensional models, achieving a higher compression rate than either of the PKD models, perform better than the 3-layer PKD baseline and are competitive with the larger 6-layer baseline on task accuracy while being nearly 5 times as small. Another observation we make is that the performance drop from 192-dimensional to 96-dimensional models is minimal (less than 2% for most tasks). For the MRPC task, in fact, the 96-dimensional model trained with dual training achieves an accuracy of 80.5%, which is higher than even the PKD 6-layer baseline with nearly 12 times as many parameters. Finally, our highly-compressed 48-dimensional models also perform respectably: the best 48-dimensional models are in a similar performance bracket as the 3-layer PKD model, a model 25 times larger by memory footprint. Discussion Shared projections and model performance: We see that for downstream task performance, dual training still consistently improves upon the direct fine-tuning approach for virtually all experiments. The effect of shared variable projection, however, is less pronounced, with consistent improvements visible only for MRPC and for the 48-dimensional models i.e. the smallest dataset and models respectively in our experiments. This aligns with our intuition for variable projection as a more direct way to provide a training signal from the teacher model internals, which can assume more importance for a low-data or small-model scenario. However, for larger models and more data, the linear projection of parameters may be reducing the degrees of freedom available to the model, since linear projection is a fairly simple function to align the teacher and student parameter spaces. A related comparison of interest is between up-projection and down-projection of the model variables: we note up-projection does visibly better on the language modeling task and slightly better on the downstream tasks. 
The parameters of a well-trained teacher model represent a high-quality local minimum in the teacher space, which may be easier to search for during up-projection. Vocabulary size tradeoffs: Issues with input vocabulary size are peculiar to problems in natural language processing: they do not always apply to other areas such as computer vision, where a small fixed number of symbols can encode most inputs. There has been some work on reducing input vocabulary sizes for NLP, but typically not targeting model compression. One concern with reducing the vocabularies of NLP models is it pushes the average tokenized sequence lengths up, making model training harder. In this work, however, we consider classification tasks on shorter texts, which are not as affected by input sequence lengths as, say, tasks such as machine translation are. Furthermore, many real-world applications revolve around short text inputs, which is why a better trade-off between vocabulary size and sequence lengths may be worthwhile for such applications. Order of distillation and fine-tuning: Most of the existing work on distilling language models such as BERT and reporting results on downstream tasks, including some of the baselines in this work, first fine-tune a teacher model on the downstream tasks, and then distill this model. Our goal in this work, however, is to explore the limits to which BERT's language modeling capacity itself, and how much of it is driven by its large WordPiece vocabulary. We leave experiments on distilling fine-tuned teacher models, potentially yielding better results on downstream tasks, to future work. Conclusion We proposed two novel ideas to improve the effectiveness of knowledge distillation for BERT, focusing on using a significantly smaller vocabulary, as well as smaller embedding and hidden dimensions for the student BERT language models. Our dual training mechanism encourages implicit alignment of the teacher and student WordPiece embeddings, and shared variable projection allows for the faster and direct layer-wise transfer of knowledge to the student BERT model. Combining the two techniques, we trained a series of highly-compressed 12-layer student BERT models. Experiments on these models, to evaluate both generalized language perspective and four standardized downstream tasks, demonstrate the effectiveness of our proposed methods on both model accuracy and compression efficiency. One future direction of interest is to combine our approach with existing work to reduce the number of layers in the student models and explore other approaches such as low-rank matrix factorization to transfer model parameters from the teacher space to the student space. In addition, taking into account the frequency distribution of the WordPiece tokens while training embeddings may help optimize the model size further.
[ "While there has been existing work on reducing NLP model vocabulary sizes BIBREF15, distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes.", "distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes." ]
3,570
qasper_e
en
null
86d091f9a2a36e97ae2f587ff9e1beb3e41c34862d70451e
What baseline method is used?
Introduction Sentiment analysis has recently been one of the hottest topics in natural language processing (NLP). It is used to identify and categorise opinions expressed by reviewers on a topic or an entity. Sentiment analysis can be leveraged in marketing, social media analysis, and customer service. Although many studies have been conducted for sentiment analysis in widely spoken languages, this topic is still immature for Turkish and many other languages. Neural networks outperform the conventional machine learning algorithms in most classification tasks, including sentiment analysis BIBREF0. In these networks, word embedding vectors are fed as input to overcome the data sparsity problem and make the representations of words more “meaningful” and robust. Those embeddings indicate how close the words are to each other in the vector space model (VSM). Most of the studies utilise embeddings, such as word2vec BIBREF1, which take into account the syntactic and semantic representations of the words only. Discarding the sentimental aspects of words may lead to words of different polarities being close to each other in the VSM, if they share similar semantic and syntactic features. For Turkish, there are only a few studies which leverage sentimental information in generating the word and document embeddings. Unlike the studies conducted for English and other widely-spoken languages, in this paper, we use the official dictionaries for this language and combine the unsupervised and supervised scores to generate a unified score for each dimension of the word embeddings in this task. Our main contribution is to create original and effective word vectors that capture syntactic, semantic, and sentimental characteristics of words, and use all of this knowledge in generating embeddings. We also utilise the word2vec embeddings trained on a large corpus. Besides using these word embeddings, we also generate hand-crafted features on a review basis and create document vectors. We evaluate those embeddings on two datasets. The results show that we outperform the approaches which do not take into account the sentimental information. We also had better performances than other studies carried out on sentiment analysis in Turkish media. We also evaluated our novel embedding approaches on two English corpora of different genres. We outperformed the baseline approaches for this language as well. The source code and datasets are publicly available. The paper is organised as follows. In Section 2, we present the existing works on sentiment classification. In Section 3, we describe the methods proposed in this work. The experimental results are shown and the main contributions of our proposed approach are discussed in Section 4. In Section 5, we conclude the paper. Related Work In the literature, the main consensus is that the use of dense word embeddings outperform the sparse embeddings in many tasks. Latent semantic analysis (LSA) used to be the most popular method in generating word embeddings before the invention of the word2vec and other word vector algorithms which are mostly created by shallow neural network models. Although many studies have been employed on generating word vectors including both semantic and sentimental components, generating and analysing the effects of different types of embeddings on different tasks is an emerging field for Turkish. Latent Dirichlet allocation (LDA) is used in BIBREF2 to extract mixture of latent topics. 
However, it focusses on finding the latent topics of a document, not the word meanings themselves. In BIBREF3, LSA is utilised to generate word vectors, leveraging indirect co-occurrence statistics. These outperform the use of sparse vectors BIBREF4. Some of the prior studies have also taken into account the sentimental characteristics of a word when creating word vectors BIBREF5, BIBREF6, BIBREF7. A model with semantic and sentiment components is built in BIBREF8, making use of star-ratings of reviews. In BIBREF9, a sentiment lexicon is induced preferring the use of domain-specific co-occurrence statistics over the word2vec method and they outperform the latter. In a recent work on sentiment analysis in Turkish BIBREF10, they learn embeddings using Turkish social media. They use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach. We outperform this baseline approach using more effective word embeddings and supervised hand-crafted features. In English, much of the recent work on learning sentiment-specific embeddings relies only on distant supervision. In BIBREF11, emojis are used as features and a bi-LSTM (bidirectional long short-term memory) neural network model is built to learn sentiment-aware word embeddings. In BIBREF12, a neural network that learns word embeddings is built by using contextual information about the data and supervised scores of the words. This work captures the supervised information by utilising emoticons as features. Most of our approaches do not rely on a neural network model in learning embeddings. However, they produce state-of-the-art results. Methodology We generate several word vectors, which capture the sentimental, lexical, and contextual characteristics of words. In addition to these mostly original vectors, we also create word2vec embeddings to represent the corpus words by training the embedding model on these datasets. After generating these, we combine them with hand-crafted features to create document vectors and perform classification, as will be explained in Section 3.5. Methodology ::: Corpus-based Approach Contextual information is informative in the sense that, in general, similar words tend to appear in the same contexts. For example, the word smart is more likely to cooccur with the word hardworking than with the word lazy. This similarity can be defined semantically and sentimentally. In the corpus-based approach, we capture both of these characteristics and generate word embeddings specific to a domain. Firstly, we construct a matrix whose entries correspond to the number of cooccurrences of the row and column words in sliding windows. Diagonal entries are assigned the number of sliding windows that the corresponding row word appears in the whole corpus. We then normalise each row by dividing entries in the row by the maximum score in it. Secondly, we perform the principal component analysis (PCA) method to reduce the dimensionality. It captures latent meanings and takes into account high-order cooccurrence removing noise. The attribute (column) number of the matrix is reduced to 200. We then compute cosine similarity between each row pair $w_i$ and $w_j$ as in (DISPLAY_FORM3) to find out how similar two word vectors (rows) are. Thirdly, all the values in the matrix are subtracted from 1 to create a dissimilarity matrix. Then, we feed the matrix as input into the fuzzy c-means clustering algorithm. 
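The corpus-based pipeline described above (sliding-window co-occurrence counts, row normalization by the row maximum, PCA, cosine dissimilarity, and fuzzy c-means memberships) can be sketched as follows. This is a minimal illustration, assuming tokenized sentences and a fixed vocabulary; the window size and the scikit-fuzzy call in the comment are assumptions, not the paper's exact code.

```python
# Hedged sketch: co-occurrence counts -> row normalization -> PCA -> cosine
# dissimilarity, ready to be fed to a fuzzy c-means implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

def cooccurrence_matrix(sentences, vocab, window=5):
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for tokens in sentences:
        for start in range(max(len(tokens) - window + 1, 1)):
            win = [t for t in tokens[start:start + window] if t in idx]
            for w in set(win):                      # diagonal: number of windows containing w
                M[idx[w], idx[w]] += 1
            for i, a in enumerate(win):             # off-diagonal: co-occurrences inside the window
                for b in win[i + 1:]:
                    if a != b:
                        M[idx[a], idx[b]] += 1
                        M[idx[b], idx[a]] += 1
    return M

def corpus_based_dissimilarity(M, n_components=200):
    M = M / np.maximum(M.max(axis=1, keepdims=True), 1e-12)   # normalize each row by its maximum
    reduced = PCA(n_components=min(n_components, M.shape[0])).fit_transform(M)
    dissim = 1.0 - cosine_similarity(reduced)                  # dissimilarity matrix
    # This matrix is then clustered with fuzzy c-means, e.g. with scikit-fuzzy
    # (signature assumed; check your installed version):
    #   import skfuzzy as fuzz
    #   _, u, *_ = fuzz.cmeans(dissim.T, c=200, m=2.0, error=1e-4, maxiter=25)
    #   word_vectors = u.T   # one membership vector per word
    return dissim
```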
We chose the number of clusters as 200, as it is considered a standard for word embeddings in the literature. After clustering, the dimension i for a corresponding word indicates the degree to which this word belongs to cluster i. The intuition behind this idea is that if two words are similar in the VSM, they are more likely to belong to the same clusters with akin probabilities. In the end, each word in the corpus is represented by a 200-dimensional vector. In addition to this method, we also perform singular value decomposition (SVD) on the cooccurrence matrices, where we compute the matrix $M^{PPMI} = U\Sigma V^{T}$. Positive pointwise mutual information (PPMI) scores between words are calculated and the truncated singular value decomposition is computed. We take into account the U matrix only for each word. We have chosen the singular value number as 200. That is, each word in the corpus is represented by a 200-dimensional vector as follows. Methodology ::: Dictionary-based Approach In Turkish, there do not exist well-established sentiment lexicons as in English. In this approach, we made use of the TDK (Türk Dil Kurumu - “Turkish Language Institution”) dictionary to obtain word polarities. Although it is not a sentiment lexicon, combining it with domain-specific polarity scores obtained from the corpus led us to have state-of-the-art results. We first construct a matrix whose row entries are corpus words and column entries are the words in their dictionary definitions. We followed the boolean approach. For instance, for the word cat, the column words occurring in its dictionary definition are given a score of 1. Those column words not appearing in the definition of cat are assigned a score of 0 for that corresponding row entry. When we performed clustering on this matrix, we observed that those words having similar meanings are, in general, assigned to the same clusters. However, this similarity fails in capturing the sentimental characteristics. For instance, the words happy and unhappy are assigned to the same cluster, since they have the same words, such as feeling, in their dictionary definitions. However, they are of opposite polarities and should be discerned from each other. Therefore, we utilise a metric to move such words away from each other in the VSM, even though they have common words in their dictionary definitions. We multiply each value in a row with the corresponding row word's raw supervised score, thereby having more meaningful clusters. Using the training data only, the supervised polarity score per word is calculated as in (DISPLAY_FORM4). Here, $ w_{t}$ denotes the sentiment score of word $t$, $N_{t}$ is the number of documents (reviews or tweets) in which $t$ occurs in the dataset of positive polarity, $N$ is the number of all the words in the corpus of positive polarity. $N^{\prime }$ denotes the corpus of negative polarity. $N^{\prime }_{t}$ and $N^{\prime }$ denote similar values for the negative polarity corpus. We perform normalisation to prevent the imbalance problem and add a small number to both numerator and denominator for smoothing. As an alternative to multiplying with the supervised polarity scores, we also separately multiplied all the row scores with only +1 if the row word is a positive word, and with -1 if it is a negative word. We have observed it boosts the performance more compared to using raw scores. The effect of this multiplication is exemplified in Figure FIGREF7, showing the positions of word vectors in the VSM. 
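Two of the building blocks just described can be sketched compactly: the PPMI plus truncated-SVD word vectors, and the smoothed supervised polarity score. The exact published formula (DISPLAY_FORM4) is not reproduced in the text above, so the log-ratio with additive smoothing below is an assumption consistent with the verbal description.

```python
# Hedged sketch: (i) PPMI + truncated SVD vectors from a co-occurrence matrix,
# (ii) a smoothed supervised polarity score; delta is an illustrative constant.
import numpy as np
from sklearn.decomposition import TruncatedSVD

def ppmi_svd_vectors(cooc, dim=200):
    total = cooc.sum()
    row = cooc.sum(axis=1, keepdims=True)
    col = cooc.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((cooc * total) / (row * col))
    ppmi = np.maximum(pmi, 0.0)
    ppmi[~np.isfinite(ppmi)] = 0.0
    svd = TruncatedSVD(n_components=min(dim, ppmi.shape[1] - 1))
    # fit_transform returns U*Sigma; divide by svd.singular_values_ to keep plain U,
    # which is what the description above retains.
    return svd.fit_transform(ppmi)

def polarity_score(word, pos_docs, neg_docs, delta=0.5):
    """pos_docs / neg_docs: lists of token lists from the positive / negative corpus."""
    n_t = sum(word in doc for doc in pos_docs)            # positive documents containing the word
    n_prime_t = sum(word in doc for doc in neg_docs)      # negative documents containing the word
    n = sum(len(doc) for doc in pos_docs)                 # all words in the positive corpus
    n_prime = sum(len(doc) for doc in neg_docs)           # all words in the negative corpus
    return np.log((n_t / n + delta) / (n_prime_t / n_prime + delta))
```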
Those “x" words are sentimentally negative words, those “o" words are sentimentally positive ones. On the top coordinate plane, the words of opposite polarities are found to be close to each other, since they have common words in their dictionary definitions. Only the information concerned with the dictionary definitions are used there, discarding the polarity scores. However, when we utilise the supervised score (+1 or -1), words of opposite polarities (e.g. “happy" and “unhappy") get far away from each other as they are translated across coordinate regions. Positive words now appear in quadrant 1, whereas negative words appear in quadrant 3. Thus, in the VSM, words that are sentimentally similar to each other could be clustered more accurately. Besides clustering, we also employed the SVD method to perform dimensionality reduction on the unsupervised dictionary algorithm and used the newly generated matrix by combining it with other subapproaches. The number of dimensions is chosen as 200 again according to the $U$ matrix. The details are given in Section 3.4. When using and evaluating this subapproach on the English corpora, we used the SentiWordNet lexicon BIBREF13. We have achieved better results for the dictionary-based algorithm when we employed the SVD reduction method compared to the use of clustering. Methodology ::: Supervised Contextual 4-scores Our last component is a simple metric that uses four supervised scores for each word in the corpus. We extract these scores as follows. For a target word in the corpus, we scan through all of its contexts. In addition to the target word's polarity score (the self score), out of all the polarity scores of words occurring in the same contexts as the target word, minimum, maximum, and average scores are taken into consideration. The word polarity scores are computed using (DISPLAY_FORM4). Here, we obtain those scores from the training data. The intuition behind this method is that those four scores are more indicative of a word's polarity rather than only one (the self score). This approach is fully supervised unlike the previous two approaches. Methodology ::: Combination of the Word Embeddings In addition to using the three approaches independently, we also combined all the matrices generated in the previous approaches. That is, we concatenate the reduced forms (SVD - U) of corpus-based, dictionary-based, and the whole of 4-score vectors of each word, horizontally. Accordingly, each corpus word is represented by a 404-dimensional vector, since corpus-based and dictionary-based vector components are each composed of 200 dimensions, whereas the 4-score vector component is formed by four values. The main intuition behind the ensemble method is that some approaches compensate for what the others may lack. For example, the corpus-based approach captures the domain-specific, semantic, and syntactic characteristics. On the other hand, the 4-scores method captures supervised features, and the dictionary-based approach is helpful in capturing the general semantic characteristics. That is, combining those three approaches makes word vectors more representative. Methodology ::: Generating Document Vectors After creating several embeddings as mentioned above, we create document (review or tweet) vectors. For each document, we sum all the vectors of words occurring in that document and take their average. In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review. 
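The supervised contextual 4-scores and the 404-dimensional concatenation described above can be sketched as below; `word_polarity` is assumed to map each word to the supervised score from the earlier sketch, and the window handling is an illustrative assumption.

```python
# Hedged sketch of the 4-score feature and the ensemble concatenation.
import numpy as np

def four_scores(target, contexts, word_polarity):
    """contexts: list of token windows in which `target` occurs."""
    self_score = word_polarity[target]
    ctx = [word_polarity[w] for window in contexts for w in window
           if w != target and w in word_polarity]
    if not ctx:                        # fall back to the self score if no known context words
        ctx = [self_score]
    return np.array([self_score, min(ctx), max(ctx), sum(ctx) / len(ctx)])

def combined_vector(word, corpus_svd, dict_svd, four):
    # 200 (corpus-based SVD) + 200 (dictionary-based SVD) + 4 supervised scores = 404 dims
    return np.concatenate([corpus_svd[word], dict_svd[word], four[word]])
```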
These polarity scores of words are computed as in (DISPLAY_FORM4). For example, if a review consists of five words, it would have five polarity scores and we utilise only three of these sentiment scores as mentioned. Lastly, we concatenate these three scores to the averaged word vector per review. That is, each review is represented by the average word vector of its constituent word embeddings and three supervised scores. We then feed these inputs into the SVM approach. The flowchart of our framework is given in Figure FIGREF11. When combining the unsupervised features, which are word vectors created on a word basis, with supervised three scores extracted on a review basis, we have better state-of-the-art results. Datasets We utilised two datasets for both Turkish and English to evaluate our methods. For Turkish, as the first dataset, we utilised the movie reviews which are collected from a popular website. The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment. These polarity scores are between the values 0.5 and 5, at intervals of 0.5. We consider a review to be negative it the score is equal to or lower than 2.5. On the other hand, if it is equal to or higher than 4, it is assumed to be positive. We have randomly selected 7,020 negative and 7,020 positive reviews and processed only them. The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen's Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it. We also utilised two other datasets in English to test the cross-linguality of our approaches. One of them is a movie corpus collected from the web. There are 5,331 positive reviews and 5,331 negative reviews in this corpus. The other is a Twitter dataset, which has nearly 1.6 million tweets annotated through a distant supervised method BIBREF14. These tweets have positive, neutral, and negative labels. We have selected 7,020 positive tweets and 7,020 negative tweets randomly to generate a balanced dataset. Experiments ::: Preprocessing In Turkish, people sometimes prefer to spell English characters for the corresponding Turkish characters (e.g. i for ı, c for ç) when writing in electronic format. To normalise such words, we used the Zemberek tool BIBREF15. All punctuation marks except “!" and “?" are removed, since they do not contribute much to the polarity of a document. We took into account emoticons, such as “:))", and idioms, such as “kafayı yemek” (lose one's mind), since two or more words can express a sentiment together, irrespective of the individual words thereof. Since Turkish is an agglutinative language, we used the morphological parser and disambiguator tools BIBREF16, BIBREF17. We also performed negation handling and stop-word elimination. In negation handling, we append an underscore to the end of a word if it is negated. For example, “güzel değil" (not beautiful) is redefined as “güzel_" (beautiful_) in the feature selection stage when supervised scores are being computed. 
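A minimal sketch of two steps just described follows: the review-level vector (average of word embeddings concatenated with the min/mean/max polarity scores) and the underscore-based negation marking. Tokenization, the negator list, and the fallback values are illustrative assumptions.

```python
# Hedged sketch of review-vector construction and negation handling.
import numpy as np

NEGATORS = {"değil", "not"}            # illustrative; the paper handles Turkish negation morphologically

def mark_negation(tokens):
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i + 1] in NEGATORS:
            out.append(tokens[i] + "_")          # "güzel değil" -> "güzel_"; the negator itself is dropped here
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def review_vector(tokens, embeddings, word_polarity, dim=404):
    vecs = [embeddings[w] for w in tokens if w in embeddings]
    avg = np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    pol = [word_polarity[w] for w in tokens if w in word_polarity] or [0.0]
    three_feats = [min(pol), float(np.mean(pol)), max(pol)]     # the hand-crafted "3 feats"
    return np.concatenate([avg, three_feats])                    # this vector is fed to the SVM
```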
Experiments ::: Hyperparameters We used the LibSVM utility of the WEKA tool. We chose the linear kernel option to classify the reviews. We trained word2vec embeddings on all the four corpora using the Gensim library BIBREF18 with the skip-gram method. The dimension size of these embeddings are set at 200. As mentioned, other embeddings, which are generated utilising the clustering and the SVD approach, are also of size 200. For c-mean clustering, we set the maximum number of iterations at 25, unless it converges. Experiments ::: Results We evaluated our models on four corpora, which are the movie and the Twitter datasets in Turkish and English. All of the embeddings are learnt on four corpora separately. We have used the accuracy metric since all the datasets are completely or nearly completely balanced. We performed 10-fold cross-validation for both of the datasets. We used the approximate randomisation technique to test whether our results are statistically significant. Here, we tried to predict the labels of reviews and assess the performance. We obtained varying accuracies as shown in Table TABREF17. “3 feats" features are those hand-crafted features we extracted, which are the minimum, mean, and maximum polarity scores of the reviews as explained in Section 3.5. As can be seen, at least one of our methods outperforms the baseline word2vec approach for all the Turkish and English corpora, and all categories. All of our approaches performed better when we used the supervised scores, which are extracted on a review basis, and concatenated them to word vectors. Mostly, the supervised 4-scores feature leads to the highest accuracies, since it employs the annotational information concerned with polarities on a word basis. As can be seen in Table TABREF17, the clustering method, in general, yields the lowest scores. We found out that the corpus - SVD metric does always perform better than the clustering method. We attribute it to that in SVD the most important singular values are taken into account. The corpus - SVD technique outperforms the word2vec algorithm for some corpora. When we do not take into account the 3-feats technique, the corpus-based SVD method yields the highest accuracies for the English Twitter dataset. We show that simple models can outperform more complex models, such as the concatenation of the three subapproaches or the word2vec algorithm. Another interesting finding is that for some cases the accuracy decreases when we utilise the polarity labels, as in the case for the English Twitter dataset. Since the TDK dictionary covers most of the domain-specific vocabulary used in the movie reviews, the dictionary method performs well. However, the dictionary lacks many of the words, occurring in the tweets; therefore, its performance is not the best of all. When the TDK method is combined with the 3-feats technique, we observed a great improvement, as can be expected. Success rates obtained for the movie corpus are much better than those for the Twitter dataset for most of our approaches, since tweets are, in general, much shorter and noisier. We also found out that, when choosing the p value as 0.05, our results are statistically significant compared to the baseline approach in Turkish BIBREF10. Some of our subapproaches also produce better success rates than those sentiment analysis models employed in English BIBREF11, BIBREF12. We have achieved state-of-the-art results for the sentiment classification task for both Turkish and English. 
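The training and evaluation setup stated in the hyperparameters paragraph (skip-gram word2vec of dimension 200 via Gensim, a linear-kernel SVM, 10-fold cross-validation, accuracy) can be sketched as follows. The paper used WEKA's LibSVM wrapper; scikit-learn's libsvm-backed SVC is used here only as a stand-in.

```python
# Hedged sketch of the embedding training and 10-fold SVM evaluation.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def train_word2vec(tokenized_reviews):
    # `vector_size` in Gensim >= 4 (older versions call it `size`); sg=1 selects skip-gram
    return Word2Vec(tokenized_reviews, vector_size=200, sg=1, window=5, min_count=1)

def evaluate(review_vectors, labels):
    X, y = np.asarray(review_vectors), np.asarray(labels)
    scores = cross_val_score(SVC(kernel="linear"), X, y, cv=10, scoring="accuracy")
    return scores.mean()
```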
As mentioned, our approaches, in general, perform best in predicting the labels of reviews when three supervised scores are additionality utilised. We also employed the convolutional neural network model (CNN). However, the SVM classifier, which is a conventional machine learning algorithm, performed better. We did not include the performances of CNN for embedding types here due to the page limit of the paper. As a qualitative assessment of the word representations, given some query words we visualised the most similar words to those words using the cosine similarity metric. By assessing the similarities between a word and all the other corpus words, we can find the most akin words according to different approaches. Table TABREF18 shows the most similar words to given query words. Those words which are indicative of sentiment are, in general, found to be most similar to those words of the same polarity. For example, the most akin word to muhteşem (gorgeous) is 10/10, both of which have positive polarity. As can be seen in Table TABREF18, our corpus-based approach is more adept at capturing domain-specific features as compared to word2vec, which generally captures general semantic and syntactic characteristics, but not the sentimental ones. Conclusion We have demonstrated that using word vectors that capture only semantic and syntactic characteristics may be improved by taking into account their sentimental aspects as well. Our approaches are cross-lingual and cross-domain. They can be applied to other domains and other languages than Turkish and English with minor changes. Our study is one of the few ones that perform sentiment analysis in Turkish and leverages sentimental characteristics of words in generating word vectors and outperforms all the others. Any of the approaches we propose can be used independently of the others. Our approaches without using sentiment labels can be applied to other classification tasks, such as topic classification and concept mining. The experiments show that even unsupervised approaches, as in the corpus-based approach, can outperform supervised approaches in classification tasks. Combining some approaches, which can compensate for what others lack, can help us build better vectors. Our word vectors are created by conventional machine learning algorithms; however, they, as in the corpus-based model, produce state-of-the-art results. Although we preferred to use a classical machine learning algorithm, which is SVM, over a neural network classifier to predict the labels of reviews, we achieved accuracies of over 90 per cent for the Turkish movie corpus and about 88 per cent for the English Twitter dataset. We performed only binary sentiment classification in this study as most of the studies in the literature do. We will extend our system in future by using neutral reviews as well. We also plan to employ Turkish WordNet to enhance the generalisability of our embeddings as another future work. Acknowledgments This work was supported by Boğaziçi University Research Fund Grant Number 6980D, and by Turkish Ministry of Development under the TAM Project number DPT2007K12-0610. Cem Rıfkı Aydın is supported by TÜBİTAK BIDEB 2211E.
[ "using word2vec to create features that are used as input to the SVM", "use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach" ]
3,820
qasper_e
en
null
a533586fadb3dbf78a61381a0f040d993ae1d5cd3e66b98d
Where does the ancient Chinese dataset come from?
Introduction Ancient Chinese is the writing language in ancient China. It is a treasure of Chinese culture which brings together the wisdom and ideas of the Chinese nation and chronicles the ancient cultural heritage of China. Learning ancient Chinese not only helps people to understand and inherit the wisdom of the ancients, but also promotes people to absorb and develop Chinese culture. However, it is difficult for modern people to read ancient Chinese. Firstly, compared with modern Chinese, ancient Chinese is more concise and shorter. The grammatical order of modern Chinese is also quite different from that of ancient Chinese. Secondly, most modern Chinese words are double syllables, while the most of the ancient Chinese words are monosyllabic. Thirdly, there is more than one polysemous phenomenon in ancient Chinese. In addition, manual translation has a high cost. Therefore, it is meaningful and useful to study the automatic translation from ancient Chinese to modern Chinese. Through ancient-modern Chinese translation, the wisdom, talent and accumulated experience of the predecessors can be passed on to more people. Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 has achieved remarkable performance on many bilingual translation tasks. It is an end-to-end learning approach for machine translation, with the potential to show great advantages over the statistic machine translation (SMT) systems. However, NMT approach has not been widely applied to the ancient-modern Chinese translation task. One of the main reasons is the limited high-quality parallel data resource. The most popular method of acquiring translation examples is bilingual text alignment BIBREF5 . This kind of method can be classified into two types: lexical-based and statistical-based. The lexical-based approaches BIBREF6 , BIBREF7 focus on lexical information, which utilize the bilingual dictionary BIBREF8 , BIBREF9 or lexical features. Meanwhile, the statistical-based approaches BIBREF10 , BIBREF11 rely on statistical information, such as sentence length ratio in two languages and align mode probability. However, these methods are designed for other bilingual language pairs that are written in different language characters (e.g. English-French, Chinese-Japanese). The ancient-modern Chinese has some characteristics that are quite different from other language pairs. For example, ancient and modern Chinese are both written in Chinese characters, but ancient Chinese is highly concise and its syntactical structure is different from modern Chinese. The traditional methods do not take these characteristics into account. In this paper, we propose an effective ancient-modern Chinese text alignment method at the level of clause based on the characteristics of these two languages. The proposed method combines both lexical-based information and statistical-based information, which achieves 94.2 F1-score on Test set. Recently, a simple longest common subsequence based approach for ancient-modern Chinese sentence alignment is proposed in BIBREF12 . Our experiments showed that our proposed alignment approach performs much better than their method. We apply the proposed method to create a large translation parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To our best knowledge, this is the first large high-quality ancient-modern Chinese dataset. Furthermore, we test SMT models and various NMT models on the created dataset and provide a strong baseline for this task. 
Overview There are four steps to build the ancient-modern Chinese translation dataset: (i) The parallel corpus crawling and cleaning. (ii) The paragraph alignment. (iii) The clause alignment based on aligned paragraphs. (iv) Augmenting data by merging aligned adjacent clauses. The most critical step is the third step. Clause Alignment In the clause alignment step, we combine both statistical-based and lexical-based information to measure the score for each possible clause alignment between ancient and modern Chinese strings. The dynamic programming is employed to further find overall optimal alignment paragraph by paragraph. According to the characteristics of the ancient and modern Chinese languages, we consider the following factors to measure the alignment score INLINEFORM0 between a bilingual clause pair: Lexical Matching. The lexical matching score is used to calculate the matching coverage of the ancient clause INLINEFORM0 . It contains two parts: exact matching and dictionary matching. An ancient Chinese character usually corresponds to one or more modern Chinese words. In the first part, we carry out Chinese Word segmentation to the modern Chinese clause INLINEFORM1 . Then we match the ancient characters and modern words in the order from left to right. In further matching, the words that have been matched will be deleted from the original clauses. However, some ancient characters do not appear in its corresponding modern Chinese words. An ancient Chinese dictionary is employed to address this issue. We preprocess the ancient Chinese dictionary and remove the stop words. In this dictionary matching step, we retrieve the dictionary definition of each unmatched ancient character and use it to match the remaining modern Chinese words. To reduce the impact of universal word matching, we use Inverse Document Frequency (IDF) to weight the matching words. The lexical matching score is calculated as: DISPLAYFORM0 The above equation is used to calculate the matching coverage of the ancient clause INLINEFORM0 . The first term of equation ( EQREF8 ) represents exact matching score. INLINEFORM1 denotes the length of INLINEFORM2 , INLINEFORM3 denotes each ancient character in INLINEFORM4 , and the indicator function INLINEFORM5 indicates whether the character INLINEFORM6 can match the words in the clause INLINEFORM7 . The second term is dictionary matching score. Here INLINEFORM8 and INLINEFORM9 represent the remaining unmatched strings of INLINEFORM10 and INLINEFORM11 , respectively. INLINEFORM12 denotes the INLINEFORM13 -th character in the dictionary definition of the INLINEFORM14 and its IDF score is denoted as INLINEFORM15 . The INLINEFORM16 is a predefined parameter which is used to normalize the IDF score. We tuned the value of this parameter on the Dev set. Statistical Information. Similar to BIBREF11 and BIBREF6 , the statistical information contains alignment mode and length information. There are many alignment modes between ancient and modern Chinese languages. If one ancient Chinese clause aligns two adjacent modern Chinese clauses, we call this alignment as 1-2 alignment mode. We show some examples of different alignment modes in Figure FIGREF9 . In this paper, we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes which account for INLINEFORM0 of the Dev set. We estimate the probability Pr INLINEFORM1 n-m INLINEFORM2 of each alignment mode n-m on the Dev set. To utilize length information, we make an investigation on length correlation between these two languages. 
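The lexical matching component described above (exact left-to-right matching plus IDF-weighted dictionary matching of the leftovers, normalized by the length of the ancient clause) can be sketched as below. The published equation is not reproduced in the text, so the exact combination and the `beta` normalizer are assumptions; `dictionary` maps an ancient character to the tokens of its stop-word-filtered modern definition.

```python
# Hedged sketch of the lexical matching score for one ancient/modern clause pair.
def lexical_matching(ancient, modern_words, dictionary, idf, beta=10.0):
    remaining = list(modern_words)          # segmented modern Chinese words
    matched, unmatched = 0, []
    for ch in ancient:                       # left-to-right exact matching of ancient characters
        hit = next((w for w in remaining if ch in w), None)
        if hit is not None:
            remaining.remove(hit)            # matched words are removed from further matching
            matched += 1
        else:
            unmatched.append(ch)
    dict_score = 0.0
    for ch in unmatched:                     # dictionary matching on the unmatched characters
        for token in dictionary.get(ch, []):
            if any(token in w for w in remaining):
                dict_score += idf.get(token, 0.0) / beta   # beta normalizes the IDF weight
    return (matched + dict_score) / max(len(ancient), 1)    # coverage of the ancient clause
```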
Based on the assumption of BIBREF11 that each character in one language gives rise to a random number of characters in the other language and those random variables INLINEFORM3 are independent and identically distributed with a normal distribution, we estimate the mean INLINEFORM4 and standard deviation INLINEFORM5 from the paragraph aligned parallel corpus. Given a clause pair INLINEFORM6 , the statistical information score can be calculated by: DISPLAYFORM0 where INLINEFORM0 denotes the normal distribution probability density function. Edit Distance. Because ancient and modern Chinese are both written in Chinese characters, we also consider using the edit distance. It is a way of quantifying the dissimilarity between two strings by counting the minimum number of operations (insertion, deletion, and substitution) required to transform one string into the other. Here we define the edit distance score as: DISPLAYFORM0 Dynamic Programming. The overall alignment score for each possible clause alignment is as follows: DISPLAYFORM0 Here INLINEFORM0 and INLINEFORM1 are pre-defined interpolation factors. We use dynamic programming to find the overall optimal alignment paragraph by paragraph. Let INLINEFORM2 be total alignment scores of aligning the first to INLINEFORM3 -th ancient Chinese clauses with the first to to INLINEFORM4 -th modern Chinese clauses, and the recurrence then can be described as follows: DISPLAYFORM0 Where INLINEFORM0 denotes concatenate clause INLINEFORM1 to clause INLINEFORM2 . As we discussed above, here we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes. Ancient-Modern Chinese Dataset Data Collection. To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used come from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era. They used plain and accurate words to express what happened at that time, and thus ensure the generality of the translated materials. Paragraph Alignment. To further ensure the quality of the new dataset, the work of paragraph alignment is manually completed. After data cleaning and manual paragraph alignment, we obtained 35K aligned bilingual paragraphs. Clause Alignment. We applied our clause alignment algorithm on the 35K aligned bilingual paragraphs and obtained 517K aligned bilingual clauses. The reason we use clause alignment algorithm instead of sentence alignment is because we can construct more aligned sentences more flexibly and conveniently. To be specific, we can get multiple additional sentence level bilingual pairs by “data augmentation”. Data Augmentation. We augmented the data in the following way: Given an aligned clause pair, we merged its adjacent clause pairs as a new sample pair. For example, suppose we have three adjacent clause level bilingual pairs: ( INLINEFORM0 , INLINEFORM1 ), ( INLINEFORM2 , INLINEFORM3 ), and ( INLINEFORM4 , INLINEFORM5 ). We can get some additional sentence level bilingual pairs, such as: ( INLINEFORM6 , INLINEFORM7 ) and ( INLINEFORM8 , INLINEFORM9 ). Here INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 are adjacent clauses in the original paragraph, and INLINEFORM13 denotes concatenate clause INLINEFORM14 to clause INLINEFORM15 . 
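The dynamic program over the six alignment modes (1-0, 0-1, 1-1, 1-2, 2-1, 2-2) can be sketched as follows. Here `score(a, m)` stands for the combined clause-pair score (lexical, statistical, and edit-distance terms); its exact interpolation and the published recurrence are not reproduced, and the zero gap score is an assumption.

```python
# Hedged sketch of the clause-alignment dynamic program; backpointers are
# omitted but would be kept to recover the actual alignment.
def align_paragraph(ancient, modern, score, gap_score=0.0):
    """ancient, modern: lists of clause strings; score(a, m): combined pair score."""
    n, m = len(ancient), len(modern)
    NEG = float("-inf")
    D = [[NEG] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    cat = "".join
    for i in range(n + 1):
        for j in range(m + 1):
            if D[i][j] == NEG:
                continue
            moves = []
            if i < n:                      # 1-0: ancient clause left unaligned
                moves.append((i + 1, j, gap_score))
            if j < m:                      # 0-1: modern clause left unaligned
                moves.append((i, j + 1, gap_score))
            if i < n and j < m:            # 1-1
                moves.append((i + 1, j + 1, score(ancient[i], modern[j])))
            if i < n and j + 1 < m:        # 1-2
                moves.append((i + 1, j + 2, score(ancient[i], cat(modern[j:j + 2]))))
            if i + 1 < n and j < m:        # 2-1
                moves.append((i + 2, j + 1, score(cat(ancient[i:i + 2]), modern[j])))
            if i + 1 < n and j + 1 < m:    # 2-2
                moves.append((i + 2, j + 2, score(cat(ancient[i:i + 2]), cat(modern[j:j + 2]))))
            for ni, nj, s in moves:
                D[ni][nj] = max(D[ni][nj], D[i][j] + s)
    return D[n][m]
```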
The advantage of using this data augmentation method is that compared with only using ( INLINEFORM16 , INLINEFORM17 ) as the training data, we can also use ( INLINEFORM18 , INLINEFORM19 ) and ( INLINEFORM20 , INLINEFORM21 ) as the training data, which can provide richer supervision information for the model and make the model learn the align information between the source language and the target language better. After the data augmentation, we filtered the sentences which are longer than 50 or contain more than four clause pairs. Dataset Creation. Finally, we split the dataset into three sets: training (Train), development (Dev) and testing (Test). Note that the unaugmented dataset contains 517K aligned bilingual clause pairs from 35K aligned bilingual paragraphs. To keep all the sentences in different sets come from different articles, we split the 35K aligned bilingual paragraphs into Train, Dev and Test sets following these ratios respectively: 80%, 10%, 10%. Before data augmentation, the unaugmented Train set contains INLINEFORM0 aligned bilingual clause pairs from 28K aligned bilingual paragraphs. Then we augmented the Train, Dev and Test sets respectively. Note that the augmented Train, Dev and Test sets also contain the unaugmented data. The statistical information of the three data sets is shown in Table TABREF17 . We show some examples of data in Figure FIGREF14 . RNN-based NMT model We first briefly introduce the RNN based Neural Machine Translation (RNN-based NMT) model. The RNN-based NMT with attention mechanism BIBREF0 has achieved remarkable performance on many translation tasks. It consists of encoder and decoder part. We firstly introduce the encoder part. The input word sequence of source language are individually mapped into a INLINEFORM0 -dimensional vector space INLINEFORM1 . Then a bi-directional RNN BIBREF15 with GRU BIBREF16 or LSTM BIBREF17 cell converts these vectors into a sequences of hidden states INLINEFORM2 . For the decoder part, another RNN is used to generate target sequence INLINEFORM0 . The attention mechanism BIBREF0 , BIBREF18 is employed to allow the decoder to refer back to the hidden state sequence and focus on a particular segment. The INLINEFORM1 -th hidden state INLINEFORM2 of decoder part is calculated as: DISPLAYFORM0 Here g INLINEFORM0 is a linear combination of attended context vector c INLINEFORM1 and INLINEFORM2 is the word embedding of (i-1)-th target word: DISPLAYFORM0 The attended context vector c INLINEFORM0 is computed as a weighted sum of the hidden states of the encoder: DISPLAYFORM0 The probability distribution vector of the next word INLINEFORM0 is generated according to the following: DISPLAYFORM0 We take this model as the basic RNN-based NMT model in the following experiments. Transformer-NMT Recently, the Transformer model BIBREF4 has made remarkable progress in machine translation. This model contains a multi-head self-attention encoder and a multi-head self-attention decoder. As proposed by BIBREF4 , an attention function maps a query and a set of key-value pairs to an output, where the queries INLINEFORM0 , keys INLINEFORM1 , and values INLINEFORM2 are all vectors. The input consists of queries and keys of dimension INLINEFORM3 , and values of dimension INLINEFORM4 . The attention function is given by: DISPLAYFORM0 Multi-head attention mechanism projects queries, keys and values to INLINEFORM0 different representation subspaces and calculates corresponding attention. 
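The elided attention equation is, in the original Transformer formulation being described, the standard scaled dot-product attention, softmax(QKᵀ/√d_k)V. A minimal NumPy version:

```python
# Scaled dot-product attention as defined in the Transformer; multi-head
# attention applies it in parallel over h learned projections of Q, K, V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)              # (..., len_q, len_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                          # (..., len_q, d_v)
```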
The attention function outputs are concatenated and projected again before giving the final output. Multi-head attention allows the model to attend to multiple features at different positions. The encoder is composed of a stack of INLINEFORM0 identical layers. Each layer has two sub-layers: multi-head self-attention mechanism and position-wise fully connected feed-forward network. Similarly, the decoder is also composed of a stack of INLINEFORM1 identical layers. In addition to the two sub-layers in each encoder layer, the decoder contains a third sub-layer which performs multi-head attention over the output of the encoder stack (see more details in BIBREF4 ). Experiments Our experiments revolve around the following questions: Q1: As we consider three factors for clause alignment, do all these factors help? How does our method compare with previous methods? Q2: How does the NMT and SMT models perform on this new dataset we build? Clause Alignment Results (Q1) In order to evaluate our clause alignment algorithm, we manually aligned bilingual clauses from 37 bilingual ancient-modern Chinese articles, and finally got 4K aligned bilingual clauses as the Test set and 2K clauses as the Dev set. Metrics. We used F1-score and precision score as the evaluation metrics. Suppose that we get INLINEFORM0 bilingual clause pairs after running the algorithm on the Test set, and there are INLINEFORM1 bilingual clause pairs of these INLINEFORM2 pairs are in the ground truth of the Test set, the precision score is defined as INLINEFORM3 (the algorithm gives INLINEFORM4 outputs, INLINEFORM5 of which are correct). And suppose that the ground truth of the Test set contains INLINEFORM6 bilingual clause pairs, the recall score is INLINEFORM7 (there are INLINEFORM8 ground truth samples, INLINEFORM9 of which are output by the algorithm), then the F1-score is INLINEFORM10 . Baselines. Since the related work BIBREF10 , BIBREF11 can be seen as the ablation cases of our method (only statistical score INLINEFORM0 with dynamic programming), we compared the full proposed method with its variants on the Test set for ablation study. In addition, we also compared our method with the longest common subsequence (LCS) based approach proposed by BIBREF12 . To the best of our knowledge, BIBREF12 is the latest related work which are designed for Ancient-Modern Chinese alignment. Hyper-parameters. For the proposed method, we estimated INLINEFORM0 and INLINEFORM1 on all aligned paragraphs. The probability Pr INLINEFORM2 n-m INLINEFORM3 of each alignment mode n-m was estimated on the Dev set. For the hyper-parameters INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , the grid search was applied to tune them on the Dev set. In order to show the effect of hyper-parameters INLINEFORM7 , INLINEFORM8 , and INLINEFORM9 , we reported the results of various hyper-parameters on the Dev set in Table TABREF26 . Based on the results of grid search on the Dev set, we set INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 in the following experiment. The Jieba Chinese text segmentation is employed for modern Chinese word segmentation. Results. The results on the Test set are shown in Table TABREF28 , the abbreviation w/o means removing a particular part from the setting. From the results, we can see that the lexical matching score is the most important among these three factors, and statistical information score is more important than edit distance score. Moreover, the dictionary term in lexical matching score significantly improves the performance. 
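The evaluation metrics defined in this section can be computed directly by treating alignments as sets of (ancient clause, modern clause) pairs, as in this minimal sketch:

```python
# Precision, recall, and F1 over predicted vs. ground-truth clause pairs.
def alignment_prf(predicted_pairs, gold_pairs):
    predicted, gold = set(predicted_pairs), set(gold_pairs)
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```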
From these results, we obtain the best setting that involves all these three factors. We used this setting for dataset creation. Furthermore, the proposed method performs much better than LCS BIBREF12 . Translation Results (Q2) In this experiment, we analyzed and compared the performance of the SMT and various NMT models on our built dataset. To verify the effectiveness of our data augmented method. We trained the NMT and SMT models on both unaugmented dataset (including 0.46M training pairs) and augmented dataset, and test all the models on the same Test set which is augmented. The models to be tested and their configurations are as follows: SMT. The state-of-art Moses toolkit BIBREF19 was used to train SMT model. We used KenLM BIBREF20 to train a 5-gram language model, and the GIZA++ toolkit to align the data. RNN-based NMT. The basic RNN-based NMT model is based on BIBREF0 which is introduced above. Both the encoder and decoder used 2-layer RNN with 1024 LSTM cells, and the encoder is a bi-directional RNN. The batch size, threshold of element-wise gradient clipping and initial learning rate of Adam optimizer BIBREF21 were set to 128, 5.0 and 0.001. When trained the model on augmented dataset, we used 4-layer RNN. Several techniques were investigated to train the model, including layer-normalization BIBREF22 , RNN-dropout BIBREF23 , and learning rate decay BIBREF1 . The hyper-parameters were chosen empirically and adjusted in the Dev set. Furthermore, we tested the basic NMT model with several techniques, such as target language reversal BIBREF24 (reversing the order of the words in all target sentences, but not source sentences), residual connection BIBREF25 and pre-trained word2vec BIBREF26 . For word embedding pre-training, we collected an external ancient corpus which contains INLINEFORM0 134M tokens. Transformer-NMT. We also trained the Transformer model BIBREF4 which is a strong baseline of NMT on both augmented and unaugmented parallel corpus. The training configuration of the Transformer model is shown in Table TABREF32 . The hyper-parameters are set based on the settings in the paper BIBREF4 and the sizes of our training sets. For the evaluation, we used the average of 1 to 4 gram BLEUs multiplied by a brevity penalty BIBREF27 which computed by multi-bleu.perl in Moses as metrics. The results are reported in Table TABREF34 . For RNN-based NMT, we can see that target language reversal, residual connection, and word2vec can further improve the performance of the basic RNN-based NMT model. However, we find that word2vec and reversal tricks seem no obvious improvement when trained the RNN-based NMT and Transformer models on augmented parallel corpus. For SMT, it performs better than NMT models when they were trained on the unaugmented dataset. Nevertheless, when trained on the augmented dataset, both the RNN-based NMT model and Transformer based NMT model outperform the SMT model. In addition, as with other translation tasks BIBREF4 , the Transformer also performs better than RNN-based NMT. Because the Test set contains both augmented and unaugmented data, it is not surprising that the RNN-based NMT model and Transformer based NMT model trained on unaugmented data would perform poorly. In order to further verify the effect of data augmentation, we report the test results of the models on only unaugmented test data (including 48K test pairs) in Table TABREF35 . From the results, it can be seen that the data augmentation can still improve the models. 
Analysis The generated samples of various models are shown in Figure FIGREF36 . Besides BLEU scores, we analyze these examples from a human perspective and draw some conclusions. At the same time, we design different metrics and evaluate on the whole Test set to support our conclusions as follows: On the one hand, we further compare the translation results from the perspective of people. We find that although the original meaning can be basically translated by SMT, its translation results are less smooth when compared with the other two NMT models (RNN-based NMT and Transformer). For example, the translations of SMT are usually lack of auxiliary words, conjunctions and function words, which is not consistent with human translation habits. To further confirm this conclusion, the average length of the translation results of the three models are measured (RNN-based NMT:17.12, SMT:15.50, Transformer:16.78, Reference:16.47). We can see that the average length of the SMT outputs is shortest, and the length gaps between the SMT outputs and the references are largest. Meanwhile, the average length of the sentences translated by Transformer is closest to the average length of references. These results indirectly verify our point of view, and show that the NMT models perform better than SMT in this task. On the other hand, there still exists some problems to be solved. We observe that translating proper nouns and personal pronouns (such as names, place names and ancient-specific appellations) is very difficult for all of these models. For instance, the ancient Chinese appellation `Zhen' should be translated into `Wo' in modern Chinese. Unfortunately, we calculate the accurate rate of some special words (such as `Zhen',`Chen' and `Gua'), and find that this rate is very low (the accurate rate of translating `Zhen' are: RNN-based NMT:0.14, SMT:0.16, Transformer:0.05). We will focus on this issue in the future. Conclusion and Future Work We propose an effective ancient-modern Chinese clause alignment method which achieves 94.2 F1-score on Test set. Based on it, we build a large scale parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To our best knowledge, this is the first large high-quality ancient-modern Chinese dataset. In addition, we test the performance of the SMT and various NMT models on our built dataset and provide a strong NMT baseline for this task which achieves 27.16 BLEU score (4-gram). We further analyze the performance of the SMT and various NMT models and summarize some specific problems that machine translation models will encounter when translating ancient Chinese. For the future work, firstly, we are going to expand the dataset using the proposed method continually. Secondly, we will focus on solving the problem of proper noun translation and improve the translation system according to the features of ancient Chinese translation. Finally, we plan to introduce some techniques of statistical translation into neural machine translation to improve the performance. This work is supported by National Natural Science Fund for Distinguished Young Scholar (Grant No. 61625204) and partially supported by the State Key Program of National Science Foundation of China (Grant Nos. 61836006 and 61432014).
[ "ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era", "Ancient Chinese history records in several dynasties and articles written by celebrities during 1000BC-200BC collected from the internet " ]
3,722
qasper_e
en
null
861cbf67995653276e955ddb68584eb35263f970a710c86c
Do the answered questions measure the usefulness of the answer?
Introduction From a group of small users at the time of its inception in 2009, Quora has evolved in the last few years into one of the largest community driven Q&A sites with diverse user communities. With the help of efficient content moderation/review policies and active in-house review team, efficient Quora bots, this site has emerged into one of the largest and reliable sources of Q&A on the Internet. On Quora, users can post questions, follow questions, share questions, tag them with relevant topics, follow topics, follow users apart from answering, commenting, upvoting/downvoting etc. The integrated social structure at the backbone of it and the topical organization of its rich content have made Quora unique with respect to other Q&A sites like Stack Overflow, Yahoo! Answers etc. and these are some of the prime reasons behind its popularity in recent times. Quality question posting and getting them answered are the key objectives of any Q&A site. In this study we focus on the answerability of questions on Quora, i.e., whether a posted question shall eventually get answered. In Quora, the questions with no answers are referred to as “open questions”. These open questions need to be studied separately to understand the reason behind their not being answered or to be precise, are there any characteristic differences between `open' questions and the answered ones. For example, the question “What are the most promising advances in the treatment of traumatic brain injuries?” was posted on Quora on 23rd June, 2011 and got its first answer after almost 2 years on 22nd April, 2013. The reason that this question remained open so long might be the hardness of answering it and the lack of visibility and experts in the domain. Therefore, it is important to identify the open questions and take measures based on the types - poor quality questions can be removed from Quora and the good quality questions can be promoted so that they get more visibility and are eventually routed to topical experts for better answers. Characterization of the questions based on question quality requires expert human interventions often judging if a question would remain open based on factors like if it is subjective, controversial, open-ended, vague/imprecise, ill-formed, off-topic, ambiguous, uninteresting etc. Collecting judgment data for thousands of question posts is a very expensive process. Therefore, such an experiment can be done only for a small set of questions and it would be practically impossible to scale it up for the entire collection of posts on the Q&A site. In this work, we show that appropriate quantification of various linguistic activities can naturally correspond to many of the judgment factors mentioned above (see table 2 for a collection of examples). These quantities encoding such linguistic activities can be easily measured for each question post and thus helps us to have an alternative mechanism to characterize the answerability on the Q&A site. There are several research works done in Q&A focusing on content of posts. BIBREF0 exploit community feedback to identify high quality content on Yahoo! Answers. BIBREF1 use textual features to predict answer quality on Yahoo! Answers. BIBREF2 , investigate predictors of answer quality through a comparative, controlled field study of user responses. BIBREF3 study the problem of how long questions remain unanswered. BIBREF4 propose a prediction model on how many answers a question shall receive. 
BIBREF5 analyze and predict unanswered questions on Yahoo Answers. BIBREF6 study question quality in Yahoo! Answers. Dataset description We obtained our Quora dataset BIBREF7 through web-based crawls between June 2014 to August 2014. This crawling exercise has resulted in the accumulation of a massive Q&A dataset spanning over a period of over four years starting from January 2010 to May 2014. We initiated crawling with 100 questions randomly selected from different topics so that different genre of questions can be covered. The crawling of the questions follow a BFS pattern through the related question links. We obtained 822,040 unique questions across 80,253 different topics with a total of 1,833,125 answers to these questions. For each question, we separately crawl their revision logs that contain different types of edit information for the question and the activity log of the question asker. Linguistic activities on Quora In this section, we identify various linguistic activities on Quora and propose quantifications of the language usage patterns in this Q&A site. In particular, we show that there exists significant differences in the linguistic structure of the open and the answered questions. Note that most of the measures that we define are simple, intuitive and can be easily obtained automatically from the data (without manual intervention). Therefore the framework is practical, inexpensive and highly scalable. Content of a question text is important to attract people and make them engage more toward it. The linguistic structure (i.e., the usage of POS tags, the use of Out-of-Vocabulary words, character usage etc.) one adopts are key factors for answerability of questions. We shall discuss the linguistic structure that often represents the writing style of a question asker. In fig 1 (a), we observe that askers of open questions generally use more no. of words compared to answered questions. To understand the nature of words (standard English words or chat-like words frequently used in social media) used in the text, we compare the words with GNU Aspell dictionary to see whether they are present in the dictionary or not. We observe that both open questions and answered questions follow similar distribution (see fig 1 (b)). Part-of-Speech (POS) tags are indicators of grammatical aspects of texts. To observe how the Part-of-Speech tags are distributed in the question texts, we define a diversity metric. We use the standard CMU POS tagger BIBREF8 for identifying the POS tags of the constituent words in the question. We define the POS tag diversity (POSDiv) of a question $q_i$ as follows: $POSDiv(q_i) = -\sum _{j \in pos_{set}}p_j\times \log (p_j)$ where $p_j$ is the probability of the $j^{th}$ POS in the set of POS tags. Fig 1 (c) shows that the answered questions have lower POS tag diversity compared to open questions. Question texts undergo several edits so that their readability and the engagement toward them are enhanced. It is interesting to identify how far such edits can make the question different from the original version of it. To capture this phenomena, we have adopted ROUGE-LCS recall BIBREF9 from the domain of text summarization. Higher the recall value, lesser are the changes in the question text. From fig 1 (d), we observe that open questions tend to have higher recall compared to the answered ones which suggests that they have not gone through much of text editing thus allowing for almost no scope of readability enhancement. 
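The POS-tag diversity measure defined above is the Shannon entropy of the question's POS-tag distribution, which can be computed as in the sketch below. The paper uses the CMU Twitter POS tagger; NLTK's default tagger appears here only as a stand-in.

```python
# Sketch of POSDiv(q): entropy over the POS tags of a question's words.
import math
from collections import Counter
import nltk   # requires the 'punkt' and 'averaged_perceptron_tagger' resources

def pos_diversity(question_text):
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(question_text))]
    counts = Counter(tags)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```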
Psycholinguistic analysis: The way an individual talks or writes gives us clues about his/her linguistic, emotional, and cognitive states. A question asker's linguistic, emotional, and cognitive states are likewise revealed through the language he/she uses in the question text. In order to capture such psycholinguistic aspects of the asker, we use Linguistic Inquiry and Word Count (LIWC) BIBREF10, which analyzes various emotional, cognitive, and structural components present in individuals' written texts. LIWC takes a text document as input and outputs a score for each of the LIWC categories, such as linguistic categories (part-of-speech of the words, function words etc.) and psychological categories (social, anger, positive emotion, negative emotion, sadness etc.), based on the writing style and psychometric properties of the document. In table 1, we perform a comparative analysis of the asker's psycholinguistic state while asking an open question and an answered question. Askers of open questions use, on average, more function words, impersonal pronouns, and articles, whereas askers of answered questions use more personal pronouns, conjunctions, and adverbs to describe their questions. Essentially, open questions lack content words compared to answered questions, which, in turn, affects the readability of the question. As far as the psychological aspects are concerned, askers of answered questions tend to use more social, family, and human related words on average compared to askers of open questions. The askers of open questions express more positive emotions, whereas the askers of answered questions tend to express more negative emotions in their texts. Also, askers of answered questions are more emotionally involved, and their questions reveal higher usage of anger, sadness, and anxiety related words compared to those of open questions. Open questions, on the other hand, contain more sexual, body, and health related words, which might be among the reasons why they do not attract answers. In table 2, we show a collection of examples of open questions to illustrate that many of the above quantities based on the linguistic activities described in this section naturally correspond to the factors that human judges consider responsible for a question remaining unanswered. This is one of the prime reasons why these quantities qualify as appropriate indicators of answerability. Prediction model In this section, we describe the prediction framework in detail. Our goal is to predict whether a given question will be answered or not after a time period $t$. Linguistic styles of the question asker The content of a question and the way it is posed are important to attract answers. We have observed in the previous section that these linguistic as well as psycholinguistic aspects of the question asker are discriminatory factors. For the prediction, we use the following features:
[ "No" ]
1,561
qasper_e
en
null
ca6ab28a7624b35d6e14bdd51a24f6cba93df17b033a0945
Based on this paper, what is the more predictive set of features to detect fake news?
Introduction Social media platforms have made the spreading of fake news easier, faster as well as able to reach a wider audience. Social media offer another feature which is the anonymity for the authors, and this opens the door to many suspicious individuals or organizations to utilize these platforms. Recently, there has been an increased number of spreading fake news and rumors over the web and social media BIBREF0. Fake news in social media vary considering the intention to mislead. Some of these news are spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Others, such as propaganda, hoaxes, and clickbaits, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious news annotations should be done on a tweet rather than an account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time consuming – i.e., due to high volume of available tweets Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level. The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts. In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are: [leftmargin=4mm] We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution. We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts. Methodology Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. 
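The chunking step just described (sort an account's tweets by posting date, then split the stream into fixed-size chunks that inherit the account's label) can be sketched as follows. The field names and the default chunk size are assumptions for illustration; the experiments later report a chunk size of 20 working best.

from datetime import datetime

def make_chunks(tweets, chunk_size=20):
    # Sort the account's tweets chronologically and split the stream into
    # consecutive chunks of `chunk_size` tweets; each chunk is later labeled
    # with the factuality label of the account it came from.
    ordered = sorted(tweets, key=lambda t: t["created_at"])
    return [ordered[i:i + chunk_size] for i in range(0, len(ordered), chunk_size)]

# Minimal usage with made-up tweets (not the real Twitter API schema):
account_tweets = [
    {"created_at": datetime(2019, 2, 1), "text": "newer post"},
    {"created_at": datetime(2019, 1, 3), "text": "older post"},
    {"created_at": datetime(2019, 1, 10), "text": "middle post"},
]
chunks = make_chunks(account_tweets, chunk_size=2)  # -> 2 chronologically ordered chunks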
We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks. Input Representation. Let $t$ be a Twitter account that contains $m$ tweets. These tweets are sorted by date and split into a sequence of chunks $ck = \langle ck_1, \ldots , ck_n \rangle $, where each $ck_i$ contains $s$ tweets. Each tweet in $ck_i$ is represented by a vector $v \in {\rm I\!R}^d$ , where $v$ is the concatenation of a set of features' vectors, that is $v = \langle f_1, \ldots , f_n \rangle $. Each feature vector $f_i$ is built by counting the presence of tweet's words in a set of lexical lists. The final representation of the tweet is built by averaging the single word vectors. Features. We argue that different kinds of features like the sentiment of the text, morality, and other text-based features are critical to detect the nonfactual Twitter accounts by utilizing their occurrence during reporting the news in an account's timeline. We employ a rich set of features borrowed from previous works in fake news, bias, and rumors detection BIBREF0, BIBREF1, BIBREF8, BIBREF9. [leftmargin=4mm] Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon. Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative. Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation). Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length. Words embeddings: We extract words embeddings of the words of the tweet using $Glove\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings. Model. To account for chunk sequences we make use of a de facto standard approach and opt for a recurrent neural model using long short-term memory (LSTM) BIBREF18. In our model, the sequence consists of a sequence of tweets belonging to one chunk (Figure FIGREF11). The LSTM learns the hidden state $h\textsubscript {t}$ by capturing the previous timesteps (past tweets). 
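Sketched below is the kind of dictionary-based feature extraction described above: each tweet is scored by counting its words against a set of lexical lists and by a few canonical style counts. The mini-lexicons here are placeholders rather than the actual NRC, SentiSense, or moral foundations resources, and the exact choice of style counts is an assumption.

import re

# Placeholder word lists standing in for the emotion / sentiment / morality
# lexicons used in the paper.
LEXICONS = {
    "joy":   {"happy", "great", "win"},
    "anger": {"furious", "outrage", "hate"},
    "care":  {"protect", "help", "support"},
}

def tweet_features(text):
    # Lexicon hit counts concatenated with simple stylistic counts.
    tokens = re.findall(r"\w+", text.lower())
    lexicon_counts = [sum(tok in words for tok in tokens) for words in LEXICONS.values()]
    style = [
        text.count("?"), text.count("!"), text.count("#"), text.count("@"),
        sum(ch.isupper() for ch in text) / max(1, len(text)),  # uppercase ratio
        len(tokens),                                           # tweet length
    ]
    return lexicon_counts + style

vector = tweet_features("Outrage! Officials HATE this simple trick #news")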
The produced hidden state $h\textsubscript {t}$ at each time step is passed to the attention layer which computes a `context' vector $c\textsubscript {t}$ as the weighted mean of the state sequence $h$ by: Where $T$ is the total number of timesteps in the input sequence and $\alpha \textsubscript {tj}$ is a weight computed at each time step $j$ for each state hj. Experiments and Results Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset. Baselines. We compare our approach (FacTweet) to the following set of baselines: [leftmargin=4mm] LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier. Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts. LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently. LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier. FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets. Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline. Experimental Setup. We apply a 5 cross-validation on the account's level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameters selection. We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16). 
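For concreteness, here is a NumPy sketch of the attention pooling described at the start of this passage: the context vector is the weighted mean of the LSTM state sequence, with weights obtained by normalising a per-timestep score. The dot-product scoring function is an assumption, since the exact scorer is not spelled out, so this is one plausible instantiation rather than the exact layer.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(states, scorer):
    # states: (T, d) hidden vectors, one per tweet in the chunk.
    # scorer: (d,) learned vector producing one scalar score per timestep.
    # Returns c = sum_j alpha_j * h_j, the weighted mean of the states.
    scores = states @ scorer
    alpha = softmax(scores)          # attention weights alpha_tj
    return alpha @ states            # (d,) context vector c_t

T, d = 20, 64                        # a chunk of 20 tweets, 64-dim LSTM states
hidden = np.random.randn(T, d)
context = attention_pool(hidden, np.random.randn(d))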
The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Discarding the classes' size in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance. Results. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunks sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. The sequence of these tweets is better captured by our proposed model sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well to detect the Twitter accounts factuality, since they obtain a result near to the majority class ($0.18$). Another finding from our experiments shows that the performance of the Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative to detect non-factual accounts. In Table TABREF25, we present ablation tests so as to quantify the contribution of subset of features. The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts use semantic and stylistic hidden signatures mostly while tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signature evolve across time and change across the accounts' timelines – for future work. Conclusions In this paper, we proposed a model that utilizes chunked timelines of tweets and a recurrent neural model in order to infer the factuality of a Twitter news account. Our experimental results indicate the importance of analyzing tweet stream into chunks, as well as the benefits of heterogeneous knowledge source (i.e., lexica as well as text) in order to capture factuality. In future work, we would like to extend this line of research with further in-depth analysis to understand the flow change of the used features in the accounts' streams. Moreover, we would like to take our approach one step further incorporating explicit temporal information, e.g., using timestamps. Crucially, we are also interested in developing a multilingual version of our approach, for instance by leveraging the now ubiquitous cross-lingual embeddings BIBREF22, BIBREF23.
[ "words embeddings, style, and morality features", "words embeddings, style, and morality features" ]
2,091
qasper_e
en
null
689421214f9b71be11d80ef9ae68b49974a8bd7e4313cd58
What datasets are used for evaluation?
Introduction Pre-trained models BIBREF0, BIBREF1 have received much of attention recently thanks to their impressive results in many down stream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-short cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing. Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To our best knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT) that supports 104 languages, (2) cross-lingual language model BIBREF6 that supports 100 languages, and (3) Language Agnostic SEntence Representations BIBREF7 that supports 93 languages. Among the three models, LASER is based on neural machine translation approach and strictly requires parallel data to train. Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are: We propose a fast adaptation method for obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU. We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer competitive performance or even better that mBERT. We illustrate that our bilingual LMs can serve as an excellent feature extractor in supervised dependency parsing task. Bilingual Pre-trained LMs We first provide some background of pre-trained language models. Let $_e$ be English word-embeddings and $\Psi ()$ be the Transformer BIBREF10 encoder with parameters $$. Let $_{w_i}$ denote the embedding of word $w_i$ (i.e., $_{w_i} = _e[w_1]$). We omit positional embeddings and bias for clarity. A pre-trained LM typically performs the following computations: (i) transform a sequence of input tokens to contextualized representations $[_{w_1},\dots ,_{w_n}] = \Psi (_{w_1}, \dots , _{w_n}; )$, and (ii) predict an output word $y_i$ at $i^{\text{th}}$ position $p(y_i | _{w_i}) \propto \exp (_{w_i}^\top _{y_i})$. 
Autoencoding LM BIBREF0 corrupts some input tokens $w_i$ by replacing them with a special token [MASK]. It then predicts the original tokens $y_i = w_i$ from the corrupted tokens. Autoregressive LM BIBREF3 predicts the next token ($y_i = w_{i+1}$) given all the previous tokens. The recently proposed XLNet model BIBREF5 is an autoregressive LM that factorizes output with all possible permutations, which shows empirical performance improvement over GPT-2 due to the ability to capture bidirectional context. Here we assume that the encoder performs necessary masking with respect to each training objective. Given an English pre-trained LM, we wish to learn a bilingual LM for English and a given target language $f$ under a limited computational resource budget. To quickly build a bilingual LM, we directly adapt the English pre-traind model to the target model. Our approach consists of three steps. First, we initialize target language word-embeddings $_f$ in the English embedding space such that embeddings of a target word and its English equivalents are close together (§SECREF8). Next, we create a target LM from the target embeddings and the English encoder $\Psi ()$. We then fine-tune target embeddings while keeping $\Psi ()$ fixed (§SECREF14). Finally, we construct a bilingual LM of $_e$, $_f$, and $\Psi ()$ and fine-tune all the parameters (§SECREF15). Figure FIGREF7 illustrates the last two steps in our approach. Bilingual Pre-trained LMs ::: Initializing Target Embeddings Our approach to learn the initial foreign word embeddings $_f \in ^{|V_f| \times d}$ is based on the idea of mapping the trained English word embeddings $_e \in ^{|V_e| \times d}$ to $_f$ such that if a foreign word and an English word are similar in meaning then their embeddings are similar. Borrowing the idea of universal lexical sharing from BIBREF11, we represent each foreign word embedding $_f[i] \in ^d$ as a linear combination of English word embeddings $_e[j] \in ^d$ where $_i\in ^{|V_e|}$ is a sparse vector and $\sum _j^{|V_e|} \alpha _{ij} = 1$. In this step of initializing foreign embeddings, having a good estimation of $$ could speed of the convergence when tuning the foreign model and enable zero-shot transfer (§SECREF5). In the following, we discuss how to estimate $_i\;\forall i\in \lbrace 1,2, \dots , |V_f|\rbrace $ under two scenarios: (i) we have parallel data of English-foreign, and (ii) we only rely on English and foreign monolingual data. Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Parallel Corpus Given an English-foreign parallel corpus, we can estimate word translation probability $p(e\,|\,f)$ for any (English-foreign) pair $(e, f)$ using popular word-alignment BIBREF12 toolkits such as fast-align BIBREF13. We then assign: Since $_i$ is estimated from word alignment, it is a sparse vector. Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Monolingual Corpus For low resource languages, parallel data may not be available. In this case, we rely only on monolingual data (e.g., Wikipedias). We estimate word translation probabilities from word embeddings of the two languages. Word vectors of these languages can be learned using fastText BIBREF14 and then are aligned into a shared space with English BIBREF15, BIBREF16. Unlike learning contextualized representations, learning word vectors is fast and computationally cheap. 
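A sketch of the parallel-data initialization described above: each foreign word embedding is the convex combination of the English embeddings of its aligned translations, weighted by p(e|f) from a word aligner such as fast-align. The data-structure choices (dictionaries of translation probabilities, a NumPy embedding matrix) and the random fallback for unaligned words are assumptions for illustration.

import numpy as np

def init_foreign_embeddings(E_en, en_word2id, foreign_vocab, trans_probs):
    # E_en: (|V_e|, d) trained English embeddings.
    # trans_probs[f]: {english_word: p(e|f)} estimated from word alignments;
    # the probabilities for each foreign word sum to one and are sparse.
    d = E_en.shape[1]
    # Foreign words with no alignment fall back to a N(0, 1/d^2) draw.
    E_f = np.random.normal(0.0, 1.0 / d, size=(len(foreign_vocab), d))
    for i, f in enumerate(foreign_vocab):
        pairs = [(en_word2id[e], p) for e, p in trans_probs.get(f, {}).items()
                 if e in en_word2id]
        if pairs:
            ids, weights = zip(*pairs)
            E_f[i] = np.average(E_en[list(ids)], axis=0, weights=weights)
    return E_f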
Given the aligned vectors $\bar{}_f$ of foreign and $\bar{}_e$ of English, we calculate the word translation matrix $\in ^{|V_f|\times |V_e|}$ as Here, we use $\mathrm {sparsemax}$ BIBREF17 instead of softmax. Sparsemax is a sparse version of softmax and it puts zero probabilities on most of the word in the English vocabulary except few English words that are similar to a given foreign word. This property is desirable in our approach since it leads to a better initialization of the foreign embeddings. Bilingual Pre-trained LMs ::: Fine-tuning Target Embeddings After initializing foreign word-embeddings, we replace English word-embeddings in the English pre-trained LM with foreign word-embeddings to obtain the foreign LM. We then fine-tune only foreign word-embeddings on monolingual data. The training objective is the same as the training objective of the English pre-trained LM (i.e., masked LM for BERT). Since the trained encoder $\Psi ()$ is good at capturing association, the purpose of this step is to further optimize target embeddings such that the target LM can utilized the trained encoder for association task. For example, if the words Albert Camus presented in a French input sequence, the self-attention in the encoder more likely attends to words absurde and existentialisme once their embeddings are tuned. Bilingual Pre-trained LMs ::: Fine-tuning Bilingual LM We create a bilingual LM by plugging foreign language specific parameters to the pre-trained English LM (Figure FIGREF7). The new model has two separate embedding layers and output layers, one for English and one for foreign language. The encoder layer in between is shared. We then fine-tune this model using English and foreign monolingual data. Here, we keep tuning the model on English to ensure that it does not forget what it has learned in English and that we can use the resulting model for zero-shot transfer (§SECREF3). In this step, the encoder parameters are also updated so that in can learn syntactic aspects (i.e., word order, morphological agreement) of the target languages. Zero-shot Experiments We build our bilingual LMs, named RAMEN, starting from BERT$_{\textsc {base}}$, BERT$_{\textsc {large}}$, RoBERTa$_{\textsc {base}}$, and RoBERTa$_{\textsc {large}}$ pre-trained models. Using BERT$_{\textsc {base}}$ allows us to compare the results with mBERT model. Using BERT$_{\textsc {large}}$ and RoBERTa allows us to investigate whether the performance of the target LM correlates with the performance of the source LM. We evaluate our models on two cross-lingual zero-shot tasks: (1) Cross-lingual Natural Language Inference (XNLI) and (2) dependency parsing. Zero-shot Experiments ::: Data We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic family respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource. For experiments that use parallel data to initialize foreign specific parameters, we use the same datasets in the work of BIBREF6. 
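The monolingual variant above builds the word translation matrix by applying sparsemax to similarity scores between the aligned fastText vectors. Below is a minimal sketch of that computation, assuming plain dot-product similarities: sparsemax (Martins and Astudillo, 2016) row-normalises the scores so that only a few English words receive non-zero probability for each foreign word.

import numpy as np

def sparsemax(z):
    # Sparsemax of a 1-D score vector: a sparse alternative to softmax that
    # assigns exactly zero probability to low-scoring entries.
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, len(z) + 1)
    support = ks[1 + ks * z_sorted > cumsum]
    k = support[-1]                       # size of the support set
    tau = (cumsum[k - 1] - 1.0) / k       # threshold
    return np.maximum(z - tau, 0.0)

def translation_matrix(X_f, X_e):
    # X_f: (|V_f|, d) and X_e: (|V_e|, d) word vectors aligned into one space.
    # Row i of the result is a sparse distribution over English words for
    # foreign word i.
    scores = X_f @ X_e.T
    return np.vstack([sparsemax(row) for row in scores])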
Specifically, we use United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018. For experiments that use only monolingual data to initialize foreign parameters, instead of training word-vectors from the scratch, we use the pre-trained word vectors from fastText BIBREF14 to estimate word translation probabilities (Eq. DISPLAY_FORM13). We align these vectors into a common space using orthogonal Procrustes BIBREF20, BIBREF15, BIBREF16. We only use identical words between the two languages as the supervised signal. We use WikiExtractor to extract extract raw sentences from Wikipedias as monolingual data for fine-tuning target embeddings and bilingual LMs (§SECREF15). We do not lowercase or remove accents in our data preprocessing pipeline. We tokenize English using the provided tokenizer from pre-trained models. For target languages, we use fastBPE to learn 30,000 BPE codes and 50,000 codes when transferring from BERT and RoBERTa respectively. We truncate the BPE vocabulary of foreign languages to match the size of the English vocabulary in the source models. Precisely, the size of foreign vocabulary is set to 32,000 when transferring from BERT and 50,000 when transferring from RoBERTa. We use XNLI dataset BIBREF9 for classification task and Universal Dependencies v2.4 BIBREF21 for parsing task. Since a language might have more than one treebank in Universal Dependencies, we use the following treebanks: en_ewt (English), fr_gsd (French), ru_syntagrus (Russian) ar_padt (Arabic), vi_vtb (Vietnamese), hi_hdtb (Hindi), and zh_gsd (Chinese). Zero-shot Experiments ::: Data ::: Remark on BPE BIBREF22 show that sharing subwords between languages improves alignments between embedding spaces. BIBREF2 observe a strong correlation between the percentage of overlapping subwords and mBERT's performances for cross-lingual zero-shot transfer. However, in our current approach, subwords between source and target are not shared. A subword that is in both English and foreign vocabulary has two different embeddings. Zero-shot Experiments ::: Estimating translation probabilities Since pre-trained models operate on subword level, we need to estimate subword translation probabilities. Therefore, we subsample 2M sentence pairs from each parallel corpus and tokenize the data into subwords before running fast-align BIBREF13. Estimating subword translation probabilities from aligned word vectors requires an additional processing step since the provided vectors from fastText are not at subword level. We use the following approximation to obtain subword vectors: the vector $_s$ of subword $s$ is the weighted average of all the aligned word vectors $_{w_i}$ that have $s$ as an subword where $p(w_j)$ is the unigram probability of word $w_j$ and $n_s = \sum _{w_j:\, s\in w_j} p(w_j)$. We take the top 50,000 words in each aligned word-vectors to compute subword vectors. In both cases, not all the words in the foreign vocabulary can be initialized from the English word-embeddings. Those words are initialized randomly from a Gaussian $\mathcal {N}(0, {1}{d^2})$. Zero-shot Experiments ::: Hyper-parameters In all the experiments, we tune RAMEN$_{\textsc {base}}$ for 175,000 updates and RAMEN$_{\textsc {large}}$ for 275,000 updates where the first 25,000 updates are for language specific parameters. The sequence length is set to 256. 
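The subword-vector approximation above (a subword inherits the unigram-weighted average of the vectors of all words containing it) can be sketched as follows. The substring test is a simplification, since in practice one would check the actual BPE segmentation of each word, and the dictionary-based interfaces are assumptions.

import numpy as np

def subword_vector(subword, word_vectors, unigram_probs):
    # word_vectors: word -> aligned fastText vector (e.g. the top 50,000 words);
    # unigram_probs: word -> p(w). Returns the weighted average
    # v_s = (1/n_s) * sum_{w containing s} p(w) * v_w, or None if no word matches.
    weighted_sum, total_prob = 0.0, 0.0
    for w, vec in word_vectors.items():
        if subword in w:                 # simplification of "s is a subword of w"
            weighted_sum = weighted_sum + unigram_probs[w] * vec
            total_prob += unigram_probs[w]
    return weighted_sum / total_prob if total_prob > 0 else None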
The mini-batch size are 64 and 24 when tuning language specific parameters using RAMEN$_{\textsc {base}}$ and RAMEN$_{\textsc {large}}$ respectively. For tuning bilingual LMs, we use a mini-batch size of 64 for RAMEN$_{\textsc {base}}$ and 24 for RAMEN$_{\textsc {large}}$ where half of the batch are English sequences and the other half are foreign sequences. This strategy of balancing mini-batch has been used in multilingual neural machine translation BIBREF23, BIBREF24. We optimize RAMEN$_{\textsc {base}}$ using Lookahead optimizer BIBREF25 wrapped around Adam with the learning rate of $10^{-4}$, the number of fast weight updates $k=5$, and interpolation parameter $\alpha =0.5$. We choose Lookahead optimizer because it has been shown to be robust to the initial parameters of the based optimizer (Adam). For Adam optimizer, we linearly increase the learning rate from $10^{-7}$ to $10^{-4}$ in the first 4000 updates and then follow an inverse square root decay. All RAMEN$_{\textsc {large}}$ models are optimized with Adam due to memory limit. When fine-tuning RAMEN on XNLI and UD, we use a mini-batch size of 32, Adam's learning rate of $10^{-5}$. The number of epochs are set to 4 and 50 for XNLI and UD tasks respectively. All experiments are carried out on a single Tesla V100 16GB GPU. Each RAMEN$_{\textsc {base}}$ model is trained within a day and each RAMEN$_{\textsc {large}}$ is trained within two days. Results In this section, we present the results of out models for two zero-shot cross lingual transfer tasks: XNLI and dependency parsing. Results ::: Cross-lingual Natural Language Inference Table TABREF32 shows the XNLI test accuracy. For reference, we also include the scores from the previous work, notably the state-of-the-art system XLM BIBREF6. Before discussing the results, we spell out that the fairest comparison in this experiment is the comparison between mBERT and RAMEN$_{\textsc {base}}$+BERT trained with monolingual only. We first discuss the transfer results from BERT. Initialized from fastText vectors, RAMEN$_{\textsc {base}}$ slightly outperforms mBERT by 1.9 points on average and widen the gap of 3.3 points on Arabic. RAMEN$_{\textsc {base}}$ gains extra 0.8 points on average when initialized from parallel data. With triple number of parameters, RAMEN$_{\textsc {large}}$ offers an additional boost in term of accuracy and initialization with parallel data consistently improves the performance. It has been shown that BERT$_{\textsc {large}}$ significantly outperforms BERT$_{\textsc {base}}$ on 11 English NLP tasks BIBREF0, the strength of BERT$_{\textsc {large}}$ also shows up when adapted to foreign languages. Transferring from RoBERTa leads to better zero-shot accuracies. With the same initializing condition, RAMEN$_{\textsc {base}}$+RoBERTa outperforms RAMEN$_{\textsc {base}}$+BERT on average by 2.9 and 2.3 points when initializing from monolingual and parallel data respectively. This result show that with similar number of parameters, our approach benefits from a better English pre-trained model. When transferring from RoBERTa$_{\textsc {large}}$, we obtain state-of-the-art results for five languages. Currently, RAMEN only uses parallel data to initialize foreign embeddings. RAMEN can also exploit parallel data through translation objective proposed in XLM. We believe that by utilizing parallel data during the fine-tuning of RAMEN would bring additional benefits for zero-shot tasks. We leave this exploration to future work. 
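The Adam learning-rate schedule mentioned in this hyper-parameter description (linear warm-up from 1e-7 to 1e-4 over the first 4,000 updates, then inverse square root decay) could look like the sketch below; the exact decay constant is an assumption, chosen so the schedule is continuous at the end of warm-up.

def learning_rate(step, warmup=4000, lr_min=1e-7, lr_max=1e-4):
    # Linear ramp during warm-up, then lr_max * sqrt(warmup / step).
    if step < warmup:
        return lr_min + (lr_max - lr_min) * step / warmup
    return lr_max * (warmup / step) ** 0.5

# e.g. learning_rate(0) == 1e-7, learning_rate(4000) == 1e-4,
#      learning_rate(16000) == 5e-5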
In summary, starting from BERT$_{\textsc {base}}$, our approach obtains competitive bilingual LMs with mBERT for zero-shot XNLI. Our approach shows the accuracy gains when adapting from a better pre-trained model. Results ::: Universal Dependency Parsing We build on top of RAMEN a graph-based dependency parser BIBREF27. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing. We first look at the fairest comparison between mBERT and monolingually initialized RAMEN$_{\textsc {base}}$+BERT. The latter outperforms the former on five languages except Arabic. We observe the largest gain of +5.2 LAS for French. Chinese enjoys +3.1 LAS from our approach. With similar architecture (12 or 24 layers) and initialization (using monolingual or parallel data), RAMEN+RoBERTa performs better than RAMEN+BERT for most of the languages. Arabic and Hindi benefit the most from bigger models. For the other four languages, RAMEN$_{\textsc {large}}$ renders a modest improvement over RAMEN$_{\textsc {base}}$. Analysis ::: Impact of initialization Initializing foreign embeddings is the backbone of our approach. A good initialization leads to better zero-shot transfer results and enables fast adaptation. To verify the importance of a good initialization, we train a RAMEN$_{\textsc {base}}$+RoBERTa with foreign word-embeddings are initialized randomly from $\mathcal {N}(0, {1}{d^2})$. For a fair comparison, we use the same hyper-parameters in §SECREF27. Table TABREF36 shows the results of XNLI and UD parsing of random initialization. In comparison to the initialization using aligned fastText vectors, random initialization decreases the zero-shot performance of RAMEN$_{\textsc {base}}$ by 15.9% for XNLI and 27.8 points for UD parsing on average. We also see that zero-shot parsing of SOV languages (Arabic and Hindi) suffers random initialization. Analysis ::: Are contextual representations from RAMEN also good for supervised parsing? All the RAMEN models are built from English and tuned on English for zero-shot cross-lingual tasks. It is reasonable to expect RAMENs do well in those tasks as we have shown in our experiments. But are they also a good feature extractor for supervised tasks? We offer a partial answer to this question by evaluating our model for supervised dependency parsing on UD datasets. We used train/dev/test splits provided in UD to train and evaluate our RAMEN-based parser. Table TABREF38 summarizes the results (LAS) of our supervised parser. For a fair comparison, we choose mBERT as the baseline and all the RAMEN models are initialized from aligned fastText vectors. With the same architecture of 12 Transformer layers, RAMEN$_{\textsc {base}}$+BERT performs competitive to mBERT and outshines mBERT by +1.2 points for Vietnamese. The best LAS results are obtained by RAMEN$_{\textsc {large}}$+RoBERTa with 24 Transformer layers. Overall, our results indicate the potential of using contextual representations from RAMEN for supervised tasks. Analysis ::: How does linguistic knowledge transfer happen through each training stages? 
We evaluate the performance of RAMEN+RoBERTa$_{\textsc {base}}$ (initialized from monolingual data) at each training steps: initialization of word embeddings (0K update), fine-tuning target embeddings (25K), and fine-tuning the model on both English and target language (at each 25K updates). The results are presented in Figure FIGREF40. Without fine-tuning, the average accuracy of XLNI is 39.7% for a three-ways classification task, and the average LAS score is 3.6 for dependency parsing. We see the biggest leap in the performance after 50K updates. While semantic similarity task profits significantly at 25K updates of the target embeddings, syntactic task benefits with further fine-tuning the encoder. This is expected since the target languages might exhibit different syntactic structures than English and fine-tuning encoder helps to capture language specific structures. We observe a substantial gain of 19-30 LAS for all languages except French after 50K updates. Language similarities have more impact on transferring syntax than semantics. Without tuning the English encoder, French enjoys 50.3 LAS for being closely related to English, whereas Arabic and Hindi, SOV languages, modestly reach 4.2 and 6.4 points using the SVO encoder. Although Chinese has SVO order, it is often seen as head-final while English is strong head-initial. Perhaps, this explains the poor performance for Chinese. Limitations While we have successfully adapted autoencoding pre-trained LMs from English to other languages, the question whether our approach can also be applied for autoregressive LM such as XLNet still remains. We leave the investigation to future work. Conclusions In this work, we have presented a simple and effective approach for rapidly building a bilingual LM under a limited computational budget. Using BERT as the starting point, we demonstrate that our approach produces better than mBERT on two cross-lingual zero-shot sentence classification and dependency parsing. We find that the performance of our bilingual LM, RAMEN, correlates with the performance of the original pre-trained English models. We also find that RAMEN is also a powerful feature extractor in supervised dependency parsing. Finally, we hope that our work sparks of interest in developing fast and effective methods for transferring pre-trained English models to other languages.
[ "United Nations Parallel Corpus, IIT Bombay corpus, OpenSubtitles 2018" ]
3,405
qasper_e
en
null
57825167e9b5b23b2f03e2675d9c7e96814b6785975b5dff
For which languages do they build word embeddings for?
Introduction Low dimensional word representations (embeddings) have become a key component in modern NLP systems for language modeling, parsing, sentiment classification, and many others. These embeddings are usually derived by employing the distributional hypothesis: that similar words appear in similar contexts BIBREF0 . The models that perform the word embedding can be divided into two classes: predictive, which learn a target or context word distribution, and counting, which use a raw, weighted, or factored word-context co-occurrence matrix BIBREF1 . The most well-known predictive model, which has become eponymous with word embedding, is word2vec BIBREF2 . Popular counting models include PPMI-SVD BIBREF3 , GloVe BIBREF4 , and LexVec BIBREF5 . These models all learn word-level representations, which presents two main problems: 1) Learned information is not explicitly shared among the representations as each word has an independent vector. 2) There is no clear way to represent out-of-vocabulary (OOV) words. fastText BIBREF6 addresses these issues in the Skip-gram word2vec model by representing a word by the sum of a unique vector and a set of shared character n-grams (from hereon simply referred to as n-grams) vectors. This addresses both issues above as learned information is shared through the n-gram vectors and from these OOV word representations can be constructed. In this paper we propose incorporating subword information into counting models using a strategy similar to fastText. We use LexVec as the counting model as it generally outperforms PPMI-SVD and GloVe on intrinsic and extrinsic evaluations BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , but the method proposed here should transfer to GloVe unchanged. The LexVec objective is modified such that a word's vector is the sum of all its subword vectors. We compare 1) the use of n-gram subwords, like fastText, and 2) unsupervised morphemes identified using Morfessor BIBREF11 to learn whether more linguistically motivated subwords offer any advantage over simple n-grams. To evaluate the impact subword information has on in-vocabulary (IV) word representations, we run intrinsic evaluations consisting of word similarity and word analogy tasks. The incorporation of subword information results in similar gains (and losses) to that of fastText over Skip-gram. Whereas incorporating n-gram subwords tends to capture more syntactic information, unsupervised morphemes better preserve semantics while also improving syntactic results. Given that intrinsic performance can correlate poorly with performance on downstream tasks BIBREF12 , we also conduct evaluation using the VecEval suite of tasks BIBREF13 , in which all subword models, including fastText, show no significant improvement over word-level models. We verify the model's ability to represent OOV words by quantitatively evaluating nearest-neighbors. Results show that, like fastText, both LexVec n-gram and (to a lesser degree) unsupervised morpheme models give coherent answers. This paper discusses related word ( $§$ "Related Work" ), introduces the subword LexVec model ( $§$ "Subword LexVec" ), describes experiments ( $§$ "Materials" ), analyzes results ( $§$ "Results" ), and concludes with ideas for future works ( $§$ "Conclusion and Future Work" ). Related Work Word embeddings that leverage subword information were first introduced by BIBREF14 which represented a word of as the sum of four-gram vectors obtained running an SVD of a four-gram to four-gram co-occurrence matrix. 
Our model differs by learning the subword vectors and resulting representation jointly as weighted factorization of a word-context co-occurrence matrix is performed. There are many models that use character-level subword information to form word representations BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , as well as fastText (the model on which we base our work). Closely related are models that use morphological segmentation in learning word representations BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Our model also uses n-grams and morphological segmentation, but it performs explicit matrix factorization to learn subword and word representations, unlike these related models which mostly use neural networks. Finally, BIBREF26 and BIBREF27 retrofit morphological information onto pre-trained models. These differ from our work in that we incorporate morphological information at training time, and that only BIBREF26 is able to generate embeddings for OOV words. Subword LexVec The LexVec BIBREF7 model factorizes the PPMI-weighted word-context co-occurrence matrix using stochastic gradient descent. $$PPMI_{wc} = max(0, \log \frac{M_{wc} \; M_{**}}{ M_{w*} \; M_{*c} })$$ (Eq. 3) where $M$ is the word-context co-occurrence matrix constructed by sliding a window of fixed size centered over every target word $w$ in the subsampled BIBREF2 training corpus and incrementing cell $M_{wc}$ for every context word $c$ appearing within this window (forming a $(w,c)$ pair). LexVec adjusts the PPMI matrix using context distribution smoothing BIBREF3 . With the PPMI matrix calculated, the sliding window process is repeated and the following loss functions are minimized for every observed $(w,c)$ pair and target word $w$ : $$L_{wc} &= \frac{1}{2} (u_w^\top v_c - PPMI_{wc})^2 \\ L_{w} &= \frac{1}{2} \sum \limits _{i=1}^k{\mathbf {E}_{c_i \sim P_n(c)} (u_w^\top v_{c_i} - PPMI_{wc_i})^2 }$$ (Eq. 4) where $u_w$ and $v_c$ are $d$ -dimensional word and context vectors. The second loss function describes how, for each target word, $k$ negative samples BIBREF2 are drawn from the smoothed context unigram distribution. Given a set of subwords $S_w$ for a word $w$ , we follow fastText and replace $u_w$ in eq:lexvec2,eq:lexvec3 by $u^{\prime }_w$ such that: $$u^{\prime }_w = \frac{1}{|S_w| + 1} (u_w + \sum _{s \in S_w} q_{hash(s)})$$ (Eq. 5) such that a word is the sum of its word vector and its $d$ -dimensional subword vectors $q_x$ . The number of possible subwords is very large so the function $hash(s)$ hashes a subword to the interval $[1, buckets]$ . For OOV words, $$u^{\prime }_w = \frac{1}{|S_w|} \sum _{s \in S_w} q_{hash(s)}$$ (Eq. 7) We compare two types of subwords: simple n-grams (like fastText) and unsupervised morphemes. For example, given the word “cat”, we mark beginning and end with angled brackets and use all n-grams of length 3 to 6 as subwords, yielding $S_{\textnormal {cat}} = \lbrace \textnormal {$ $ ca, at$ $, cat} \rbrace $ . Morfessor BIBREF11 is used to probabilistically segment words into morphemes. The Morfessor model is trained using raw text so it is entirely unsupervised. For the word “subsequent”, we get $S_{\textnormal {subsequent}} = \lbrace \textnormal {$ $ sub, sequent$ $} \rbrace $ . Materials Our experiments aim to measure if the incorporation of subword information into LexVec results in similar improvements as observed in moving from Skip-gram to fastText, and whether unsupervised morphemes offer any advantage over n-grams. 
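A small sketch of the subword machinery defined above: extract the bracketed character 3-6-grams of a word and compose its vector as the average of its own vector plus its hashed subword vectors (Eq. 5), or of the subword vectors alone for OOV words (Eq. 7). Python's built-in hash is a stand-in for whatever hash function the implementation uses, and the array-of-buckets storage is an assumption.

import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    # All character n-grams of the word with begin/end markers added,
    # e.g. "cat" -> ["<ca", "cat", "at>", "<cat", "cat>", "<cat>"].
    marked = "<" + word + ">"
    return [marked[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(marked) - n + 1)]

def compose_word_vector(word, word_vecs, subword_vecs, buckets=2_000_000, oov=False):
    # subword_vecs: array of shape (buckets, d); word_vecs: dict word -> vector.
    # IV words average their own vector together with their subword vectors
    # (Eq. 5); OOV words average subword vectors only (Eq. 7).
    parts = [subword_vecs[hash(g) % buckets] for g in char_ngrams(word)]
    if not oov:
        parts.append(word_vecs[word])
    return np.mean(parts, axis=0)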
For IV words, we perform intrinsic evaluation via word similarity and word analogy tasks, as well as downstream tasks. OOV word representation is tested through qualitative nearest-neighbor analysis. All models are trained using a 2015 dump of Wikipedia, lowercased and using only alphanumeric characters. Vocabulary is limited to words that appear at least 100 times for a total of 303517 words. Morfessor is trained on this vocabulary list. We train the standard LexVec (LV), LexVec using n-grams (LV-N), and LexVec using unsupervised morphemes (LV-M) using the same hyper-parameters as BIBREF7 ( $\textnormal {window} = 2$ , $\textnormal {initial learning rate} = .025$ , $\textnormal {subsampling} = 10^{-5}$ , $\textnormal {negative samples} = 5$ , $\textnormal {context distribution smoothing} = .75$ , $\textnormal {positional contexts} = \textnormal {True}$ ). Both Skip-gram (SG) and fastText (FT) are trained using the reference implementation of fastText with the hyper-parameters given by BIBREF6 ( $\textnormal {window} = 5$ , $\textnormal {initial learning rate} = .025$ , $\textnormal {subsampling} = 10^{-4}$ , $\textnormal {negative samples} = 5$ ). All five models are run for 5 iterations over the training corpus and generate 300 dimensional word representations. LV-N, LV-M, and FT use 2000000 buckets when hashing subwords. For word similarity evaluations, we use the WordSim-353 Similarity (WS-Sim) and Relatedness (WS-Rel) BIBREF28 and SimLex-999 (SimLex) BIBREF29 datasets, and the Rare Word (RW) BIBREF20 dataset to verify if subword information improves rare word representation. Relationships are measured using the Google semantic (GSem) and syntactic (GSyn) analogies BIBREF2 and the Microsoft syntactic analogies (MSR) dataset BIBREF30 . We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13 , using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings. Finally, we use LV-N, LV-M, and FT to generate OOV word representations for the following words: 1) “hellooo”: a greeting commonly used in instant messaging which emphasizes a syllable. 2) “marvelicious”: a made-up word obtained by merging “marvelous” and “delicious”. 3) “louisana”: a misspelling of the proper name “Louisiana”. 4) “rereread”: recursive use of prefix “re”. 5) “tuzread”: made-up prefix “tuz”. Results Results for IV evaluation are shown in tab:intrinsic, and for OOV in tab:oov. Like in FT, the use of subword information in both LV-N and LV-M results in 1) better representation of rare words, as evidenced by the increase in RW correlation, and 2) significant improvement on the GSyn and MSR tasks, in evidence of subwords encoding information about a word's syntactic function (the suffix “ly”, for example, suggests an adverb). There seems to a trade-off between capturing semantics and syntax as in both LV-N and FT there is an accompanying decrease on the GSem tasks in exchange for gains on the GSyn and MSR tasks. Morphological segmentation in LV-M appears to favor syntax less strongly than do simple n-grams. 
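The word-similarity evaluations listed here all follow the same protocol, sketched below: score each word pair by the model's cosine similarity and report Spearman's correlation against the human ratings. The get_vector callable is an assumption that lets the same code score subword models on OOV pairs (e.g. in RW).

import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_similarity_eval(pairs, get_vector):
    # pairs: iterable of (word1, word2, human_score) rows from a dataset such
    # as WS-Sim, WS-Rel, SimLex-999 or RW.
    model_scores, gold = [], []
    for w1, w2, human_score in pairs:
        model_scores.append(cosine(get_vector(w1), get_vector(w2)))
        gold.append(human_score)
    return spearmanr(model_scores, gold).correlation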
On the downstream tasks, we only observe statistically significant ( $p < .05$ under a random permutation test) improvement on the chunking task, and it is a very small gain. We attribute this to both regular and subword models having very similar quality on frequent IV word representation. Statistically, these are the words are that are most likely to appear in the downstream task instances, and so the superior representation of rare words has, due to their nature, little impact on overall accuracy. Because in all tasks OOV words are mapped to the “ $\langle $ unk $\rangle $ ” token, the subword models are not being used to the fullest, and in future work we will investigate whether generating representations for all words improves task performance. In OOV representation (tab:oov), LV-N and FT work almost identically, as is to be expected. Both find highly coherent neighbors for the words “hellooo”, “marvelicious”, and “rereread”. Interestingly, the misspelling of “louisana” leads to coherent name-like neighbors, although none is the expected correct spelling “louisiana”. All models stumble on the made-up prefix “tuz”. A possible fix would be to down-weigh very rare subwords in the vector summation. LV-M is less robust than LV-N and FT on this task as it is highly sensitive to incorrect segmentation, exemplified in the “hellooo” example. Finally, we see that nearest-neighbors are a mixture of similarly pre/suffixed words. If these pre/suffixes are semantic, the neighbors are semantically related, else if syntactic they have similar syntactic function. This suggests that it should be possible to get tunable representations which are more driven by semantics or syntax by a weighted summation of subword vectors, given we can identify whether a pre/suffix is semantic or syntactic in nature and weigh them accordingly. This might be possible without supervision using corpus statistics as syntactic subwords are likely to be more frequent, and so could be down-weighted for more semantic representations. This is something we will pursue in future work. Conclusion and Future Work In this paper, we incorporated subword information (simple n-grams and unsupervised morphemes) into the LexVec word embedding model and evaluated its impact on the resulting IV and OOV word vectors. Like fastText, subword LexVec learns better representations for rare words than its word-level counterpart. All models generated coherent representations for OOV words, with simple n-grams demonstrating more robustness than unsupervised morphemes. In future work, we will verify whether using OOV representations in downstream tasks improves performance. We will also explore the trade-off between semantics and syntax when subword information is used.
[ "Unanswerable", "English" ]
2,009
qasper_e
en
null
c5692a8d9bf0880f0cb4d893511e2b9245a914f828cebcf6
Is the dataset balanced between speakers of different L1s?
Introduction Several learner corpora have been compiled for English, such as the International Corpus of Learner English BIBREF0 . The importance of such resources has been increasingly recognized across a variety of research areas, from Second Language Acquisition to Natural Language Processing. Recently, we have seen substantial growth in this area and new corpora for languages other than English have appeared. For Romance languages, there are a several corpora and resources for French, Spanish BIBREF1 , and Italian BIBREF2 . Portuguese has also received attention in the compilation of learner corpora. There are two corpora compiled at the School of Arts and Humanities of the University of Lisbon: the corpus Recolha de dados de Aprendizagem do Português Língua Estrangeira (hereafter, Leiria corpus), with 470 texts and 70,500 tokens, and the Learner Corpus of Portuguese as Second/Foreign Language, COPLE2 BIBREF3 , with 1,058 texts and 201,921 tokens. The Corpus de Produções Escritas de Aprendentes de PL2, PEAPL2 compiled at the University of Coimbra, contains 516 texts and 119,381 tokens. Finally, the Corpus de Aquisição de L2, CAL2, compiled at the New University of Lisbon, contains 1,380 texts and 281,301 words, and it includes texts produced by adults and children, as well as a spoken subset. The aforementioned Portuguese learner corpora contain very useful data for research, particularly for Native Language Identification (NLI), a task that has received much attention in recent years. NLI is the task of determining the native language (L1) of an author based on their second language (L2) linguistic productions BIBREF4 . NLI works by identifying language use patterns that are common to groups of speakers of the same native language. This process is underpinned by the presupposition that an author’s L1 disposes them towards certain language production patterns in their L2, as influenced by their mother tongue. A major motivation for NLI is studying second language acquisition. NLI models can enable analysis of inter-L1 linguistic differences, allowing us to study the language learning process and develop L1-specific pedagogical methods and materials. However, there are limitations to using existing Portuguese data for NLI. An important issue is that the different corpora each contain data collected from different L1 backgrounds in varying amounts; they would need to be combined to have sufficient data for an NLI study. Another challenge concerns the annotations as only two of the corpora (PEAPL2 and COPLE2) are linguistically annotated, and this is limited to POS tags. The different data formats used by each corpus presents yet another challenge to their usage. In this paper we present NLI-PT, a dataset collected for Portuguese NLI. The dataset is made freely available for research purposes. With the goal of unifying learner data collected from various sources, listed in Section "Collection methodology" , we applied a methodology which has been previously used for the compilation of language variety corpora BIBREF5 . The data was converted to a unified data format and uniformly annotated at different linguistic levels as described in Section "Preprocessing and annotation of texts" . To the best of our knowledge, NLI-PT is the only Portuguese dataset developed specifically for NLI, this will open avenues for research in this area. Related Work NLI has attracted a lot of attention in recent years. 
Due to the availability of suitable data, as discussed earlier, this attention has been particularly focused on English. The most notable examples are the two editions of the NLI shared task organized in 2013 BIBREF6 and 2017 BIBREF7 . Even though most NLI research has been carried out on English data, an important research trend in recent years has been the application of NLI methods to other languages, as discussed in multilingual-nli. Recent NLI studies on languages other than English include Arabic BIBREF8 and Chinese BIBREF9 , BIBREF10 . To the best of our knowledge, no study has been published on Portuguese and the NLI-PT dataset opens new possibilities of research for Portuguese. In Section "A Baseline for Portuguese NLI" we present the first simple baseline results for this task. Finally, as NLI-PT can be used in other applications besides NLI, it is important to point out that a number of studies have been published on educational NLP applications for Portuguese and on the compilation of learner language resources for Portuguese. Examples of such studies include grammatical error correction BIBREF11 , automated essay scoring BIBREF12 , academic word lists BIBREF13 , and the learner corpora presented in the previous section. Collection methodology The data was collected from three different learner corpora of Portuguese: (i) COPLE2; (ii) Leiria corpus, and (iii) PEAPL2 as presented in Table 1 . The three corpora contain written productions from learners of Portuguese with different proficiency levels and native languages (L1s). In the dataset we included all the data in COPLE2 and sections of PEAPL2 and Leiria corpus. The main variable we used for text selection was the presence of specific L1s. Since the three corpora consider different L1s, we decided to use the L1s present in the largest corpus, COPLE2, as the reference. Therefore, we included in the dataset texts corresponding to the following 15 L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. It was the case that some of the L1s present in COPLE2 were not documented in the other corpora. The number of texts from each L1 is presented in Table 2 . Concerning the corpus design, there is some variability among the sources we used. Leiria corpus and PEAPL2 followed a similar approach for data collection and show a close design. They consider a close list of topics, called “stimulus”, which belong to three general areas: (i) the individual; (ii) the society; (iii) the environment. Those topics are presented to the students in order to produce a written text. As a whole, texts from PEAPL2 and Leiria represent 36 different stimuli or topics in the dataset. In COPLE2 corpus the written texts correspond to written exercises done during Portuguese lessons, or to official Portuguese proficiency tests. For this reason, the topics considered in COPLE2 corpus are different from the topics in Leiria and PEAPL2. The number of topics is also larger in COPLE2 corpus: 149 different topics. There is some overlap between the different topics considered in COPLE2, that is, some topics deal with the same subject. This overlap allowed us to reorganize COPLE2 topics in our dataset, reducing them to 112. Due to the different distribution of topics in the source corpora, the 148 topics in the dataset are not represented uniformly. Three topics account for a 48.7% of the total texts and, on the other hand, a 72% of the topics are represented by 1-10 texts (Figure 1 ). 
This variability also affects text length. The longest text has 787 tokens and the shortest has only 16 tokens. Most texts, however, range roughly from 150 to 250 tokens. To better understand the distribution of texts in terms of word length we plot a histogram of all texts with their word length in bins of 10 (1-10 tokens, 11-20 tokens, 21-30 tokens and so on) (Figure 2 ). The three corpora use the proficiency levels defined in the Common European Framework of Reference for Languages (CEFR), but they show differences in the number of levels they consider. There are five proficiency levels in COPLE2 and PEAPL2: A1, A2, B1, B2, and C1. In the Leiria corpus, however, there are only three levels: A, B, and C. The number of texts included from each proficiency level is presented in Table 4 . Preprocessing and annotation of texts As noted above, these learner corpora use different formats. COPLE2 is mainly codified in XML, although it gives the possibility of getting the student version of the essay in TXT format. PEAPL2 and Leiria corpus are compiled in TXT format. In both corpora, the TXT files contain the student version with special annotations from the transcription. For the NLI experiments we were interested in a clean TXT version of the students' texts, together with versions annotated at different linguistic levels. Therefore, as a first step, we removed all the annotations corresponding to the transcription process in the PEAPL2 and Leiria files. As a second step, we proceeded to the linguistic annotation of the texts using different NLP tools. We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only the word class, and a fine-grained POS, which is the word class plus its morphological features. We used the LX Parser BIBREF14 for the simple POS and the Portuguese morphological module of Freeling BIBREF15 for the detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing we used the LX Parser, and for dependency parsing the DepPattern toolkit BIBREF16 . Applications NLI-PT was developed primarily for NLI, but it can be used for other research purposes ranging from second language acquisition to educational NLP applications. Here are a few examples of applications in which the dataset can be used: A Baseline for Portuguese NLI To demonstrate the usefulness of the dataset we present the first lexical baseline for Portuguese NLI using a sub-set of NLI-PT. To the best of our knowledge, no study has been published on Portuguese NLI and our work fills this gap. In this experiment we included the five L1s with the largest number of texts in NLI-PT and ran a simple linear SVM BIBREF21 classifier using a bag-of-words model to identify the L1 of each text. The languages included in this experiment were Chinese (355 texts), English (236 texts), German (214 texts), Italian (216 texts), and Spanish (271 texts). We evaluated the model using stratified 10-fold cross-validation, achieving 70% accuracy. An important limitation of this experiment is that it does not account for topic bias, an important issue in NLI BIBREF22 . This is due to the fact that NLI-PT is not balanced by topic and the model could be learning topic associations instead. In future work we would like to carry out experiments using syntactic features such as function words, syntactic relations and POS annotation.
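The baseline above (bag-of-words features, a linear SVM, stratified 10-fold cross-validation over the five largest L1s) is straightforward to reproduce. The sketch below uses scikit-learn; since the text does not specify the SVM settings, library defaults are assumed, and the `texts`/`l1_labels` names are hypothetical placeholders for the NLI-PT sub-set.

```python
# Minimal sketch of the NLI-PT lexical baseline: bag-of-words + linear SVM,
# evaluated with stratified 10-fold cross-validation (settings assumed).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

def evaluate_bow_svm(texts, l1_labels):
    """Mean stratified 10-fold CV accuracy for a bag-of-words linear SVM."""
    pipeline = make_pipeline(
        CountVectorizer(),   # bag-of-words term counts
        LinearSVC(),         # simple linear SVM; defaults assumed, not from the paper
    )
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(pipeline, texts, l1_labels, cv=cv, scoring="accuracy")
    return scores.mean()

# Usage (hypothetical): texts and l1_labels hold the essays and L1 labels of the
# five largest L1s (Chinese, English, German, Italian, Spanish).
# print(evaluate_bow_svm(texts, l1_labels))
```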
Conclusion and Future Work This paper presented NLI-PT, the first Portuguese dataset compiled for NLI. NLI-PT contains 1,868 texts written by speakers of 15 L1s amounting to over 380,000 tokens. As discussed in Section "Applications" , NLI-PT opens several avenues for future research. It can be used for different research purposes beyond NLI such as grammatical error correction and CALL. An experiment with the texts written by the speakers of five L1s: Chinese, English, German, Italian, and Spanish using a bag of words model achieved 70% accuracy. We are currently experimenting with different features taking advantage of the annotation available in NLI-PT thus reducing topic bias in classification. In future work we would like to include more texts in the dataset following the same methodology and annotation. Acknowledgement We want to thank the research teams that have made available the data we used in this work: Centro de Estudos de Linguística Geral e Aplicada at Universidade de Coimbra (specially Cristina Martins) and Centro de Linguística da Universidade de Lisboa (particularly Amália Mendes). This work was partially supported by Fundação para a Ciência e a Tecnologia (postdoctoral research grant SFRH/BPD/109914/2015).
[ "No", "No" ]
1,899
qasper_e
en
null
ba8b9314903599bb79e4a4148ebae6ae59c5d717a1a2a936
How large is the collection of COVID-19 literature?
Introduction Coronavirus disease 2019 (COVID-19) is an infectious disease that has affected more than one million individuals all over the world and caused more than 55,000 deaths, as of April 3 in 2020. The science community has been working very actively to understand this new disease and make diagnosis and treatment guidelines based on the findings. One major stream of efforts are focused on discovering the correlation between radiological findings in the lung areas and COVID-19. There have been several works BIBREF0, BIBREF1 publishing such results. However, existing studies are mostly conducted separately by different hospitals and medical institutes. Due to geographic affinity, the populations served by different hospitals have different genetic, social, and ethnic characteristics. As a result, the radiological findings from COVID-19 patient cases in different populations are different. This population bias incurs inconsistent or even conflicting conclusions regarding the correlation between radiological findings and COVID-19. As a result, medical professionals cannot make informed decisions on how to use radiological findings to guide diagnosis and treatment of COVID-19. We aim to address this issue. Our research goal is to develop natural language processing methods to collectively analyze the study results reported by many hospitals and medical institutes all over the world, reconcile these results, and make a holistic and unbiased conclusion regarding the correlation between radiological findings and COVID-19. Specifically, we take the CORD-19 dataset BIBREF2, which contains over 45,000 scholarly articles, including over 33,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. We develop sentence classification methods to identify all sentences narrating radiological findings from COVID-19. Then constituent parsing is utilized to identify all noun phrases from these sentences and these noun phrases contain abnormalities, lesions, diseases identified by radiology imaging such as X-ray and computed tomography (CT). We calculate the frequency of these noun phrases and select those with top frequencies for medical professionals to further investigate. Since these clinical entities are aggregated from a number of hospitals all over the world, the population bias is largely mitigated and the conclusions are more objective and universally informative. From the CORD-19 dataset, our method successfully discovers a set of clinical findings that are closely related with COVID-19. The major contributions of this paper include: We develop natural language processing methods to perform unbiased study of the correlation between radiological findings and COVID-19. We develop a bootstrapping approach to effectively train a sentence classifier with light-weight manual annotation effort. The sentence classifier is used to extract radiological findings from a vast amount of literature. We conduct experiments to verify the effectiveness of our method. From the CORD-19 dataset, our method successfully discovers a set of clinical findings that are closely related with COVID-19. The rest of the paper is organized as follows. In Section 2, we introduce the data. Section 3 presents the method. Section 4 gives experimental results. Section 5 concludes the paper. Dataset We used the COVID-19 Open Research Dataset (CORD-19) BIBREF2 for our study. In response to the COVID-19 pandemic, the White House and a coalition of research groups prepared the CORD-19 dataset. 
It contains over 45,000 scholarly articles, including over 33,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. These articles are contributed by hospitals and medical institutes all over the world. Since the outbreak of COVID-19 is after November 2019, we select articles published after November 2019 to study, which include a total of 2081 articles and about 360000 sentences. Many articles report the radiological findings related to COVID-19. Table TABREF4 shows some examples. Methods Our goal is to develop natural language processing (NLP) methods to analyze a large collection of COVID-19 literature and discover unbiased and universally informative correlation between radiological findings and COVID-19. To achieve this goal, we need to address two technical challenges. First, in the large collection of COVID-19 literature, only a small part of sentences are about radiological findings. It is time-consuming to manually identify these sentences. Simple methods such as keyword-based retrieval will falsely retrieve sentences that are not about radiological findings and miss sentences that are about radiological findings. How can we develop NLP methods to precisely and comprehensively extract sentences containing radiological findings with minimum human annotation? Second, given the extracted sentences, they are still highly unstructured, which are difficult for medical professionals to digest and index. How can we further process these sentences into structured information that is more concise and easy to use? To address the first challenge, we develop a sentence classifier to judge whether a sentence contains radiological findings. To minimize manual-labeling overhead, we propose easy ways of constructing positive and negative training examples, develop a bootstrapping approach to mine hard examples, and use hard examples to re-train the classifier for reducing false positives. To address the second challenge, we use constituent parsing to recognize noun phrases which contain critical medical information (e.g., lesions, abnormalities, diseases) and are easy to index and digest. We select noun phrases with top frequencies for medical professionals to further investigate. Methods ::: Extracting Sentences Containing Radiological Findings In this section, we develop a sentence-level classifier to determine whether a sentence contains radiological findings. To build such a classifier, we need to create positive and negative training sentences, without labor-intensive annotations. To obtain positive examples, we resort to the MedPix database, which contains radiology reports narrating radiological findings. MedPix is an open-access online database of medical images, teaching cases, and clinical topics. It contains more than 9000 topics, 59000 images from 12000 patient cases. We selected diagnostic reports for CT images and used sentences in the reports as positive samples. To obtain negative sentences, we randomly sample some sentences from the articles and quickly screen them to ensure that they are not about radiological findings. Since most sentences in the literature are not about radiological findings, a random sampling can almost ensure the select sentences are negative. A manual screening is conducted to further ensure this and the screening effort is not heavy. Given these positive and negative training sentences, we use them to train a sentence classifier which predicts whether a sentence is about the radiological findings of COVID-19. 
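Based on the construction described above, a minimal sketch of assembling the weakly labeled training set might look as follows. The file names are hypothetical placeholders: the positives would be radiology-report sentences exported from MedPix, and the negatives a manually screened random sample of CORD-19 sentences.

```python
# Sketch of building the weakly labeled training set: positive sentences from
# radiology reports, negative sentences randomly sampled (and screened) from
# the literature. Paths and sizes are illustrative assumptions.
import random

def load_sentences(path):
    """Read one sentence per line from a plain-text file."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def build_weak_training_set(pos_path="medpix_ct_report_sentences.txt",
                            neg_path="cord19_screened_random_sentences.txt",
                            n_neg=3000, seed=0):
    positives = load_sentences(pos_path)   # sentences with radiological findings (label 1)
    negatives = load_sentences(neg_path)   # screened random literature sentences (label 0)
    random.Random(seed).shuffle(negatives)
    negatives = negatives[:n_neg]
    sentences = positives + negatives
    labels = [1] * len(positives) + [0] * len(negatives)
    return sentences, labels
```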
We use the Bidirectional Encoder Representations from Transformers (BERT) BIBREF3 model for sentence classification. BERT is a neural language model that learns contextual representations of words and sentences. BERT pretrains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. To apply the pretrained BERT to a downstream task such as sentence classification, one can add an additional layer on top of the BERT architecture and train this newly-added layer using the labeled data in the target task. In our case, similar to BIBREF4, we pretrain the BERT model on a vast amount of biomedical literature to obtain semantic representations of words. A linear layer is added to the output of BERT for predicting whether this sentence is positive (containing radiological finding) or negative. The architecture and hyperparameters of the BERT model used in our method are the same as those in BIBREF4. Figure FIGREF7 shows the architecture of the classification model. When applying this trained sentence classifier to unseen sentences, we found that it yields a lot of false positives: many sentences irrelevant to radiological findings of COVID-19 are predicted as being relevant. To solve this problem, we iteratively perform hard example mining in a bootstrapping way and use these hard examples to retrain the classifier for reducing false positives. At iteration $t$, given the classifier $C_t$, we apply it to make predictions on unseen sentences. Each sentence is associated with a prediction score where a larger score indicates that this sentence is more likely to be positive. We rank these sentences in descending order of their prediction scores. Then for the top-K sentences with the largest prediction scores, we read them and label each of them as either being positive or negative. Then we add the labeled pairs to the training set and re-train the classifier and get $C_{t+1}$. This procedure is repeated again to identify new false positives and update the classifier using the new false positives. Methods ::: Extracting Noun Phrases The extracted sentences containing radiological findings of COVID-19 are highly unstructured, which are still difficult to digest for medical professionals. To solve this problem, from these unstructured sentences, we extract structured information that is both clinically important and easy to use. We notice that important information, such as lesions, abnormalities, diseases, is mostly contained in noun phrases. Therefore, we use NLP to extract noun phrases and perform further analysis therefrom. First, we perform part-of-speech (POS) tagging to label each word in a sentence as being a noun, verb, adjective, etc. Then on top of these words and their POS tags, we perform constituent parsing to obtain the syntax tree of the sentence. An example is shown in Figure FIGREF9. From bottom to top of the tree, fine-grained linguistic units such as words are composed into coarse-grained units such as phrases, including noun phrases. We obtain the noun phrases by reading the node labels in the tree. Given the extracted noun phrases, we remove stop words in them and perform lemmatization to eliminate non-essential linguistic variations. We count the frequency of each noun phrase and rank them in descending frequency. Then we select the noun phrases with top frequencies and present them to medical professionals for further investigation. 
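As a rough illustration of the noun-phrase extraction and ranking step just described, the sketch below uses spaCy's built-in noun chunker as a stand-in for the POS tagging plus constituency parsing in the text; the model name, stop-word handling, and cutoff are illustrative assumptions rather than the authors' setup.

```python
# Sketch of extracting noun phrases from the positive sentences, removing stop
# words, lemmatizing, and ranking phrases by frequency.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes this small English model is installed

def top_noun_phrases(sentences, k=50):
    counts = Counter()
    for doc in nlp.pipe(sentences):
        for chunk in doc.noun_chunks:
            # drop stop words / punctuation and lemmatize to remove non-essential variation
            lemmas = [t.lemma_.lower() for t in chunk if not t.is_stop and not t.is_punct]
            if lemmas:
                counts[" ".join(lemmas)] += 1
    return counts.most_common(k)

# Usage: top_noun_phrases(positive_sentences) returns frequent candidate findings
# (e.g., "consolidation", "ground glass opacity") for professionals to review.
```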
Experiment ::: Experimental Settings For building the initial sentence classifier (before hard-example mining), we collected 2350 positive samples from MedPix and 3000 negative samples from CORD-19. We used 90% sentences for training and the rest 10% sentences for validation. The weights in the sentence classifier are optimized using the Adam algorithm with a learning rate of $2\times 10^{-5}$ and a mini-batch size of 4. In bootstrapping for hard example mining, we added 400 false positives in each iteration for classifier-retraining and we performed 4 iterations of bootstrapping. Experiment ::: Results of Sentence Classification Under the final classifier, 998 sentences are predicted as being positive. Among them, 717 are true positives (according to manual check). The classifier achieves a precision of 71.8%. For the initial classifier (before adding mined hard examples using bootstrapping), among the top 100 sentences with the largest prediction scores, 53 are false positives. The initial classifier only achieves a precision of 47%. The precision achieved by classifiers trained after round 1-3 in bootstrapping is 55%, 57%, and 69% respectively, as shown in Table TABREF12. This demonstrates the effectiveness of hard example mining. Table TABREF13 shows some example sentences that are true positives, true negatives, and false positives, under the predictions made by the final classifier. Experiment ::: Results of Noun Phrase Extraction Table TABREF15 shows the extracted noun phrases with top frequencies that are relevant to radiology. Medical professionals can look at this table and select noun phrases indicating radiological findings for further investigation, such as consolidation, pleural effusion, ground glass opacity, thickening, etc. We mark such noun phrases with bold font in the table. To further investigate how a noun phrase is relevant to COVID-19, medical professionals can review the sentences mentioning this noun phrase. Table TABREF16,TABREF17,TABREF18 show some examples. For example, reading the five example sentences containing consolidation, one can judge that consolidation is a typical manifestation of COVID-19. This is in accordance with the conclusion in BIBREF5: “Consolidation becomes the dominant CT findings as the disease progresses." Similarly, the example sentences of pleural effusion, ground glass opacity, thickening, fibrosis, bronchiectasis, lymphadenopathy show that these abnormalities are closely related with COVID-19. This is consistent with the results reported in the literature: Pleural effusion: “In terms of pleural changes, CT showed that six (9.7%) had pleural effusion." BIBREF6 Ground glass opacity: “The predominant pattern of abnormalities after symptom onset was ground-glass opacity (35/78 [45%] to 49/79 [62%] in different periods." BIBREF7 Thickening: “Furthermore, ground-glass opacity was subcategorized into: (1) pure ground-glass opacity; (2) ground-glass opacity with smooth interlobular septal thickening." BIBREF7 Fibrosis: “In five patients, follow-up CT showed improvement with the appearance of fibrosis and resolution of GGOs.", BIBREF8 Bronchiectasis and lymphadenopathy: “The most common patterns seen on chest CT were ground-glass opacity, in addition to ill-defined margins, smooth or irregular interlobular septal thickening, air bronchogram , crazy-paving pattern, and thickening of the adjacent pleura. Less common CT findings were nodules, cystic changes, bronchiolectasis, pleural effusion , and lymphadenopathy." 
BIBREF9 Conclusions In this paper, we develop natural language processing methods to automatically extract unbiased radiological findings of COVID-19. We develop a BERT-based classifier to select sentences that contain COVID-related radiological findings and use bootstrapping to mine hard examples for reducing false positives. Constituent parsing is used to extract noun phrases from the positive sentences and those with top frequencies are selected for medical professionals to further investigate. From the CORD-19 dataset, our method successfully discovers radiological findings that are closely related with COVID-19.
[ "45,000 scholarly articles, including over 33,000 with full text" ]
2,150
qasper_e
en
null
b1e77f6928aff3baafd48d7674dc39bd72293dce6956dbde
To what baseline models is proposed model compared?
Introduction Medical text mining is an exciting area and is becoming attractive to natural language processing (NLP) researchers. Clinical notes are an example of text in the medical area that recent work has focused on BIBREF0, BIBREF1, BIBREF2. This work studies abbreviation disambiguation on clinical notes BIBREF3, BIBREF4, specifically those used commonly by physicians and nurses. Such clinical abbreviations can have a large number of meanings, depending on the specialty BIBREF5, BIBREF6. For example, the term MR can mean magnetic resonance, mitral regurgitation, mental retardation, medical record and the general English Mister (Mr.). Table TABREF1 illustrates such an example. Abbreviation disambiguation is an important task in medical text understanding task BIBREF7. Successful recognition of the abbreviations in the notes can contribute to downstream tasks such as classification, named entity recognition, and relation extraction BIBREF7. Recent work focuses on formulating the abbreviation disambiguation task as a classification problem, where the possible senses of a given abbreviation term are pre-defined with the help of domain experts BIBREF6, BIBREF5. Traditional features such as part-of-speech (POS) and Term Frequency-Inverse Document Frequency (TF-IDF) are widely investigated for clinical notes classification. Classifiers like support vector machines (SVMs) and random forests (RFs) are used to make predictions BIBREF1. Such methods depend heavily on feature engineering. Recently, deep features have been investigated in the medical domain. Word embeddings BIBREF8 and Convolutional Neural Networks (CNNs) BIBREF9, BIBREF10 provide very competitive performance on text classification for clinical notes and abbreviation disambiguation task BIBREF0, BIBREF11, BIBREF12, BIBREF6. Beyond vanilla embeddings, BIBREF13 utilized contextual features to do abbreviation expansion. Another challenge is the difficulty in obtaining training data: clinical notes have many restrictions due to privacy issues and require domain experts to develop high-quality annotations, thus leading to limited annotated training and testing data. Another difficulty is that in the real world (and in the existing public datasets), some abbreviation term-sense pairs (for example, AB as abortion) have a very high frequency of occurrence BIBREF5, while others are rarely found. This long tail issue creates the challenge of training under unbalanced sample distributions. We tackle these problems in the setting of few-shot learning BIBREF14, BIBREF15 where only a few or a low number of samples can be found in the training dataset to make use of limited resources. We propose a model that combines deep contextual features based on ELMo BIBREF16 and topic information to solve the abbreviation disambiguation task. Our contributions can be summarized as: 1) we re-examined and corrected an existing dataset for training and we collected a test set for evaluation with focus especially for rare senses; 2) we proposed a few-shot learning approach which combines topic information and contextualized word embeddings to solve clinical abbreviation disambiguation task. The implementation is available online; 3) as limited research are conducted on this particular task, we evaluated and compared a number of baseline methods including classical models and deep models comprehensively. 
Datasets Training Dataset UM Inventory BIBREF5 is a public dataset created by researchers from the University of Minnesota, containing about 37,500 training samples with 75 abbreviation terms. Existing work reports abbreviation disambiguation results on 50 abbreviation terms BIBREF6, BIBREF5, BIBREF17. However, after carefully reviewing this dataset, we found that it contains many samples with which medical professionals disagree: wrongly labelled samples and uncategorized samples. Due to these mistakes and flaws of this dataset, we removed the erroneous samples and eventually selected 30 abbreviation terms for a training dataset that can be made public. Among all the abbreviation terms, we have 11,466 samples and 93 term-sense pairs in total (on average 123.3 samples/term-sense pair and 382.2 samples/term). Some term-sense pairs are very popular, with a large number of training samples, but some are not (typically fewer than 5 samples); we call these rare-sense cases. More details can be found in Appendix SECREF7. Testing Dataset Our testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table) and Case Reports articles from PubMed, as this type of content is close to medical notes. We provide detailed information in Appendix SECREF8, including the steps to create the testing dataset. Eventually, we have a balanced testing dataset, where each term-sense pair has at least 11 and up to 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15). Baselines We conducted a comprehensive comparison with the baseline models, some of which had never been investigated for the abbreviation disambiguation task. For the traditional baselines we simply fed TF-IDF features to classic classifiers, as sketched below. Deep features are also considered: a Doc2vec model BIBREF19 was pre-trained using Gensim and these word embeddings were used to initialize the deep models and were fine-tuned. TF-IDF: We applied TF-IDF features as inputs to four classifiers: support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB) and random forest (RF); CNN: We followed the same architecture as BIBREF9; LSTM: We applied an LSTM model BIBREF20 to classify the sentences with pre-trained word embeddings; LSTM-soft: We then added a soft-attention BIBREF21 layer on top of the LSTM model where we computed soft alignment scores over each of the hidden states; LSTM-self: We applied a self-attention layer BIBREF22 to the LSTM model. We denote the content vector as $c_i$ for each sentence $i$, as the input to the last classification layer. Topic-attention Model ELMo ELMo is a new type of word representation introduced by BIBREF16 which considers contextual information captured by pre-trained BiLSTM language models. Similar works like BERT BIBREF23 and BioBERT BIBREF24 also consider context but with deeper structures. Compared with those models, ELMo has fewer parameters and is less prone to overfitting. We trained our ELMo model on the MIMIC-III corpus. Since some sentences also appear in the test set, one may raise the concern of performance inflation in testing. However, we pre-trained ELMo using the whole corpus of MIMIC in an unsupervised way, so the classification labels were not visible during training. To train the ELMo model, we adapted code from AllenNLP; we set the word embedding dimension to 64 and applied two BiLSTM layers.
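The classic TF-IDF baselines listed above can be sketched with scikit-learn as follows. The exact hyperparameters are not given in the text, so library defaults are assumed, and `train_texts`/`train_labels` are hypothetical names for the per-term training data.

```python
# Sketch of the TF-IDF baselines: the same TF-IDF features fed to four classic
# classifiers (SVM, logistic regression, Naive Bayes, random forest).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def tfidf_baselines():
    """One TF-IDF pipeline per classic classifier, with assumed default settings."""
    return {
        "SVM": make_pipeline(TfidfVectorizer(), LinearSVC()),
        "LR": make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
        "NB": make_pipeline(TfidfVectorizer(), MultinomialNB()),
        "RF": make_pipeline(TfidfVectorizer(), RandomForestClassifier()),
    }

# Usage: fit each pipeline on one abbreviation term's sentences and report macro F1
# on the balanced test set, e.g.
#   for name, clf in tfidf_baselines().items():
#       clf.fit(train_texts, train_labels)
```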
For all of our experiments that involved ELMo, we initialized the word embeddings from the last layer of ELMo. Topic-attention We propose a neural topic-attention model for text classification. Our assumption is that topic information plays an important role in disambiguating a given sentence. For example, the abbreviation of the medical term FISH has two potential senses: general English fish as seafood and the sense of fluorescent in situ hybridization BIBREF17. The former sense typically occurs under topics such as food and allergies, while the latter appears under topics related to examination and test reports. In our model, a topic-attention module is applied to add topic information into the final sentence representation. This module calculates the distribution of topic-attention weights over a list of topic vectors. As shown in Figure SECREF5, we took the content vector $c_i$ (from soft-attention BIBREF22) and a topic matrix $T_{topic}=[t_1,t_2,..,t_j]$ (where each $t_i$ is a column vector in the figure and we illustrate four topics) as the inputs to the topic-attention module, and then calculated a weighted summation of the topic vectors. The final sentence representation $r_i$ for the sentence $i$ was calculated as the following: where $W_{topic}$ and $b_{topic}$ are trainable parameters and $\beta_{it}$ represents the topic-attention weights. The final sentence representation is denoted as $r_i$, and $[\cdot,\cdot]$ means concatenation. Here $s_i$ is the representation of the sentence which includes the topic information. The final sentence representation $r_i$ is the concatenation of $c_i$ and $s_i$, so we have both a context-aware sentence representation and a topic-related representation. Then we added a fully-connected layer, followed by a Softmax, to do classification with a cross-entropy loss function. Topic Matrix To generate the topic matrix $T_{topic}$ as in Equation DISPLAY_FORM9, we propose a convolution-based method to generate topic vectors. We first pre-trained a topic model using the Latent Dirichlet Allocation (LDA) BIBREF25 model on the same MIMIC-III notes used for the ELMo model. We set the number of topics to 50 and ran for 30 iterations. Then we were able to get a list of top $k$ words (we set $k=100$) for each topic. To get the topic vector $t$ for a specific topic: where $e_j$ (column vector) is the pre-trained Doc2vec word embedding for the top word $j$ from the current topic, and $Conv(\cdot)$ indicates a convolutional layer; $maxpool(\cdot)$ is a max pooling layer. We finally reshaped the output $t$ into a one-dimensional vector. Eventually we collected all topic vectors as the topic matrix $T_{topic}$ in Figure SECREF5. Evaluation We did three groups of experiments including two baseline groups and our proposed model. The first group used the TF-IDF features described in Section SECREF3 with traditional models. The Naive Bayes classifier (NB) has the highest scores among all traditional methods. The second group of experiments used the neural models described in Section SECREF3, where the LSTM with self-attention model (LSTM-self) achieved competitive results; we chose this model as our base model. Notably, this is the first study that compares and evaluates LSTM-based models on the medical term abbreviation disambiguation task. (Table: Experimental Results; we report macro F1 in all the experiments.) (Figure: Topic-attention Model.) The last group contains the results of our proposed model with different settings.
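A possible PyTorch rendering of the topic-attention module described above is sketched below. The exact scoring equations are elided in the text, so a standard additive (tanh) attention over the topic vectors is assumed, and all dimensions are illustrative; this is not the authors' implementation.

```python
# Sketch of a topic-attention module: attend over pre-computed topic vectors
# with the content vector c_i, and return r_i = [c_i ; s_i].
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAttention(nn.Module):
    """Weighted sum of topic vectors attended by the content vector c_i."""
    def __init__(self, content_dim, topic_dim, attn_dim=64):
        super().__init__()
        self.w_topic = nn.Linear(content_dim + topic_dim, attn_dim)  # W_topic, b_topic
        self.score = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, c_i, topics):
        # c_i:    (batch, content_dim)  content vector from the soft-attention LSTM
        # topics: (n_topics, topic_dim) pre-computed topic vectors t_1 .. t_j
        batch, n = c_i.size(0), topics.size(0)
        c_rep = c_i.unsqueeze(1).expand(batch, n, -1)
        t_rep = topics.unsqueeze(0).expand(batch, n, -1)
        e = self.score(torch.tanh(self.w_topic(torch.cat([c_rep, t_rep], dim=-1))))
        beta = F.softmax(e.squeeze(-1), dim=-1)          # topic-attention weights
        s_i = (beta.unsqueeze(-1) * t_rep).sum(dim=1)    # weighted sum of topic vectors
        return torch.cat([c_i, s_i], dim=-1)             # r_i = [c_i ; s_i]
```

In this sketch, the returned vector r_i would then feed the fully-connected layer and Softmax mentioned above.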
We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score. To further compare our proposed topic-attention model and the base model, we report an average of area under the curve(AUC) score among all 30 terms: the base model has an average AUC of 0.7189, and our topic-attention model (ELMo+Topic) achieves an average AUC of 0.8196. We provide a case study in Appendix SECREF9. The results show that the model can benefit from ELMo, which considers contextual features, and the topic-attention module, which brings in topic information. We can conclude that under the few-shot learning setting, our proposed model can better capture the sentence features when only limited training samples are explored in a small-scale dataset. Conclusion In this paper, we propose a neural topic-attention model with few-shot learning for medical abbreviation disambiguation task. We also manually cleaned and collected training and testing data for this task, which we release to promote related research in NLP with clinical notes. In addition, we evaluated and compared a comprehensive set of baseline models, some of which had never been applied to the medical term abbreviation disambiguation task. Future work would be to adapt other models like BioBERT or BERT to our proposed topic-attention model. We will also extend the method into other clinical notes tasks such as drug name recognition and ICD-9 code auto-assigning BIBREF26. In addition, other LDA-based approach can be investigated. Dataset Details Figure FIGREF11 shows the histogram for the distribution of term-sense pair sample numbers. The X-axis gives the pair sample numbers and the Y-axis shows the counts. For example, the first bar shows that there are 43 term-sense pairs that have the sample number in the range of 0-50. We also show histogram of class numbers for all terms in Figure FIGREF11. The Y-axis gives the counts while the X-axis gives the number of classes. For instance, the first bar means there are 12 terms contain 2 classes. Testing Dataset Since the training dataset is unbalanced and relatively small, it is hard to split it further into training and testing. Existing work BIBREF6, BIBREF5 conducted fold validation on the datasets and we found there are extreme rare senses for which only one or two samples exist. Besides, we believe that it is better to evaluate on a balanced dataset to determine whether it performs equally well on all classes. 
While most of the works deal with unbalanced training and testing that which may lead to very high accuracy if there is a dominating class in both training and testing set, the model may have a poor performance in rare testing cases because only a few samples have been seen during training. To be fair to all the classes, a good performance on these rare cases is also required otherwise it may lead to a severe situation. In this work, we are very interested to see how the model works among all senses especially the rare ones. Also, we can prevent the model from trivially predicting the dominant class and achieving high accuracy. As a result, we decided to create a dataset with the same number of samples for each case in the test dataset. Our testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table ) and Case Reports articles from PubMed, as these contents are close to medical notes. To create the test set, we first followed the approach by BIBREF6 who applied an auto-generating method. Initially, we built a term-sense dictionary from the training dataset. Then we did matching for the sense words or phrases in the MIMIC-III notes dataset, and once there is a match, we replaced the words or phrases with the abbreviation term . We then asked two researchers with a medical background to check the matched samples manually with the following judgment: given this sentence and the abbreviation term and its sense, do you think the content is enough for you to guess the sense and whether this is the right sense? To estimate the agreement on the annotations, we selected a subset which contains 120 samples randomly and let the two annotators annotate individually. We got a Kappa score BIBREF27 of 0.96, which is considered as a near perfect agreement. We then distributed the work to the two annotators, and each of them labeled a half of the dataset, which means each sample was only labeled by a single annotator. For some rare term-sense pairs, we failed to find samples from MIMIC-III. The annotators then searched these senses via PubMed data source manually, aiming to find clinical notes-like sentences. They picked good sentences from these results as testing samples where the keywords exist and the content is informative. For those senses that are extremely rare, we let the annotators create sentences in the clinical note style as testing samples according to their experiences. Eventually, we have a balanced testing dataset, where each term-sense pair has around 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15), and a comparison with training dataset is shown in Figure FIGREF11. Due to the difficulty for collecting the testing dataset, we decided to only collect for a random selection of 30 terms. On average, it took few hours to generate testing samples for each abbreviation term per researcher. Case Study We now select two representative terms AC, IM and plot their receiver operating characteristic(ROC) curves. The term has a relatively large number of classes and the second one has extremely unbalanced training samples. We show the details in Table TABREF16. We have 8 classes in term AC. Figure FIGREF15(a) illustrates the results of our best performed model and Figure FIGREF15(b) shows the results of the base model. The accuracy and F1 score have an improvement from 0.3898 and 0.2830 to 0.4915 and 0.4059 respectively. 
Regarding the rare senses (for example, class 0, 1, 4 and 6), we can observe an increase in the ROC areas. Class 6 has an obvious improvement from 0.75 to 1.00. Such improvements in the rare senses make a huge difference in the reported average accuracy and F1 score, since we have a nearly equal number of samples for each class in the testing data. Similarly, we show the plots for IM term in Figure FIGREF15(c) and FIGREF15(d). IM has only two classes, but they are very unbalanced in training set, as shown in Table TABREF16. The accuracy and F1 scores improved from 0.6667 and 0.6250 to 0.8667 and 0.8667 respectively. We observe improvements in the ROC areas for both classes. This observation further shows that our model is more sensitive to all the class samples compared to the base model, even for the terms that have only a few samples in the training set. Again, by plotting the ROC curves and comparing AUC areas, we show that our model, which applies ELMo and topic-attention, has a better representation ability under the setting of few-shot learning.
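For reference, the per-class ROC/AUC comparison used in this case study can be computed in a one-vs-rest fashion. The sketch below uses scikit-learn; the argument names are illustrative, with class probabilities assumed to come from the trained classifier.

```python
# Sketch of one-vs-rest ROC AUC per sense class of a term.
import numpy as np
from sklearn.metrics import roc_curve, auc

def per_class_auc(y_true, class_scores):
    """y_true: integer sense labels; class_scores: (n_samples, n_classes) scores."""
    y_true = np.asarray(y_true)
    aucs = {}
    for k in range(class_scores.shape[1]):
        fpr, tpr, _ = roc_curve((y_true == k).astype(int), class_scores[:, k])
        aucs[k] = auc(fpr, tpr)
    return aucs
```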
[ "support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB), random forest (RF), CNN, LSTM , LSTM-soft, LSTM-self", "support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB), random forest (RF), CNN, LSTM , LSTM-soft, LSTM-self" ]
2,895
qasper_e
en
null
c0f67594ed1d5ba890abd05585f849a4bdde9f30e9054ea8
How many electrodes were used on the subject in EEG sessions?
Introduction Decoding intended speech or motor activity from brain signals is one of the major research areas in Brain Computer Interface (BCI) systems BIBREF0 , BIBREF1 . In particular, speech-related BCI technologies attempt to provide effective vocal communication strategies for controlling external devices through speech commands interpreted from brain signals BIBREF2 . Not only do they provide neuro-prosthetic help for people with speaking disabilities and neuro-muscular disorders like locked-in-syndrome, nasopharyngeal cancer, and amytotropic lateral sclerosis (ALS), but also equip people with a better medium to communicate and express thoughts, thereby improving the quality of rehabilitation and clinical neurology BIBREF3 , BIBREF4 . Such devices also have applications in entertainment, preventive treatments, personal communication, games, etc. Furthermore, BCI technologies can be utilized in silent communication, as in noisy environments, or situations where any sort of audio-visual communication is infeasible. Among the various brain activity-monitoring modalities in BCI, electroencephalography (EEG) BIBREF5 , BIBREF6 has demonstrated promising potential to differentiate between various brain activities through measurement of related electric fields. EEG is non-invasive, portable, low cost, and provides satisfactory temporal resolution. This makes EEG suitable to realize BCI systems. EEG data, however, is challenging: these data are high dimensional, have poor SNR, and suffer from low spatial resolution and a multitude of artifacts. For these reasons, it is not particularly obvious how to decode the desired information from raw EEG signals. Although the area of BCI based speech intent recognition has received increasing attention among the research community in the past few years, most research has focused on classification of individual speech categories in terms of discrete vowels, phonemes and words BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . This includes categorization of imagined EEG signal into binary vowel categories like /a/, /u/ and rest BIBREF7 , BIBREF8 , BIBREF9 ; binary syllable classes like /ba/ and /ku/ BIBREF1 , BIBREF10 , BIBREF11 , BIBREF12 ; a handful of control words like 'up', 'down', 'left', 'right' and 'select' BIBREF15 or others like 'water', 'help', 'thanks', 'food', 'stop' BIBREF13 , Chinese characters BIBREF14 , etc. Such works mostly involve traditional signal processing or manual feature handcrafting along with linear classifiers (e.g., SVMs). In our recent work BIBREF16 , we introduced deep learning models for classification of vowels and words that achieved 23.45% improvement of accuracy over the baseline. Production of articulatory speech is an extremely complicated process, thereby rendering understanding of the discriminative EEG manifold corresponding to imagined speech highly challenging. As a result, most of the existing approaches failed to achieve satisfactory accuracy on decoding speech tokens from the speech imagery EEG data. Perhaps, for these reasons, very little work has been devoted to relating the brain signals to the underlying articulation. The few exceptions include BIBREF17 , BIBREF18 . In BIBREF17 , Zhao et al. used manually handcrafted features from EEG data, combined with speech audio and facial features to achieve classification of the phonological categories varying based on the articulatory steps. 
However, the imagined speech classification accuracy based on EEG data alone, as reported in BIBREF17 , BIBREF18 , are not satisfactory in terms of accuracy and reliability. We now turn to describing our proposed models. Proposed Framework Cognitive learning process underlying articulatory speech production involves incorporation of intermediate feedback loops and utilization of past information stored in the form of memory as well as hierarchical combination of several feature extractors. To this end, we develop our mixed neural network architecture composed of three supervised and a single unsupervised learning step, discussed in the next subsections and shown in Fig. FIGREF1 . We formulate the problem of categorizing EEG data based on speech imagery as a non-linear mapping INLINEFORM0 of a multivariate time-series input sequence INLINEFORM1 to fixed output INLINEFORM2 , i.e, mathematically INLINEFORM3 : INLINEFORM4 , where c and t denote the EEG channels and time instants respectively. Preprocessing step We follow similar pre-processing steps on raw EEG data as reported in BIBREF17 (ocular artifact removal using blind source separation, bandpass filtering and subtracting mean value from each channel) except that we do not perform Laplacian filtering step since such high-pass filtering may decrease information content from the signals in the selected bandwidth. Joint variability of electrodes Multichannel EEG data is high dimensional multivariate time series data whose dimensionality depends on the number of electrodes. It is a major hurdle to optimally encode information from these EEG data into lower dimensional space. In fact, our investigation based on a development set (as we explain later) showed that well-known deep neural networks (e.g., fully connected networks such as convolutional neural networks, recurrent neural networks and autoencoders) fail to individually learn such complex feature representations from single-trial EEG data. Besides, we found that instead of using the raw multi-channel high-dimensional EEG requiring large training times and resource requirements, it is advantageous to first reduce its dimensionality by capturing the information transfer among the electrodes. Instead of the conventional approach of selecting a handful of channels as BIBREF17 , BIBREF18 , we address this by computing the channel cross-covariance, resulting in positive, semi-definite matrices encoding the connectivity of the electrodes. We define channel cross-covariance (CCV) between any two electrodes INLINEFORM0 and INLINEFORM1 as: INLINEFORM2 . Next, we reject the channels which have significantly lower cross-covariance than auto-covariance values (where auto-covariance implies CCV on same electrode). We found this measure to be essential as the higher cognitive processes underlying speech planning and synthesis involve frequent information exchange between different parts of the brain. Hence, such matrices often contain more discriminative features and hidden information than mere raw signals. This is essentially different than our previous work BIBREF16 where we extract per-channel 1-D covariance information and feed it to the networks. We present our sample 2-D EEG cross-covariance matrices (of two individuals) in Fig. FIGREF2 . CNN & LSTM In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. 
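A minimal sketch of the channel cross-covariance step described above is given below. The exact CCV formula is not reproduced in the text (the equation is elided), so the standard sample covariance between channel time series is assumed, and the threshold for rejecting channels whose cross-covariance is much lower than their auto-covariance is illustrative.

```python
# Sketch of computing the channel cross-covariance (CCV) matrix for one trial
# and rejecting weakly coupled channels.
import numpy as np

def channel_cross_covariance(eeg, reject_ratio=0.1):
    """eeg: array of shape (channels, time) for a single trial."""
    ccv = np.cov(eeg)                                # (channels, channels) covariance
    auto = np.diag(ccv)                              # auto-covariance per channel
    off_diag = ccv - np.diag(auto)
    # mean absolute cross-covariance of each channel with all other channels
    cross_strength = np.abs(off_diag).sum(axis=1) / (ccv.shape[0] - 1)
    keep = cross_strength >= reject_ratio * np.abs(auto)   # illustrative rejection rule
    return ccv[np.ix_(keep, keep)], keep
```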
The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN. Deep autoencoder for spatio-temporal information As we found the individually-trained parallel networks (CNN and LSTM) to be useful (see Table TABREF12 ), we suspected the combination of these two networks could provide a more powerful discriminative spatial and temporal representation of the data than each independent network. As such, we concatenate the last fully-connected layer from the CNN with its counterpart in the LSTM to compose a single feature vector based on these two penultimate layers. Ultimately, this forms a joint spatio-temporal encoding of the cross-covariance matrix. In order to further reduce the dimensionality of the spatio-temporal encodings and cancel background noise effects BIBREF21 , we train an unsupervised deep autoenoder (DAE) on the fused heterogeneous features produced by the combined CNN and LSTM information. The DAE forms our second level of hierarchy, with 3 encoding and 3 decoding layers, and mean squared error (MSE) as the cost function. Classification with Extreme Gradient Boost At the third level of hierarchy, the discrete latent vector representation of the deep autoencoder is fed into an Extreme Gradient Boost based classification layer BIBREF22 , BIBREF23 motivated by BIBREF21 . It is a regularized gradient boosted decision tree that performs well on structured problems. Since our EEG-phonological pairwise classification has an internal structure involving individual phonemes and words, it seems to be a reasonable choice of classifier. The classifier receives its input from the latent vectors of the deep autoencoder and is trained in a supervised manner to output the final predicted classes corresponding to the speech imagery. Dataset We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels. Training and hyperparameter selection We performed two sets of experiments with the single-trial EEG data. In PHASE-ONE, our goals was to identify the best architectures and hyperparameters for our networks with a reasonable number of runs. 
For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject out cross-validation experiment using the best settings we learn from PHASE-ONE. The architectural parameters and hyperparameters listed in Table TABREF6 were selected through an exhaustive grid-search based on the validation set of PHASE-ONE. We conducted a series of empirical studies starting from single hidden-layered networks for each of the blocks and, based on the validation accuracy, we increased the depth of each given network and selected the optimal parametric set from all possible combinations of parameters. For the gradient boosting classification, we fixed the maximum depth at 10, number of estimators at 5000, learning rate at 0.1, regularization coefficient at 0.3, subsample ratio at 0.8, and column-sample/iteration at 0.4. We did not find any notable change of accuracy while varying other hyperparameters while training gradient boost classifier. Performance analysis and discussion To demonstrate the significance of the hierarchical CNN-LSTM-DAE method, we conducted separate experiments with the individual networks in PHASE-ONE of experiments and summarized the results in Table TABREF12 From the average accuracy scores, we observe that the mixed network performs much better than individual blocks which is in agreement with the findings in BIBREF21 . A detailed analysis on repeated runs further shows that in most of the cases, LSTM alone does not perform better than chance. CNN, on the other hand, is heavily biased towards the class label which sees more training data corresponding to it. Though the situation improves with combined CNN-LSTM, our analysis clearly shows the necessity of a better encoding scheme to utilize the combined features rather than mere concatenation of the penultimate features of both networks. The very fact that our combined network improves the classification accuracy by a mean margin of 14.45% than the CNN-LSTM network indeed reveals that the autoencoder contributes towards filtering out the unrelated and noisy features from the concatenated penultimate feature set. It also proves that the combined supervised and unsupervised neural networks, trained hierarchically, can learn the discriminative manifold better than the individual networks and it is crucial for improving the classification accuracy. In addition to accuracy, we also provide the kappa coefficients BIBREF24 of our method in Fig. FIGREF14 . Here, a higher mean kappa value corresponding to a task implies that the network is able to find better discriminative information from the EEG data beyond random decisions. The maximum above-chance accuracy (75.92%) is recorded for presence/absence of the vowel task and the minimum (49.14%) is recorded for the INLINEFORM0 . To further investigate the feature representation achieved by our model, we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 . We particularly select these two tasks as our model exhibits respectively minimum and maximum performance for these two. The tSNE visualization reveals that the second set of features are more easily separable than the first one, thereby giving a rationale for our performance. 
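The second and third levels of the hierarchy described earlier (a deep autoencoder over the fused CNN-LSTM penultimate features, followed by the gradient-boosted classifier) can be sketched as below, using the fixed gradient-boosting settings reported in this section. The autoencoder layer sizes are illustrative, the CNN/LSTM penultimate features are assumed to be already extracted, and mapping the "regularization coefficient" to reg_lambda and "column-sample/iteration" to colsample_bytree is an assumption.

```python
# Sketch of fusing CNN/LSTM penultimate features, compressing them with a deep
# autoencoder trained on MSE, and classifying the latent codes with XGBoost.
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class DeepAutoencoder(nn.Module):
    """Three encoding and three decoding layers; sizes are illustrative."""
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def fuse_and_classify(cnn_feats, lstm_feats, labels, epochs=50):
    """Concatenate penultimate features, train the DAE, then fit XGBoost on latents."""
    fused = torch.tensor(np.hstack([cnn_feats, lstm_feats]), dtype=torch.float32)
    dae = DeepAutoencoder(fused.shape[1])
    opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
    mse = nn.MSELoss()
    for _ in range(epochs):                       # unsupervised reconstruction training
        opt.zero_grad()
        recon, _ = dae(fused)
        mse(recon, fused).backward()
        opt.step()
    with torch.no_grad():
        _, latent = dae(fused)
    clf = XGBClassifier(                          # fixed settings reported above
        max_depth=10, n_estimators=5000, learning_rate=0.1,
        reg_lambda=0.3,                           # assumption for "regularization coefficient"
        subsample=0.8,
        colsample_bytree=0.4,                     # assumption for "column-sample/iteration"
    )
    clf.fit(latent.numpy(), labels)
    return dae, clf
```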
Next, we provide performance comparison of the proposed approach with the baseline methods for PHASE-TWO of our study (cross-validation experiment) in Table TABREF15 . Since the model encounters the unseen data of a new subject for testing, and given the high inter-subject variability of the EEG data, a reduction in the accuracy was expected. However, our network still managed to achieve an improvement of 18.91, 9.95, 67.15, 2.83 and 13.70 % over BIBREF17 . Besides, our best model shows more reliability compared to previous works: The standard deviation of our model's classification accuracy across all the tasks is reduced from 22.59% BIBREF17 and 17.52% BIBREF18 to a mere 5.41%. Conclusion and future direction In an attempt to move a step towards understanding the speech information encoded in brain signals, we developed a novel mixed deep neural network scheme for a number of binary classification tasks from speech imagery EEG data. Unlike previous approaches which mostly deal with subject-dependent classification of EEG into discrete vowel or word labels, this work investigates a subject-invariant mapping of EEG data with different phonological categories, varying widely in terms of underlying articulator motions (eg: involvement or non-involvement of lips and velum, variation of tongue movements etc). Our model takes an advantage of feature extraction capability of CNN, LSTM as well as the deep learning benefit of deep autoencoders. We took BIBREF17 , BIBREF18 as the baseline works investigating the same problem and compared our performance with theirs. Our proposed method highly outperforms the existing methods across all the five binary classification tasks by a large average margin of 22.51%. Acknowledgments This work was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada and Canadian Institutes for Health Research (CIHR).
[ "1913 signals", "Unanswerable" ]
2,361
qasper_e
en
null
3e0ebe8cd7c6ad2ed6b011e47c1a06464ae4eed1b47c9d37
What are the different modules in Macaw?
Introduction The rapid growth in speech and small screen interfaces, particularly on mobile devices, has significantly influenced the way users interact with intelligent systems to satisfy their information needs. The growing interest in personal digital assistants, such as Amazon Alexa, Apple Siri, Google Assistant, and Microsoft Cortana, demonstrates the willingness of users to employ conversational interactions BIBREF0. As a result, conversational information seeking (CIS) has been recognized as a major emerging research area in the Third Strategic Workshop on Information Retrieval (SWIRL 2018) BIBREF1. Research progress in CIS relies on the availability of resources to the community. There have been recent efforts on providing data for various CIS tasks, such as the TREC 2019 Conversational Assistance Track (CAsT), MISC BIBREF2, Qulac BIBREF3, CoQA BIBREF4, QuAC BIBREF5, SCS BIBREF6, and CCPE-M BIBREF7. In addition, BIBREF8 have implemented a demonstration for conversational movie recommendation based on Google's DialogFlow. Despite all of these resources, the community still feels the lack of a suitable platform for developing CIS systems. We believe that providing such platform will speed up the progress in conversational information seeking research. Therefore, we developed a general framework for supporting CIS research. The framework is called Macaw. This paper describes the high-level architecture of Macaw, the supported functionality, and our future vision. Researchers working on various CIS tasks should be able to take advantage of Macaw in their projects. Macaw is designed based on a modular architecture to support different information seeking tasks, including conversational search, conversational question answering, conversational recommendation, and conversational natural language interface to structured and semi-structured data. Each interaction in Macaw (from both user and system) is a Message object, thus a conversation is a list of Messages. Macaw consists of multiple actions, each action is a module that can satisfy the information needs of users for some requests. For example, search and question answering can be two actions in Macaw. Even multiple search algorithms can be also seen as multiple actions. Each action can produce multiple outputs (e.g., multiple retrieved documents). For every user interaction, Macaw runs all actions in parallel. The actions' outputs produced within a predefined time interval (i.e., an interaction timeout constant) are then post-processed. Macaw can choose one or combine multiple of these outputs and prepare an output Message object as the user's response. The modular design of Macaw makes it relatively easy to configure a different user interface or add a new one. The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps. In more detail, Macaw's interface can be a Telegram bot, which supports a wide range of devices and operating systems (see FIGREF4). This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc. A number of APIs for automatic speech recognition and generation have been employed to support speech interactions. Note that the Macaw's architecture and implementation allows mixed-initiative interactions. The research community can benefit from Macaw for the following purposes: [leftmargin=*] Developing algorithms, tools, and techniques for CIS. Studying user interactions with CIS systems. 
Performing CIS studies based on an intermediary person and Wizard of Oz setups. Preparing a quick demonstration for a developed CIS model. Macaw Architecture Macaw has a modular design, with the goal of making it easy to configure and add new modules such as a different user interface or a different retrieval module. The overall setup also follows a Model-View-Controller (MVC) like architecture. The design decisions have been made to smooth Macaw's adoption and extension. Macaw is implemented in Python, thus machine learning models implemented using PyTorch, Scikit-learn, or TensorFlow can be easily integrated into Macaw. The high-level overview of Macaw is depicted in FIGREF8. The user interacts with the interface and the interface produces a Message object from the current user interaction. The interaction can be in multi-modal form, such as text, speech, image, and click. Macaw stores all interactions in an “Interaction Database”. For every interaction, Macaw looks up the most recent user-system interactions (including the system's responses) to create a list of Messages, called the conversation list. It is then dispatched to multiple information seeking (and related) actions. The actions run in parallel, and each should respond within a pre-defined time interval. The output selection component selects from (or potentially combines) the outputs generated by different actions and creates a Message object as the system's response. This message is logged into the interaction database and is sent to the interface to be presented to the user. Again, the response message can be multi-modal and include text, speech, link, list of options, etc. Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such a setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions on multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions are logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research. Retrieval and Question Answering in Macaw The overview of retrieval and question answering actions in Macaw is shown in FIGREF17. These actions consist of the following components: [leftmargin=*] Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the user's last request to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component. Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing. Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking. 
Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step run on the retrieved result list. In the case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering. These components are implemented in a generic form, so researchers can easily replace them with their own favorite algorithms. User Interfaces We have implemented the following interfaces for Macaw: [leftmargin=*] File IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. This is not an interactive interface. Standard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system. Telegram: This interactive interface is designed for interaction with real users (see FIGREF4). Telegram is a popular instant messaging service whose client-side code is open-source. We have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can also be used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see Naseri and Zamani's study on news popularity in Telegram BIBREF12. Similar to the other modules, one can easily extend Macaw with other appropriate user interfaces. Limitations and Future Work The current implementation of Macaw lacks the following actions. We intend to incrementally improve Macaw by supporting more actions and even more advanced techniques for the developed actions. [leftmargin=*] Clarification and Preference Elicitation: Asking clarifying questions has recently been recognized as a necessary component in a conversational system BIBREF3, BIBREF7. The authors are not aware of a published solution for generating clarifying questions using public resources. Therefore, Macaw does not currently support clarification. Explanation: Despite its importance, result list explanation is also a relatively underexplored topic. We intend to extend Macaw with result list explanation as soon as we find a stable and mature solution. Recommendation: In our first release, we focus on conversational search and question answering tasks. We intend to provide support for conversational recommendation, e.g., BIBREF13, BIBREF14, BIBREF15, and joint search and recommendation, e.g., BIBREF16, BIBREF17, in the future. Natural Language Interface: Macaw can potentially support access to structured data, such as knowledge graphs. We would like to ease building conversational natural language interfaces to structured and semi-structured data in our future releases. Contribution Macaw is distributed under the MIT License. We welcome contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. 
For details, visit https://cla.opensource.microsoft.com. This project has adopted the Microsoft Open Source Code of Conduct. When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. Conclusions This paper described Macaw, an open-source platform for conversational information seeking research. Macaw supports multi-turn, multi-modal, and mixed-initiative interactions. It was designed based on a modular architecture that allows further improvements and extensions. Researchers can benefit from Macaw for developing algorithms and techniques for conversational information seeking research, for user studies with different interfaces, for data collection from real users, and for preparing a demonstration of a CIS model. Acknowledgements The authors wish to thank Ahmed Hassan Awadallah, Krisztian Balog, and Arjen P. de Vries for their invaluable feedback.
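To make the message-and-action flow described above more concrete, the following minimal Python sketch re-implements the idea in miniature: a conversation is a list of Message objects, all registered actions run in parallel under a timeout, and one of the returned outputs is selected as the response. The class, function, and action names here are purely illustrative and are not Macaw's actual API.

```python
import concurrent.futures
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Message:
    """One user or system turn; a conversation is simply a list of these."""
    sender: str                                  # "user" or "system"
    text: str
    attachments: List[str] = field(default_factory=list)

def dispatch(conversation: List[Message],
             actions: Dict[str, Callable[[List[Message]], str]],
             timeout: float = 2.0) -> Message:
    """Run every action on the conversation in parallel and keep only the
    outputs that finish within the timeout, mimicking the described flow."""
    outputs = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(fn, conversation): name for name, fn in actions.items()}
        done, _ = concurrent.futures.wait(futures, timeout=timeout)
        for fut in done:
            outputs[futures[fut]] = fut.result()
    # Trivial output selection: prefer the QA action over retrieval if both responded.
    for name in ("qa", "retrieval"):
        if name in outputs:
            return Message(sender="system", text=outputs[name])
    return Message(sender="system", text="Sorry, no action responded in time.")

# Toy actions standing in for real retrieval / question answering modules.
actions = {
    "retrieval": lambda conv: f"Top document for: {conv[-1].text}",
    "qa": lambda conv: f"Short answer to: {conv[-1].text}",
}
conversation = [Message(sender="user", text="who wrote Macbeth?")]
print(dispatch(conversation, actions).text)
```

In a real system the output-selection step could of course combine several action outputs into one multi-modal response instead of picking a single one.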
[ "Co-Reference Resolution, Query Generation, Retrieval Model, Result Generation", "Co-Reference Resolution, Query Generation, Retrieval Model, Result Generation" ]
1,701
qasper_e
en
null
76ade8dc03f520d725ee2b1decb2904f80becbc3dd6fb9b5
Can their indexing-based method be applied to create other QA datasets in other domains, and not just Wikipedia?
Introduction Question answering (QA) has been a blooming research field for the last decade. Selection-based QA implies a family of tasks that find answer contexts from large data given questions in natural language. Three tasks have been proposed for selection-based QA. Given a document, answer extraction BIBREF0 , BIBREF1 finds answer phrases whereas answer selection BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 and answer triggering BIBREF6 , BIBREF7 find answer sentences instead, although the presence of the answer context is not assumed within the provided document for answer triggering but it is for the other two tasks. Recently, various QA tasks that are not selection-based have been proposed BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ; however, selection-based QA remains still important because of its practical value to real applications (e.g., IBM Watson, MIT Start). Several datasets have been released for selection-based QA. wang:07a created the QASent dataset consisting of 277 questions, which has been widely used for benchmarking the answer selection task. feng:15a presented InsuranceQA comprising 16K+ questions on insurance contexts. yang:15a introduced WikiQA for answer selection and triggering. jurczyk:16 created SelQA for large real-scale answer triggering. rajpurkar2016squad presented SQuAD for answer extraction and selection as well as for reading comprehension. Finally, morales-EtAl:2016:EMNLP2016 provided InfoboxQA for answer selection. These corpora make it possible to evaluate the robustness of statistical question answering learning. Although all of these corpora target on selection-based QA, they are designed for different purposes such that it is important to understand the nature of these corpora so a better use of them can be made. In this paper, we make both intrinsic and extrinsic analyses of four latest corpora based on Wikipedia, WikiQA, SelQA, SQuAD, and InfoboxQA. We first give a thorough intrinsic analysis regarding contextual similarities, question types, and answer categories (Section SECREF2 ). We then map questions in all corpora to the current version of English Wikipedia and benchmark another selection-based QA task, answer retrieval (Section SECREF3 ). Finally, we present an extrinsic analysis through a set of experiments cross-testing these corpora using a convolutional neural network architecture (Section SECREF4 ). Intrinsic Analysis Four publicly available corpora are selected for our analysis. These corpora are based on Wikipedia, so more comparable than the others, and have already been used for the evaluation of several QA systems. WikiQA BIBREF6 comprises questions selected from the Bing search queries, where user click data give the questions and their corresponding Wikipedia articles. The abstracts of these articles are then extracted to create answer candidates. The assumption is made that if many queries lead to the same article, it must contain the answer context; however, this assumption fails for some occasions, which makes this dataset more challenging. Since the existence of answer contexts is not guaranteed in this task, it is called answer triggering instead of answer selection. SelQA BIBREF7 is a product of five annotation tasks through crowdsourcing. It consists of about 8K questions where a half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers. 
Each question is associated with a section in Wikipedia where the answer context is guaranteed, and also with five sections selected from the entire Wikipedia where the selection is made by the Lucene search engine. This second dataset does not assume the existence of the answer context, so it can be used for the evaluation of answer triggering. SQuAD BIBREF12 presents 107K+ crowdsourced questions on 536 Wikipedia articles, where the answer contexts are guaranteed to exist within the provided paragraph. It contains annotation of answer phrases as well as pointers to the sentences containing the answer phrases; thus, it can be used for both answer extraction and selection. This corpus also provides human accuracy on those questions, setting up a reasonable upper bound for machines. To avoid overfitting, the evaluation set is not publicly available although system outputs can be evaluated by their provided script. InfoboxQA BIBREF13 gives 15K+ questions based on the infoboxes from 150 articles in Wikipedia. Each question is crowdsourced and associated with an infobox, where each line of the infobox is considered an answer candidate. This corpus emphasizes the importance of infoboxes, which arguably summarize the most commonly asked information about those articles. Although the nature of this corpus is different from the others, it can also be used to evaluate answer selection. Analysis All corpora provide datasets/splits for answer selection, whereas only (WikiQA, SQuAD) and (WikiQA, SelQA) provide datasets for answer extraction and answer triggering, respectively. SQuAD is much larger in size although questions in this corpus are often paraphrased multiple times. In contrast, SQuAD's average number of candidates per question ( INLINEFORM0 ) is the smallest because SQuAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts. Although InfoboxQA is larger than WikiQA or SelQA, the number of token types ( INLINEFORM1 ) in InfoboxQA is smaller than in those two, due to the repetitive nature of infoboxes. All corpora show similar average answer candidate lengths ( INLINEFORM0 ), except for InfoboxQA where each line in the infobox is considered a candidate. SelQA and SQuAD show similar average question lengths ( INLINEFORM1 ) because of the similarity between their annotation schemes. It is not surprising that WikiQA's average question length is the smallest, considering that its questions are taken from search queries. InfoboxQA's average question length is relatively small, due to the restricted information that can be asked from the infoboxes. InfoboxQA and WikiQA show the least question-answer word overlap over questions and answers ( INLINEFORM2 and INLINEFORM3 in Table TABREF2 ), respectively. In terms of the F1-score for overlapping words ( INLINEFORM4 ), SQuAD shows the smallest proportion of overlap between question-answer pairs although WikiQA comes very close. Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons. Although these corpora have been independently developed, a general trend is found, where the what question type dominates, followed by how and who, followed by when and where, and so on. Fig. FIGREF6 shows the distributions of answer categories automatically classified by our Convolutional Neural Network model trained on the data distributed by li:02a. 
Interestingly, each corpus focuses on different categories, Numeric for WikiQA and SelQA, Entity for SQuAD, and Person for InfoboxQA, which gives enough diversity for statistical learning to build robust models. Answer Retrieval This section describes another selection-based QA task, called answer retrieval, that finds the answer context from a larger dataset, the entire Wikipedia. SQuAD provides no mapping of the answer contexts to Wikipedia, whereas WikiQA and SelQA provide mappings; however, their data do not come from the same version of Wikipedia. We propose an automatic way of mapping the answer contexts from all corpora to the same version of Wikipedia so they can be coherently used for answer retrieval. Each paragraph in Wikipedia is first indexed by Lucene using {1,2,3}-grams, where the paragraphs are separated by WikiExtractor and segmented by NLP4J (28.7M+ paragraphs are indexed). Each answer sentence from the corpora in Table TABREF3 is then queried to Lucene, and the top-5 ranked paragraphs are retrieved. The cosine similarity between each sentence in these paragraphs and the answer sentence is measured for INLINEFORM0 -grams, say INLINEFORM1 . A weight is assigned to each INLINEFORM2 -gram score, say INLINEFORM3 , and the weighted sum is measured: INLINEFORM4 . The fixed weights of INLINEFORM5 are used for our experiments, which can be improved. If there exists a sentence whose INLINEFORM0 , the paragraph containing that sentence is considered the silver-standard answer passage. Table TABREF3 shows how robust these silver-standard passages are based on human judgement ( INLINEFORM1 ) and how many passages are collected ( INLINEFORM2 ) for INLINEFORM3 , where the human judgement is performed on 50 random samples for each case. For answer retrieval, a dataset is created by INLINEFORM4 , which gives INLINEFORM5 accuracy and INLINEFORM6 coverage, respectively. Finally, each question is queried to Lucene and the top- INLINEFORM7 paragraphs are retrieved from the entire Wikipedia. If the answer sentence exists within those retrieved paragraphs according to the silver-standard, it is considered correct. Finding a paragraph that includes the answer context out of the entire Wikipedia (28.7M+ indexed paragraphs) is an extremely difficult task. The last row of Table TABREF3 shows results from answer retrieval. Given INLINEFORM0 , SelQA and SQuAD show about 34% and 35% accuracy, which are reasonable. However, WikiQA shows a significantly lower accuracy of 12.47%; this is because the questions in WikiQA are about half as long as the questions in the other corpora, so that not enough lexical items can be extracted from these questions for the Lucene search. Answer Selection Answer selection is evaluated by two metrics, mean average precision (MAP) and mean reciprocal rank (MRR). The bigram CNN introduced by yu:14a is used to generate all the results in Table TABREF11 , where models are trained on either single or combined datasets. Clearly, the questions in WikiQA are the most challenging, and adding more training data from the other corpora hurts accuracy due to the uniqueness of query-based questions in this corpus. The best model is achieved by training on W+S+Q for SelQA; adding InfoboxQA hurts accuracy for SelQA although it gives a marginal gain for SQuAD. Just like WikiQA, InfoboxQA performs the best when it is trained on only itself. 
From our analysis, we suggest using models trained on WikiQA and InfoboxQA for short query-like questions, and models trained on SelQA and SQuAD for long natural questions. Answer Triggering The results of INLINEFORM0 from the answer retrieval task in Section SECREF13 are used to create the datasets for answer triggering, where about 65% of the questions are not expected to find their answer contexts within the provided paragraphs for SelQA and SQuAD and 87.5% are not expected for WikiQA. Answer triggering is evaluated by the F1 scores as presented in Table TABREF11 , where three corpora are cross-validated. The results on WikiQA are low, as expected from the poor accuracy on the answer retrieval task. Training on SelQA gives the best models for both WikiQA and SelQA. Training on SQuAD gives the best model for SQuAD although the model trained on SelQA is comparable. Since the answer triggering datasets are about 5 times larger than the answer selection datasets, it is computationally too expensive to combine all data for training. We plan to run this experiment on a more powerful machine in the near future. Related work Lately, several deep learning approaches have been proposed for question answering. yu:14a presented a CNN model that recognizes the semantic similarity between two sentences. wang-nyberg:2015:ACL-IJCNLP presented a stacked bidirectional LSTM approach that reads words in sequence and then outputs their similarity scores. feng:15a applied a general deep learning framework to non-factoid question answering. santos:16a introduced an attentive pooling mechanism that led to further improvements in selection-based QA. Conclusion We present a comprehensive comparison study of the existing corpora for selection-based question answering. Our intrinsic analysis provides a better understanding of the uniqueness or similarity between these corpora. Our extrinsic analysis shows the strengths and weaknesses of combining these corpora for statistical learning. Additionally, we create a silver-standard dataset for answer retrieval and triggering, which will be publicly available. In the future, we will explore different ways of improving the quality of our silver-standard datasets by fine-tuning the hyper-parameters.
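To make the silver-standard mapping step described above more tangible, the sketch below shows one plausible way to compute a weighted n-gram cosine similarity between an answer sentence and the sentences of a retrieved paragraph. The weights and threshold are placeholders and the Lucene retrieval step is omitted, so this should be read as an illustrative reconstruction of the procedure rather than the authors' code.

```python
from collections import Counter
from math import sqrt

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cosine(c1, c2):
    """Cosine similarity between two n-gram count vectors."""
    dot = sum(c1[g] * c2[g] for g in c1 if g in c2)
    norm = sqrt(sum(v * v for v in c1.values())) * sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

def weighted_similarity(answer_sent, candidate_sent, weights=(0.2, 0.3, 0.5)):
    """Weighted sum of {1,2,3}-gram cosine similarities (the weights are placeholders)."""
    a, c = answer_sent.lower().split(), candidate_sent.lower().split()
    return sum(w * cosine(ngrams(a, n), ngrams(c, n))
               for n, w in zip((1, 2, 3), weights))

def silver_paragraph(answer_sent, paragraphs, threshold=0.7):
    """Return the first retrieved paragraph that contains a sentence whose
    weighted similarity to the answer sentence reaches the threshold."""
    for paragraph in paragraphs:        # e.g. top-5 Lucene hits, already sentence-split
        if any(weighted_similarity(answer_sent, sent) >= threshold for sent in paragraph):
            return paragraph
    return None

# Toy usage: one retrieved "paragraph" given as a list of sentences.
hit = silver_paragraph("the capital of France is Paris",
                       [["Paris is large .", "The capital of France is Paris ."]])
print(hit)
```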
[ "Unanswerable" ]
1,913
qasper_e
en
null
987ab3762f78d7c33799631fec402ae2ed66ecbb0708347b
what accents are present in the corpus?
Introduction Nowadays, deep learning techniques outperform conventional methods in most speech-related tasks. Training robust deep neural networks for each task depends on the availability of powerful processing GPUs, as well as standard and large scale datasets. In text-independent speaker verification, large-scale datasets are available, thanks to the NIST SRE evaluations and other data collection projects such as VoxCeleb BIBREF0. In text-dependent speaker recognition, experiments with end-to-end architectures conducted on large proprietary databases have demonstrated their superiority over traditional approaches BIBREF1. Yet, contrary to text-independent speaker recognition, text-dependent speaker recognition lacks large-scale publicly available databases. The two most well-known datasets are probably RSR2015 BIBREF2 and RedDots BIBREF3. The former contains speech data collected from 300 individuals in a controlled manner, while the latter is used primarily for evaluation rather than training, due to its small number of speakers (only 64). Motivated by this lack of a large-scale dataset for text-dependent speaker verification, we chose to proceed with the collection of the DeepMine dataset, which we expect to become a standard benchmark for the task. Apart from speaker recognition, large amounts of training data are also required for training automatic speech recognition (ASR) systems. Such datasets should not only be large in size but should also be characterized by high variability with respect to speakers, age and dialects. While several datasets with these properties are available for languages like English, Mandarin, and French, this is not the case for several other languages, such as Persian. To this end, we proceeded with collecting a large-scale dataset suitable for building robust ASR models in Persian. The main goal of the DeepMine project was to collect speech from at least a few thousand speakers, enabling research and development of deep learning methods. The project started at the beginning of 2017, and after designing the database and developing the Android and server applications, the data collection began in the middle of 2017. The project finished at the end of 2018 and the cleaned-up and final version of the database was released at the beginning of 2019. In BIBREF4, the running project and its data collection scenarios were described, along with some preliminary results and statistics. In this paper, we announce the final and cleaned-up version of the database, describe its different parts and provide various evaluation setups for each part. Finally, since the database was designed mainly for text-dependent speaker verification purposes, some baseline results are reported for this task on the official evaluation setups. Additional baseline results are also reported for Persian speech recognition. However, due to space limitations in this paper, the baseline results are not reported for all the database parts and conditions. They will be defined and reported in the database technical documentation and in a future journal paper. Data Collection DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. 
The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4. Data Collection ::: Post-Processing In order to clean-up the database, the main post-processing step was to filter out problematic utterances. Possible problems include speaker word insertions (e.g. repeating some part of a phrase), deletions, substitutions, and involuntary disfluencies. To detect these, we implemented an alignment stage, similar to the second alignment stage in the LibriSpeech project BIBREF5. In this method, a custom decoding graph was generated for each phrase. The decoding graph allows for word skipping and word insertion in the phrase. For text-dependent and text-prompted parts of the database, such errors are not allowed. Hence, any utterances with errors were removed from the enrollment and test lists. For the speech recognition part, a sub-part of the utterance which is correctly aligned to the corresponding transcription is kept. After the cleaning step, around 190 thousand utterances with full transcription and 10 thousand with sub-part alignment have remained in the database. Data Collection ::: Statistics After processing the database and removing problematic respondents and utterances, 1969 respondents remained in the database, with 1149 of them being male and 820 female. 297 of the respondents could not read English and have therefore read only the Persian prompts. About 13200 sessions were recorded by females and similarly, about 9500 sessions by males, i.e. women are over-represented in terms of sessions, even though their number is 17% smaller than that of males. Other useful statistics related to the database are shown in Table TABREF4. The last status of the database, as well as other related and useful information about its availability can be found on its website, together with a limited number of samples. DeepMine Database Parts The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots). This part can also serve for Persian ASR training. Each part is described in more details below. Table TABREF11 shows the number of unique phrases in each part of the database. For the English text-dependent part, the following phrases were selected from part1 of the RedDots database, hence the RedDots can be used as an additional training set for this part: “My voice is my password.” “OK Google.” “Artificial intelligence is for real.” “Actions speak louder than words.” “There is no such thing as a free lunch.” DeepMine Database Parts ::: Part1 - Text-dependent (TD) This part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded. We have created three experimental setups with different numbers of speakers in the evaluation set. 
For each setup, speakers with more recording sessions are included in the evaluation set and the rest of the speakers are used for training in the background set (in the database, all background sets are basically training data). The rows in Table TABREF13 correspond to the different experimental setups and show the numbers of speakers in each set. Note that, for English, we have filtered the (Persian native) speakers by the ability to read English. Therefore, there are fewer speakers in each set for English than for Persian. There is a small “dev” set in each setup which can be used for parameter tuning to prevent over-tuning on the evaluation set. For each experimental setup, we have defined several official trial lists with different numbers of enrollment utterances per trial in order to investigate the effects of having different amounts of enrollment data. All trials in one trial list have the same number of enrollment utterances (3 to 6) and only one test utterance. All enrollment utterances in a trial are taken from different consecutive sessions and the test utterance is taken from yet another session. From all the setups and conditions, the 100-spk setup with 3-session enrollment (3-sess) is considered the main evaluation condition. In Table TABREF14, the numbers of trials for Persian 3-sess are shown for the different types of trials in text-dependent speaker verification (SV). Note that for Imposter-Wrong (IW) trials (i.e. an imposter speaker pronouncing a wrong phrase), we merely create one wrong trial for each Imposter-Correct (IC) trial to limit the huge number of possible trials for this case. So, the numbers of trials for the IC and IW cases are the same. DeepMine Database Parts ::: Part2 - Text-prompted (TP) For this part, in each session, 3 random sequences of Persian month names are shown to the respondent in two modes: In the first mode, the sequence consists of all 12 months, which will be used for speaker enrollment. The second mode contains a sequence of 3 month names that will be used as a test utterance. In every 8 sessions received by a respondent from the server, there are 3 enrollment phrases of all 12 months (all in just one session), and $7 \times 3$ other test phrases, containing fewer words. For a respondent who can read English, 3 random sequences of English digits are also recorded in each session. In one of the sessions, these sequences contain all digits and the remaining ones contain only 4 digits. Similar to the text-dependent case, three experimental setups with different numbers of speakers in the evaluation set are defined (corresponding to the rows in Table TABREF16). However, a different strategy is used for defining trials: Depending on the enrollment condition (1- to 3-sess), trials are enrolled on utterances of all words from 1 to 3 different sessions (i.e. 3 to 9 utterances). Further, we consider two conditions for test utterances: seq test utterances with only 3 or 4 words and full test utterances with all words (i.e. the same words as in enrollment but in a different order). From all setups and all conditions, the 100-spk setup with 1-session enrollment (1-sess) is considered the main evaluation condition for the text-prompted case. In Table TABREF16, the numbers of trials (summed over both seq and full conditions) for Persian 1-sess are shown for the different types of trials in the text-prompted SV. Again, we just create one IW trial for each IC trial. 
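For illustration, a prompt-generation routine following the text-prompted protocol just described (a full 12-month sequence for enrollment and short 3-word sequences for test) could look like the minimal Python sketch below. The transliterated month list and the session layout are only schematic and are not taken from the actual collection software.

```python
import random

# Persian (solar Hijri) month names, transliterated here for readability.
MONTHS = ["Farvardin", "Ordibehesht", "Khordad", "Tir", "Mordad", "Shahrivar",
          "Mehr", "Aban", "Azar", "Dey", "Bahman", "Esfand"]

def enrollment_prompt(rng):
    """All 12 month names in a random order (used for speaker enrollment)."""
    months = MONTHS[:]
    rng.shuffle(months)
    return " ".join(months)

def test_prompt(rng, n_words=3):
    """A short random sequence of month names (used as a test utterance)."""
    return " ".join(rng.sample(MONTHS, n_words))

rng = random.Random(0)
print(enrollment_prompt(rng))                     # one of the 3 full-length enrollment phrases
print([test_prompt(rng) for _ in range(3)])       # the 3 shorter test phrases of a session
```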
DeepMine Database Parts ::: Part3 - Text-independent (TI) In this part, 8 Persian phrases that have already been transcribed on the phone level are displayed to the respondent. These phrases are chosen mostly from news and Persian Wikipedia. If the respondent is unable to read English, instead of the 5 fixed phrases and 3 random digit strings, 8 other Persian phrases are prompted to the respondent, so that there are exactly 24 phrases in each recording session. This part can be useful for at least three potential applications. First, it can be used for text-independent speaker verification. The second application of this part (same as Part4 of RedDots) is text-prompted speaker verification using random text (instead of a random sequence of words). Finally, the third application is large vocabulary speech recognition in Persian (explained in the next sub-section). Based on the recording sessions, we created two experimental setups for speaker verification. In the first one, respondents with at least 17 recording sessions are included in the evaluation set, respondents with 16 sessions in the development set, and the rest of the respondents in the background set (which can be used as training data). In the second setup, respondents with at least 8 sessions are included in the evaluation set, respondents with 6 or 7 sessions in the development set, and the rest of the respondents in the background set. Table TABREF18 shows the numbers of speakers in each set of the database for the text-independent SV case. For text-independent SV, we have considered 4 scenarios for enrollment and 4 scenarios for test. The speaker can be enrolled using utterances from 1, 2 or 3 consecutive sessions (1sess to 3sess) or using 8 utterances from 8 different sessions. The test speech can be one utterance (1utt) for the short-duration scenario or all utterances in one session (1sess) for the long-duration case. In addition, test speech can be selected from the 5 English phrases for cross-language testing (enrollment using Persian utterances and test using English utterances). From all setups, 1sess-1utt and 1sess-1sess for the 438-spk set are considered the main evaluation setups for the text-independent case. Table TABREF19 shows the numbers of trials for these setups. For text-prompted SV with random text, the same setup as in the text-independent case, together with the corresponding utterance transcriptions, can be used. DeepMine Database Parts ::: Part3 - Speech Recognition As explained before, Part3 of the DeepMine database can be used for Persian read speech recognition. There are only a few databases for speech recognition in Persian BIBREF6, BIBREF7. Hence, this part can at least partly address this problem and enable robust speech recognition applications in Persian. Additionally, it can be used for speaker recognition applications, such as training deep neural networks (DNNs) for extracting bottleneck features BIBREF8, or for collecting sufficient statistics using DNNs for i-vector training. We have randomly selected 50 speakers (25 for each gender) from all speakers in the database who have between 25 and 50 minutes of net speech (without silence parts) as test speakers. For each test speaker, the utterances in the first 5 sessions are included in the (small) test set and the other utterances of the test speakers are considered as a large test set. The remaining utterances of the other speakers are included in the training set. The test set, large test set and training set contain 5.9, 28.5 and 450 hours of speech, respectively. 
There are about 8300 utterances in Part3 which contain only Persian full names (i.e. first and family name pairs). Each phrase consists of several full names and their phoneme transcriptions were extracted automatically using a trained Grapheme-to-Phoneme (G2P) model. These utterances can be used to evaluate the performance of systems for name recognition, which is usually more difficult than normal speech recognition because of the lack of a reliable language model. Experiments and Results Due to space limitations, we present results only for Persian text-dependent speaker verification and speech recognition. Experiments and Results ::: Speaker Verification Experiments We conducted an experiment on the text-dependent speaker verification part of the database, using the i-vector based method proposed in BIBREF9, BIBREF10, and applied it to the Persian portion of Part1. In this experiment, 20-dimensional MFCC features along with first and second derivatives are extracted from 16 kHz signals using HTK BIBREF11 with 25 ms Hamming windowed frames with 15 ms overlap. The reported results are obtained with a 400-dimensional gender-independent i-vector based system. The i-vectors are first length-normalized and are further normalized using phrase- and gender-dependent Regularized Within-Class Covariance Normalization (RWCCN) BIBREF10. Cosine distance is used to obtain speaker verification scores and phrase- and gender-dependent s-norm is used for normalizing the scores. For aligning speech frames to Gaussian components, monophone HMMs with 3 states and 8 Gaussian components in each state are used BIBREF10. We only model the phonemes which appear in the 5 Persian text-dependent phrases. For the speaker verification experiments, the results are reported in terms of Equal Error Rate (EER) and the Normalized Detection Cost Function as defined for NIST SRE08 ($\mathrm {NDCF_{0.01}^{min}}$) and NIST SRE10 ($\mathrm {NDCF_{0.001}^{min}}$). As shown in Table TABREF22, in text-dependent SV there are 4 types of trials: Target-Correct and Imposter-Correct refer to trials in which the pass-phrase is uttered correctly by target and imposter speakers respectively, and in the same manner, Target-Wrong and Imposter-Wrong refer to trials in which speakers uttered a wrong pass-phrase. In this paper, only the correct trials (i.e. Target-Correct as target trials vs Imposter-Correct as non-target trials) are considered for evaluating systems, as it has been shown that these are the most challenging trials in text-dependent SV BIBREF8, BIBREF12. Table TABREF23 shows the results of the text-dependent experiments using the Persian 100-spk, 3-sess setup. For filtering trials, the respondents' mobile brand and model were used in this experiment. In the table, the first two letters in the filter notation relate to the target trials and the second two letters (i.e. the right side of the colon) relate to the non-target trials. For target trials, the first Y means the enrollment and test utterances were recorded by the target speaker using devices of the same brand. The second Y means both recordings were done using exactly the same device model. Similarly, the first Y for non-target trials means that the devices of the target and imposter speakers are from the same brand (i.e. manufacturer). The second Y means that, in addition to the same brand, both devices have the same model. So, the most difficult target trials are “NN”, where the speaker has used a different device at test time. 
In the same manner, the most difficult non-target trials which should be rejected by the system are “YY”, where the imposter speaker has used the same device model as the target speaker (note that this does not mean physically the same device, because each speaker participated in the project using a personal mobile device). Hence, the similarity in the recording channel makes rejection more difficult. The first row in Table TABREF23 shows the results for all trials. By comparing the results with the best published results on RSR2015 and RedDots BIBREF10, BIBREF8, BIBREF12, it is clear that the DeepMine database is more challenging than both the RSR2015 and RedDots databases. For RSR2015, the same i-vector/HMM-based method with both RWCCN and s-norm achieved an EER of less than 0.3% for both genders (Table VI in BIBREF10). The conventional Relevance MAP adaptation with HMM alignment, without applying any channel-compensation techniques (i.e. without applying RWCCN and s-norm due to the lack of suitable training data), achieved an EER of around 1.5% for male speakers on RedDots Part1 (Table XI in BIBREF10). It is worth noting that the EERs for the DeepMine database without any channel-compensation techniques are 2.1 and 3.7% for males and females, respectively. One interesting advantage of the DeepMine database compared to both RSR2015 and RedDots is having several target speakers with more than one mobile device. This allows us to analyse the effects of channel compensation methods. The second row in Table TABREF23 corresponds to the most difficult trials, where the target trials come from mobile devices with different models while imposter trials come from the same device models. It is clear that severe degradation was caused by this kind of channel effect (i.e. decreasing within-speaker similarities while increasing between-speaker similarities), especially for females. The results in the third row show the condition when target speakers at test time use exactly the same device that was used for enrollment. Comparing this row with the results in the first row shows how much improvement can be achieved when exactly the same device is used by the target speaker. The results in the fourth row show the condition when imposter speakers also use the same device model at test time to fool the system. So, in this case, there is no device mismatch in any of the trials. By comparing the results with the third row, we can see how much degradation is caused if we only consider the non-target trials with the same device. The fifth row shows similar results when the imposter speakers use a device of the same brand as the target speaker but with a different model. Surprisingly, in this case, the degradation is negligible, which suggests that different device models from the same brand (manufacturer) have different recording channel properties. The degraded female results in the sixth row as compared to the third row show the effect of using a different device model from the same brand for target trials. For males, the filters bring almost the same subsets of trials, which explains the very similar results in this case. Looking at the first two and the last row of Table TABREF23, one can notice the significantly worse performance obtained for the female trials as compared to males. Note that these three rows include target trials where the devices used for enrollment do not necessarily match the devices used for recording test utterances. 
On the other hand, in rows 3 to 6, which exclude such mismatched trials, the performance for males and females is comparable. This suggests that the degraded results for females are caused by some problematic trials with device mismatch. The exact reason for this degradation is so far unclear and needs further investigation. In the last row of the table, the condition of the second row is relaxed: the target device should have a different model, possibly from the same brand, and the imposter device only needs to be from the same brand. In this case, as expected, the performance degradation is smaller than in the second row. Experiments and Results ::: Speech Recognition Experiments In addition to speaker verification, we present several speech recognition experiments on Part3. The experiments were performed with the Kaldi toolkit BIBREF13. For training the HMM-based monophone model, only the 20 thousand shortest utterances are used, while the whole training data is used for the other models. The DNN-based acoustic model is a time-delay DNN with low-rank factorized layers and skip connections without i-vector adaptation (a modified network from one of the best performing LibriSpeech recipes). The network is shown in Table TABREF25: there are 16 F-TDNN layers, with dimension 1536 and linear bottleneck layers of dimension 256. The acoustic model is trained for 10 epochs using lattice-free maximum mutual information (LF-MMI) with cross-entropy regularization BIBREF14. Re-scoring is done using a pruned trigram language model and the size of the dictionary is around 90,000 words. Table TABREF26 shows the results in terms of word error rate (WER) for the different evaluated methods. As can be seen, the created database can be used to train well-performing and practically usable Persian ASR models. Conclusions In this paper, we have described the final version of a large speech corpus, the DeepMine database. It has been collected using crowdsourcing and, to the best of our knowledge, it is the largest public text-dependent and text-prompted speaker verification database in two languages: Persian and English. In addition, it is the largest text-independent speaker verification evaluation database, making it suitable to robustly evaluate state-of-the-art methods under different conditions. Alongside these appealing properties, it comes with phone-level transcriptions, making it suitable for training deep neural network models for Persian speech recognition. We provided several evaluation protocols for each part of the database. The protocols allow researchers to investigate the performance of different methods in various scenarios and to study the effects of channels, duration and phrase text on the performance. We also provide two test sets for speech recognition: one normal test set with a few minutes of speech for each speaker and one large test set with more speech (30 minutes on average) that can be used for any speaker adaptation method. As baseline results, we reported the performance of an i-vector/HMM based method on the Persian text-dependent part. Moreover, we conducted speech recognition experiments using conventional HMM-based methods, as well as a state-of-the-art deep neural network based method using the Kaldi toolkit, with promising performance. The text-dependent results have shown that the DeepMine database is more challenging than the RSR2015 and RedDots databases. Acknowledgments The data collection project was mainly supported by Sharif DeepMine company. 
The work on the paper was supported by Czech National Science Foundation (GACR) project "NEUREM3" No. 19-26934X and the National Programme of Sustainability (NPU II) project "IT4Innovations excellence in science - LQ1602".
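Since the speaker verification baselines above are reported in terms of EER (and minimum detection cost), a minimal sketch of how the EER can be computed from raw target and non-target score lists is given below. It assumes plain Python lists of, e.g., cosine scores and is not tied to the toolkit actually used for the reported numbers.

```python
def equal_error_rate(target_scores, nontarget_scores):
    """Return the EER (as a fraction) by sweeping a threshold over all observed scores.

    target_scores: similarity scores of genuine (target) trials
    nontarget_scores: similarity scores of imposter (non-target) trials
    """
    thresholds = sorted(set(target_scores) | set(nontarget_scores))
    best_gap, eer = float("inf"), 1.0
    for thr in thresholds:
        # False rejection rate: targets scored below the threshold.
        frr = sum(s < thr for s in target_scores) / len(target_scores)
        # False acceptance rate: non-targets scored at or above the threshold.
        far = sum(s >= thr for s in nontarget_scores) / len(nontarget_scores)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy example with two small, partially overlapping score distributions.
print(equal_error_rate([0.9, 0.8, 0.75, 0.6], [0.4, 0.35, 0.3, 0.65]))
```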
[ "Unanswerable", "Unanswerable" ]
3,794
qasper_e
en
null
f1d8b6d074397f7bead6a59be4c66dc5f1f1a058f74f1402
On what datasets are experiments performed?
Introduction Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge base BIBREF1 and image BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligence tutor systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer. Question generation for reading comprehension is firstly formalized as a declarative-to-interrogative sentence transformation problem with predefined rules or templates BIBREF4, BIBREF0. With the rise of neural models, Du2017LearningTA propose to model this task under the sequence-to-sequence (Seq2Seq) learning framework BIBREF5 with attention mechanism BIBREF6. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence. Zhou2017NeuralQG propose the answer-aware question generation setting which assumes the answer, a contiguous span inside the input sentence, is already known before question generation. To capture answer-relevant words in the sentence, they adopt a BIO tagging scheme to incorporate the answer position embedding in Seq2Seq learning. Furthermore, Sun2018AnswerfocusedAP propose that tokens close to the answer fragments are more likely to be answer-relevant. Therefore, they explicitly encode the relative distance between sentence words and the answer via position embedding and position-aware attention. Although existing proximity-based answer-aware approaches achieve reasonable performance, we argue that such intuition may not apply to all cases especially for sentences with complex structure. For example, Figure FIGREF1 shows such an example where those approaches fail. This sentence contains a few facts and due to the parenthesis (i.e. “the area's coldest month”), some facts intertwine: “The daily mean temperature in January is 0.3$^\circ $C” and “January is the area's coldest month”. From the question generated by a proximity-based answer-aware baseline, we find that it wrongly uses the word “coldest” but misses the correct word “mean” because “coldest” has a shorter distance to the answer “0.3$^\circ $C”. In summary, their intuition that “the neighboring words of the answer are more likely to be answer-relevant and have a higher chance to be used in the question” is not reliable. To quantitatively show this drawback of these models, we implement the approach proposed by Sun2018AnswerfocusedAP and analyze its performance under different relative distances between the answer and other non-stop sentence words that also appear in the ground truth question. The results are shown in Table TABREF2. We find that the performance drops at most 36% when the relative distance increases from “$0\sim 10$” to “$>10$”. In other words, when the useful context is located far away from the answer, current proximity-based answer-aware approaches will become less effective, since they overly emphasize neighboring words of the answer. To address this issue, we extract the structured answer-relevant relations from sentences and propose a method to jointly model such structured relation and the unstructured sentence for question generation. 
The structured answer-relevant relation is likely to be a to-the-point context and thus can help keep the generated question to the point. For example, Figure FIGREF1 shows that our framework can extract the right answer-relevant relation (“The daily mean temperature in January”, “is”, “32.6$^\circ $F (0.3$^\circ $C)”) among multiple facts. With the help of such structured information, our model is less likely to be confused by sentences with a complex structure. Specifically, we first extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules. Nevertheless, it is challenging to train a model to effectively utilize both the unstructured sentence and the structured answer-relevant relation because both of them could be noisy: the unstructured sentence may contain multiple facts which are irrelevant to the target question, while the limitations of the OpenIE tool may lead to less accurate extracted relations. To exploit their advantages simultaneously and avoid their drawbacks, we design a gated attention mechanism and a dual copy mechanism based on the encoder-decoder framework, where the former learns to control the information flow between the unstructured and structured inputs, while the latter learns to copy words from the two sources to maintain the informativeness and faithfulness of generated questions. In the evaluations on the SQuAD dataset, our system achieves significant and consistent improvements as compared to all baseline methods. In particular, we demonstrate that the improvement is more significant with a larger relative distance between the answer and other non-stop sentence words that also appear in the ground truth question. Furthermore, our model is capable of generating diverse questions for a single sentence-answer pair where the sentence conveys multiple relations of its answer fragment. Framework Description In this section, we first introduce the task definition and our protocol to extract structured answer-relevant relations. Then we formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism. Framework Description ::: Problem Definition We formalize our task as an answer-aware Question Generation (QG) problem BIBREF8, which assumes answer phrases are given before generating questions. Moreover, answer phrases are shown as text fragments in passages. Formally, given the sentence $S$, the answer $A$, and the answer-relevant relation $M$, the task of QG aims to find the best question $\overline{Q}$ such that $\overline{Q} = \arg \max _{Q} P(Q|S,A,M)$, where $A$ is a contiguous span inside $S$. Framework Description ::: Answer-relevant Relation Extraction We utilize an off-the-shelf OpenIE toolbox to derive structured answer-relevant relations from sentences as to-the-point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations at different granularities from one sentence. We join all arguments in the extracted n-ary relation into a sequence as our to-the-point context. Figure FIGREF5 shows n-ary relations extracted by OpenIE. As we can see, OpenIE extracts multiple relations for complex sentences. 
Here we select the most informative relation according to three criteria, in order of descending importance: (1) having the maximal number of overlapping tokens between the answer and the relation; (2) being assigned the highest confidence score by OpenIE; (3) containing the maximum number of non-stop words. As shown in Figure FIGREF5, our criteria can select answer-relevant relations (waved in Figure FIGREF5), which is especially useful for sentences with extraneous information. In rare cases where OpenIE cannot extract any relation, we treat the sentence itself as the to-the-point context. Table TABREF8 shows some statistics to verify the intuition that the extracted relations can serve as more to-the-point context. We find that the tokens in relations are 61% more likely to be used in the target question than the tokens in sentences, and thus they are more to the point. On the other hand, on average the sentences contain one more question token than the relations (1.86 vs. 2.87). Therefore, it is still necessary to take the original sentence into account to generate a more accurate question. Framework Description ::: Our Proposed Model ::: Overview. As shown in Figure FIGREF10, our framework consists of four components: (1) Sentence Encoder and Relation Encoder, (2) Decoder, (3) Gated Attention Mechanism and (4) Dual Copy Mechanism. The sentence encoder and relation encoder encode the unstructured sentence and the structured answer-relevant relation, respectively. To select and combine the source information from the two encoders, a gated attention mechanism is employed to jointly attend to both contextualized information sources, and a dual copy mechanism copies words from either the sentence or the relation. Framework Description ::: Our Proposed Model ::: Answer-aware Encoder. We employ two encoders to integrate information from the unstructured sentence $S$ and the answer-relevant relation $M$ separately. The sentence encoder takes in feature-enriched embeddings including word embeddings $\mathbf {w}$, linguistic embeddings $\mathbf {l}$ and answer position embeddings $\mathbf {a}$. We follow BIBREF9 to transform POS and NER tags into continuous representations ($\mathbf {l}^p$ and $\mathbf {l}^n$) and adopt a BIO labelling scheme to derive the answer position embedding (B: the first token of the answer, I: tokens within the answer fragment except the first one, O: tokens outside of the answer fragment). For each word $w_i$ in the sentence $S$, we simply concatenate all features as input: $\mathbf {x}_i^s= [\mathbf {w}_i; \mathbf {l}^p_i; \mathbf {l}^n_i; \mathbf {a}_i]$. Here $[\mathbf {a};\mathbf {b}]$ denotes the concatenation of vectors $\mathbf {a}$ and $\mathbf {b}$. We use bidirectional LSTMs to encode the sentence $(\mathbf {x}_1^s, \mathbf {x}_2^s, ..., \mathbf {x}_n^s)$ to get a contextualized representation for each token: $\overrightarrow{\mathbf {h}}^{s}_i = \overrightarrow{\text{LSTM}}(\mathbf {x}_i^s, \overrightarrow{\mathbf {h}}^{s}_{i-1})$ and $\overleftarrow{\mathbf {h}}^{s}_i = \overleftarrow{\text{LSTM}}(\mathbf {x}_i^s, \overleftarrow{\mathbf {h}}^{s}_{i+1})$, where $\overrightarrow{\mathbf {h}}^{s}_i$ and $\overleftarrow{\mathbf {h}}^{s}_i$ are the hidden states at the $i$-th time step of the forward and the backward LSTMs. The output state of the sentence encoder is the concatenation of forward and backward hidden states: $\mathbf {h}^{s}_i=[\overrightarrow{\mathbf {h}}^{s}_i;\overleftarrow{\mathbf {h}}^{s}_i]$. The contextualized representation of the sentence is $(\mathbf {h}^{s}_1, \mathbf {h}^{s}_2, ..., \mathbf {h}^{s}_n)$. For the relation encoder, we first join all items in the n-ary relation $M$ into a sequence. 
Then we only take answer position embedding as an extra feature for the sequence: $\mathbf {x}_i^m= [\mathbf {w}_i; \mathbf {a}_i]$. Similarly, we take another bidirectional LSTMs to encode the relation sequence and derive the corresponding contextualized representation $(\mathbf {h}^{m}_1, \mathbf {h}^{m}_2, ..., \mathbf {h}^{m}_n)$. Framework Description ::: Our Proposed Model ::: Decoder. We use an LSTM as the decoder to generate the question. The decoder predicts the word probability distribution at each decoding timestep to generate the question. At the t-th timestep, it reads the word embedding $\mathbf {w}_{t}$ and the hidden state $\mathbf {u}_{t-1}$ of the previous timestep to generate the current hidden state: Framework Description ::: Our Proposed Model ::: Gated Attention Mechanism. We design a gated attention mechanism to jointly attend the sentence representation and the relation representation. For sentence representation $(\mathbf {h}^{s}_1, \mathbf {h}^{s}_2, ..., \mathbf {h}^{s}_n)$, we employ the Luong2015EffectiveAT's attention mechanism to obtain the sentence context vector $\mathbf {c}^s_t$, where $\mathbf {W}_a$ is a trainable weight. Similarly, we obtain the vector $\mathbf {c}^m_t$ from the relation representation $(\mathbf {h}^{m}_1, \mathbf {h}^{m}_2, ..., \mathbf {h}^{m}_n)$. To jointly model the sentence and the relation, a gating mechanism is designed to control the information flow from two sources: where $\odot $ represents element-wise dot production and $\mathbf {W}_g, \mathbf {W}_h$ are trainable weights. Finally, the predicted probability distribution over the vocabulary $V$ is computed as: where $\mathbf {W}_V$ and $\mathbf {b}_V$ are parameters. Framework Description ::: Our Proposed Model ::: Dual Copy Mechanism. To deal with the rare and unknown words, the decoder applies the pointing method BIBREF10, BIBREF11, BIBREF12 to allow copying a token from the input sentence at the $t$-th decoding step. We reuse the attention score $\mathbf {\alpha }_{t}^s$ and $\mathbf {\alpha }_{t}^m$ to derive the copy probability over two source inputs: Different from the standard pointing method, we design a dual copy mechanism to copy from two sources with two gates. The first gate is designed for determining copy tokens from two sources of inputs or generate next word from $P_V$, which is computed as $g^v_t = \text{sigmoid}(\mathbf {w}^v_g \tilde{\mathbf {h}}_t + b^v_g)$. The second gate takes charge of selecting the source (sentence or relation) to copy from, which is computed as $g^c_t = \text{sigmoid}(\mathbf {w}^c_g [\mathbf {c}_t^s;\mathbf {c}_t^m] + b^c_g)$. Finally, we combine all probabilities $P_V$, $P_S$ and $P_M$ through two soft gates $g^v_t$ and $g^c_t$. The probability of predicting $w$ as the $t$-th token of the question is: Framework Description ::: Our Proposed Model ::: Training and Inference. Given the answer $A$, sentence $S$ and relation $M$, the training objective is to minimize the negative log-likelihood with regard to all parameters: where $\mathcal {\lbrace }Q\rbrace $ is the set of all training instances, $\theta $ denotes model parameters and $\text{log} P(Q|A,S,M;\theta )$ is the conditional log-likelihood of $Q$. In testing, our model targets to generate a question $Q$ by maximizing: Experimental Setting ::: Dataset & Metrics We conduct experiments on the SQuAD dataset BIBREF3. It contains 536 Wikipedia articles and 100k crowd-sourced question-answer pairs. 
The questions are written by crowd-workers and the answers are spans of tokens in the articles. We employ two different data splits, following Zhou2017NeuralQG and Du2017LearningTA. In Zhou2017NeuralQG, the original SQuAD development set is evenly divided into dev and test sets, while Du2017LearningTA treats the SQuAD development set as its development set and splits the original SQuAD training set into a training set and a test set. We also filter out questions which do not have any overlapped non-stop words with the corresponding sentences and perform some preprocessing steps, such as tokenization and sentence splitting. The data statistics are given in Table TABREF27. We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC. Experimental Setting ::: Baseline Models We compare with the following models. s2s BIBREF13 proposes an attention-based sequence-to-sequence neural network for question generation. NQG++ BIBREF9 takes the answer position feature and linguistic features into consideration and equips the Seq2Seq model with a copy mechanism. M2S+cp BIBREF14 conducts multi-perspective matching between the answer and the sentence to derive an answer-aware sentence representation for question generation. s2s+MP+GSA BIBREF8 introduces gated self-attention into the encoder and a maxout pointer mechanism into the decoder. We report their sentence-level results for a fair comparison. Hybrid BIBREF15 is a hybrid model which considers the answer embedding for question word generation and the position of context words for modeling the relative distance between the context words and the answer. ASs2s BIBREF16 replaces the answer in the sentence with a special token to avoid its appearance in the generated questions. Experimental Setting ::: Implementation Details We take the most frequent 20k words as our vocabulary and use the GloVe word embeddings BIBREF20 for initialization. The embedding dimensions for POS, NER and answer position are set to 20. We use two-layer LSTMs in both the encoder and the decoder, and the LSTM hidden unit size is set to 600. We use dropout BIBREF21 with probability $p=0.3$. All trainable parameters, except word embeddings, are randomly initialized with the Xavier uniform in $(-0.1, 0.1)$ BIBREF22. For training, we use SGD as the optimizer with a minibatch size of 64 and an initial learning rate of 1.0. We train the model for 15 epochs and start halving the learning rate after the 8th epoch. We set the gradient norm upper bound to 3 during training and adopt teacher forcing. At test time, we select the model with the lowest perplexity and employ beam search with size 3 to generate questions. All hyper-parameters and models are selected on the validation dataset. Results and Analysis ::: Main Results Table TABREF30 shows automatic evaluation results for our model and the baselines (copied from their papers). Our proposed model, which combines structured answer-relevant relations and unstructured sentences, achieves significant improvements over proximity-based answer-aware models BIBREF9, BIBREF15 on both dataset splits.
Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods: they can only capture short dependencies around answer fragments, while our extractions can capture both short and long dependencies given the answer fragments. Moreover, our proposed framework is a general one to jointly leverage structured relations and unstructured sentences. All compared baseline models which only consider unstructured sentences can be further enhanced under our framework. Recall that existing proximity-based answer-aware models perform poorly when the distance between the answer fragment and other non-stop sentence words that also appear in the ground truth question is large (Table TABREF2). Here we investigate whether our proposed model, using the structured answer-relevant relations, can alleviate this issue, by conducting experiments for our model under the same setting as in Table TABREF2. The performance broken down by relative distance is shown in Table TABREF40. We find that our proposed model outperforms Hybrid (our re-implemented version for this experiment) on all ranges of relative distances, which shows that the structured answer-relevant relations can capture both short- and long-term answer-relevant dependencies in sentences. Furthermore, comparing the performance difference between Hybrid and our model, we find the improvements become more significant when the distance increases from “$0\sim 10$” to “$>10$”. One reason is that our model can extract relations with distant dependencies to the answer, which greatly helps our model ignore the extraneous information. Proximity-based answer-aware models may overly emphasize the neighboring words of answers and become less effective as the useful context moves further away from the answer in complex sentences. In fact, the breakdown intervals in Table TABREF40 naturally bound the sentence length: for “$>10$”, say, the sentences in this group must be longer than 10. Thus, the length variances in these two intervals could be significant. To further validate whether our model can capture such long-term dependencies, we rerun the analysis of Table TABREF40 only for long sentences (length $>$ 20) in each interval. The improvement percentages over Hybrid are shown in Table TABREF40, and they become more significant when the distance increases from “$0\sim 10$” to “$>10$”. Results and Analysis ::: Case Study Figure FIGREF42 provides example questions generated by crowd-workers (ground truth questions), the baseline Hybrid BIBREF15, and our model. In the first case, there are two subsequences in the input and the answer has no relation to the second subsequence. However, we see that the baseline model prediction copies the irrelevant words “The New York Times”, while our model can avoid using the extraneous subsequence “The New York Times noted ...” with the help of the structured answer-relevant relation. Compared with the ground truth question, our model cannot capture the cross-sentence information like “her fifth album”, where the techniques in paragraph-level QG models BIBREF8 may help. In the second case, as discussed in Section SECREF1, the sentence contains several facts, some of which intertwine. We find that our model can capture distant answer-relevant dependencies such as “mean temperature”, while the proximity-based baseline model wrongly takes neighboring words of the answer like “coldest” into the generated question.
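The relative-distance breakdown used in Table TABREF2 and Table TABREF40 can be made concrete with a small sketch; whether the reported statistic is the maximum of the per-word distances or some other aggregate, as well as the stop-word list, are assumptions of this illustration.

STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "was", "and", "what", "which"}

def relative_distance(sentence_tokens, answer_span, question_tokens):
    # answer_span = (start, end), inclusive token indices of the answer fragment.
    start, end = answer_span
    question_set = {q.lower() for q in question_tokens} - STOPWORDS
    distances = [
        min(abs(i - start), abs(i - end))
        for i, tok in enumerate(sentence_tokens)
        if tok.lower() in question_set and not (start <= i <= end)
    ]
    return max(distances) if distances else 0

def bucket(distance):
    return "0~10" if distance <= 10 else ">10"

Bucketing every test example with these two functions, and averaging the evaluation metric inside each bucket, reproduces the kind of breakdown reported above.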
Results and Analysis ::: Diverse Question Generation Another interesting observation is that for the same answer-sentence pair, our model can generate diverse questions by taking different answer-relevant relations as input. This capability improves the interpretability of our model because the model is given not only what is to be asked (i.e., the answer) but also the related fact (i.e., the answer-relevant relation) to be covered in the question. In contrast, proximity-based answer-aware models can only generate one question given the sentence-answer pair, regardless of how many answer-relevant relations the sentence contains. We think this capability also validates our motivation: questions should be generated according to answer-relevant relations rather than the neighboring words of answer fragments. Figure FIGREF45 shows two examples of diverse question generation. In the first case, the answer fragment `Hugh L. Dryden' is the appositive of `NASA Deputy Administrator' but the subject of the following tokens `announced the Apollo program ...'. Our framework can extract these two answer-relevant relations, and by feeding them to our model separately, we obtain two questions asking about different relations with regard to the answer. Related Work The topic of question generation, initially motivated for educational purposes, was tackled by designing many complex rules for specific question types BIBREF4, BIBREF23. Heilman2010GoodQS improve rule-based question generation by introducing a statistical ranking model. First, they remove extraneous information in the sentence to transform it into a simpler one, which can be transformed easily into a succinct question with predefined sets of general rules. Then they adopt an overgenerate-and-rank approach to select the best candidate considering several features. With the rise of dominant neural sequence-to-sequence learning models BIBREF5, Du2017LearningTA frame question generation as a sequence-to-sequence learning problem. Compared with rule-based approaches, neural models BIBREF24 can generate more fluent and grammatical questions. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence, which confuses the model during training and prevents concrete automatic evaluation. To tackle this issue, Zhou2017NeuralQG propose the answer-aware question generation setting, which assumes the answer is already known and acts as a contiguous span inside the input sentence. They adopt a BIO tagging scheme to incorporate the answer position information as learned embedding features in Seq2Seq learning. Song2018LeveragingCI explicitly model the interaction between the answer and the sentence with a multi-perspective matching model. Kim2019ImprovingNQ also focus on the answer information and propose an answer-separated Seq2Seq model that masks the answer with special tokens. All answer-aware neural models treat question generation as a one-to-one mapping problem, but existing models perform poorly for sentences with a complex structure (as shown in Table TABREF2). Our work is inspired by the process of removing extraneous information in BIBREF0, BIBREF25. Different from Heilman2010GoodQS, which directly uses the simplified sentence for generation, and cao2018faithful, which only aggregates two sources of information via gated attention in summarization, we propose to combine the structured answer-relevant relation and the original sentence.
Factoid question generation from structured text was initially investigated by Serban2016GeneratingFQ, but our focus here is leveraging structured inputs to help question generation over unstructured sentences. Our proposed model can take advantage of unstructured sentences and structured answer-relevant relations to maintain the informativeness and faithfulness of generated questions. The proposed model can also be generalized to other conditional sequence generation tasks which require multiple sources of inputs, e.g., distractor generation for multiple choice questions BIBREF26. Conclusions and Future Work In this paper, we propose a question generation system which combines unstructured sentences and structured answer-relevant relations for generation. The unstructured sentences maintain the informativeness of generated questions, while the structured answer-relevant relations keep the questions faithful. Extensive experiments demonstrate that our proposed model achieves state-of-the-art performance across several metrics. Furthermore, our model can generate diverse questions with different structured answer-relevant relations. For future work, there are some interesting dimensions to explore, such as difficulty levels BIBREF27, paragraph-level information BIBREF8 and conversational question generation BIBREF28. Acknowledgments This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund). We would like to thank the anonymous reviewers for their comments. We would also like to thank the Department of Computer Science and Engineering, The Chinese University of Hong Kong for the conference grant support.
[ "SQuAD", "SQuAD" ]
3,757
qasper_e
en
null
fa1c557ea338532fb1c34125934ce83b2b70e02dd12931d3
Which dataset do they train their models on?
Introduction Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 . This study investigates two different types of NNs: recurrent neural networks (RNNs) and convolutional neural networks (CNNs) as well as their combination. We make the following contributions: (1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part. (2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization which has not been investigated in the literature for relation classification before. (3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset. Related Work In 2010, manually annotated data for relation classification was released in the context of a SemEval shared task BIBREF8 . Shared task participants used, i.a., support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 . Recently, their results on this data set were outperformed by applying NNs BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . zeng2014 built a CNN based only on the context between the relation arguments and extended it with several lexical features. kim2014 and others used convolutional filters of different sizes for CNNs. nguyen applied this to relation classification and obtained improvements over single filter sizes. deSantos2015 replaced the softmax layer of the CNN with a ranking layer. They showed improvements and published the best result so far on the SemEval dataset, to our knowledge. socher used another NN architecture for relation classification: recursive neural networks that built recursive sentence representations based on syntactic parsing. In contrast, zhang investigated a temporal structured RNN with only words as input. They used a bi-directional model with a pooling layer on top. Convolutional Neural Networks (CNN) CNNs perform a discrete convolution on an input matrix with a set of different filters. For NLP tasks, the input matrix represents a sentence: Each column of the matrix stores the word embedding of the corresponding word. By applying a filter with a width of, e.g., three columns, three neighboring words (trigram) are convolved. Afterwards, the results of the convolution are pooled. Following collobertWeston, we perform max-pooling which extracts the maximum value for each filter and, thus, the most informative n-gram for the following steps. Finally, the resulting values are concatenated and used for classifying the relation expressed in the sentence. 
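A rough sketch of this convolution-and-pooling pipeline is given below, loosely following the configuration described later in the experiments (window sizes 2-5, 300 feature maps each, tanh activation, 19 SemEval classes); it omits the position features, the extended middle context and the ranking layer, and all module and variable names are illustrative rather than the exact implementation.

import torch
import torch.nn as nn

class RelationCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, n_filters=300,
                 window_sizes=(2, 3, 4, 5), n_classes=19):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # One 1D convolution per window size, each spanning all embedding dimensions.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, kernel_size=w) for w in window_sizes
        )
        self.out = nn.Linear(n_filters * len(window_sizes), n_classes)

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)     # (batch, emb_dim, seq_len)
        # Max pooling keeps only the highest activation of each filter.
        pooled = [conv(x).tanh().max(dim=2).values for conv in self.convs]
        return self.out(torch.cat(pooled, dim=1))   # (batch, n_classes)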
Input: Extended Middle Context One of our contributions is a new input representation especially designed for relation classification. The contexts are split into three disjoint regions based on the two relation arguments: the left context, the middle context and the right context. Since in most cases the middle context contains the most relevant information for the relation, we want to focus on it but not ignore the other regions completely. Hence, we propose to use two contexts: (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. Due to the repetition of the middle context, we force the network to pay special attention to it. The two contexts are processed by two independent convolutional and max-pooling layers. After pooling, the results are concatenated to form the sentence representation. Figure FIGREF3 depicts this procedure. It shows an example sentence: “He had chest pain and <e1>headaches</e1> from <e2>mold</e2> in the bedroom.” If we only considered the middle context “from”, the network might be tempted to predict a relation like Entity-Origin(e1,e2). However, by also taking the left and right context into account, the model can detect the relation Cause-Effect(e2,e1). While this could also be achieved by integrating the whole context into the model, using the whole context can have disadvantages for longer sentences: the max pooling step can easily choose a value from a part of the sentence which is far away from the mention of the relation. By splitting the context into two parts, we reduce this danger. Repeating the middle context increases the chance for the max pooling step to pick a value from the middle context. Convolutional Layer Following previous work (e.g., BIBREF5, BIBREF6), we use 2D filters spanning all embedding dimensions. After convolution, a max pooling operation is applied that stores only the highest activation of each filter. We apply filters with different window sizes 2-5 (multi-windows) as in BIBREF5, i.e., spanning a different number of input words. Recurrent Neural Networks (RNN) Traditional RNNs consist of an input vector, a history vector and an output vector. Based on the representation of the current input word and the previous history vector, a new history is computed. Then, an output is predicted (e.g., using a softmax layer). In contrast to most traditional RNN architectures, we use the RNN for sentence modeling, i.e., we predict an output vector only after processing the whole sentence and not after each word. Training is performed using backpropagation through time BIBREF9, which unfolds the recurrent computations of the history vector for a certain number of time steps. To avoid exploding gradients, we use gradient clipping with a threshold of 10 BIBREF10. Input of the RNNs Initial experiments showed that using trigrams as input instead of single words led to superior results. Hence, at timestep INLINEFORM0 we give the model not only word INLINEFORM1 but the trigram INLINEFORM2, obtained by concatenating the corresponding word embeddings. Connectionist Bi-directional RNNs Especially for relation classification, the processing of the relation arguments might be easier with knowledge of the succeeding words. Therefore, in bi-directional RNNs, not only a history vector of word INLINEFORM0 is considered but also a future vector.
This leads to the following conditioned probability for the history INLINEFORM1 at time step INLINEFORM2 : DISPLAYFORM0 Thus, the network can be split into three parts: a forward pass which processes the original sentence word by word (Equation EQREF6 ); a backward pass which processes the reversed sentence word by word (Equation ); and a combination of both (Equation ). All three parts are trained jointly. This is also depicted in Figure FIGREF7 . Combining forward and backward pass by adding their hidden layer is similar to BIBREF7 . We, however, also add a connection to the previous combined hidden layer with weight INLINEFORM0 to be able to include all intermediate hidden layers into the final decision of the network (see Equation ). We call this “connectionist bi-directional RNN”. In our experiments, we compare this RNN with uni-directional RNNs and bi-directional RNNs without additional hidden layer connections. Word Representations Words are represented by concatenated vectors: a word embedding and a position feature vector. Pretrained word embeddings. In this study, we used the word2vec toolkit BIBREF11 to train embeddings on an English Wikipedia from May 2014. We only considered words appearing more than 100 times and added a special PADDING token for convolution. This results in an embedding training text of about 485,000 terms and INLINEFORM0 tokens. During model training, the embeddings are updated. Position features. We incorporate randomly initialized position embeddings similar to zeng2014, nguyen and deSantos2015. In our RNN experiments, we investigate different possibilities of integrating position information: position embeddings, position embeddings with entity presence flags (flags indicating whether the current word is one of the relation arguments), and position indicators BIBREF7 . Objective Function: Ranking Loss Ranking. We applied the ranking loss function proposed in deSantos2015 to train our models. It maximizes the distance between the true label INLINEFORM0 and the best competitive label INLINEFORM1 given a data point INLINEFORM2 . The objective function is DISPLAYFORM0 with INLINEFORM0 and INLINEFORM1 being the scores for the classes INLINEFORM2 and INLINEFORM3 respectively. The parameter INLINEFORM4 controls the penalization of the prediction errors and INLINEFORM5 and INLINEFORM6 are margins for the correct and incorrect classes. Following deSantos2015, we set INLINEFORM7 . We do not learn a pattern for the class Other but increase its difference to the best competitive label by using only the second summand in Equation EQREF10 during training. Experiments and Results We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. For evaluation, we applied the official scoring script and report the macro F1 score which also served as the official result of the shared task. RNN and CNN models were implemented with theano BIBREF12 , BIBREF13 . For all our models, we use L2 regularization with a weight of 0.0001. For CNN training, we use mini batches of 25 training examples while we perform stochastic gradient descent for the RNN. The initial learning rates are 0.2 for the CNN and 0.01 for the RNN. We train the models for 10 (CNN) and 50 (RNN) epochs without early stopping. 
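Before continuing with the remaining training settings, the ranking objective described above can be sketched as follows; this is a rough numpy illustration based on the cited formulation, the concrete values of the scaling factor and of the two margins are not repeated here, and all names are illustrative.

import numpy as np

def ranking_loss(scores, true_class, gamma, m_pos, m_neg):
    # scores: 1D array of scores for the "real" relation classes (the artificial
    # class Other has no learned pattern and hence no score).
    # Pass true_class=None when the gold label is Other: then only the second
    # summand is applied, pushing down the best competing score.
    if true_class is None:
        s_neg = scores.max()
        return np.log1p(np.exp(gamma * (m_neg + s_neg)))
    s_neg = np.delete(scores, true_class).max()
    loss = np.log1p(np.exp(gamma * (m_pos - scores[true_class])))
    loss += np.log1p(np.exp(gamma * (m_neg + s_neg)))
    return loss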
As activation functions, we apply tanh for the CNN and capped ReLU for the RNN. For tuning the hyperparameters, we split the training data into two parts: 6.5k (training) and 1.5k (development) sentences. We also tuned the learning rate schedule on dev. Besides training single models, we also report ensemble results for which we combined the presented single models with a voting process. Performance of CNNs As a baseline system, we implemented a CNN similar to the one described by zeng2014. It consists of a standard convolutional layer with filters of only one window size, followed by a softmax layer. As input it uses the middle context. In contrast to zeng2014, our CNN does not have an additional fully connected hidden layer. Therefore, we increased the number of convolutional filters to 1200 to keep the number of parameters comparable. With this, we obtain a baseline result of 73.0. After including 5-dimensional position features, the performance was improved to 78.6 (comparable to 78.9 as reported by zeng2014 without linguistic features). In the next step, we investigate how this result changes if we successively add further features to our CNN: multi-windows for convolution (window sizes: 2,3,4,5 and 300 feature maps each), a ranking layer instead of softmax, and our proposed extended middle context. Table TABREF12 shows the results. Note that all numbers are produced by CNNs with a comparable number of parameters. We also report F1 for increasing the word embedding dimensionality from 50 to 400. The position embedding dimensionality is 5 in combination with 50-dimensional word embeddings and 35 with 400-dimensional word embeddings. Our results show that especially the ranking layer and the embedding size have an important impact on the performance. Performance of RNNs As a baseline for the RNN models, we apply a uni-directional RNN which predicts the relation after processing the whole sentence. With this model, we achieve an F1 score of 61.2 on the SemEval test set. Afterwards, we investigate the impact of different position features on the performance of uni-directional RNNs (position embeddings, position embeddings concatenated with a flag indicating whether the current word is an entity or not, and position indicators BIBREF7). The results indicate that position indicators (i.e., artificial words that indicate entity presence) perform best on the SemEval data. We achieve an F1 score of 73.4 with them. However, the difference from using position embeddings with entity flags is not statistically significant. Similar to our CNN experiments, we successively vary the RNN models by using bi-directionality, by adding connections between the hidden layers (“connectionist”), by applying ranking instead of softmax to predict the relation and by increasing the word embedding dimension to 400. The results in Table TABREF14 show that all of these variations lead to statistically significant improvements. Especially the additional hidden layer connections and the integration of the ranking layer have a large impact on the performance. Combination of CNNs and RNNs Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly. The combination achieves an F1 score of 84.9, which is better than the performance of the two NN types alone.
It, thus, confirms our assumption that the networks provide complementary information: while the RNN computes a weighted combination of all words in the sentence, the CNN extracts the most informative n-grams for the relation and only considers their resulting activations. Comparison with State of the Art Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features. Conclusion In this paper, we investigated different features and architectural choices for convolutional and recurrent neural networks for relation classification without using any linguistic features. For convolutional neural networks, we presented a new context representation for relation classification. Furthermore, we introduced connectionist recurrent neural networks for sentence classification tasks and performed the first experiments with ranking recurrent neural networks. Finally, we showed that even a simple combination of convolutional and recurrent neural networks improved results. With our neural models, we achieved new state-of-the-art results on the SemEval 2010 task 8 benchmark data. Acknowledgments Heike Adel is a recipient of the Google European Doctoral Fellowship in Natural Language Processing and this research is supported by this fellowship. This research was also supported by Deutsche Forschungsgemeinschaft: grant SCHU 2246/4-2.
[ "relation classification dataset of the SemEval 2010 task 8", "SemEval 2010 task 8 BIBREF8" ]
2,393
qasper_e
en
null
d95b33cbe3496fb98638b3c3b1a956847b6472aa8d18a5cc
How big was the corpora they trained ELMo on?
Introduction Deep contextualised representations of linguistic entities (words and/or sentences) are used in many current state-of-the-art NLP systems. The most well-known examples of such models are arguably ELMo BIBREF0 and BERT BIBREF1. A long-standing tradition in the field of applying deep learning to NLP tasks can be summarised as follows: as little pre-processing as possible. It is widely believed that lemmatization or other text input normalisation is not necessary. Advanced neural architectures based on character input (CNNs, BPE, etc.) are supposed to be able to learn how to handle spelling and morphology variations themselves, even for languages with rich morphology: `just add more layers!'. Contextualised embedding models follow this tradition: as a rule, they are trained on raw text collections, with minimal linguistic pre-processing. Below, we show that this is not entirely true. It is known that for the previous generation of word embedding models (`static' ones like word2vec BIBREF2, where a word always has the same representation regardless of the context in which it occurs), lemmatization of the training and testing data improves their performance. BIBREF3 showed that this is true at least for semantic similarity and analogy tasks. In this paper, we describe our experiments in finding out whether lemmatization helps modern contextualised embeddings (on the example of ELMo). We compare the performance of ELMo models trained on the same corpus before and after lemmatization. It is impossible to evaluate contextualised models on `static' tasks like lexical semantic similarity or word analogies. Because of this, we turned to word sense disambiguation in context (WSD) as an evaluation task. In brief, we use contextualised representations of ambiguous words from the top layer of an ELMo model to train word sense classifiers and find out whether using lemmas instead of tokens helps in this task (see Section SECREF5). We experiment with the English and Russian languages and show that they differ significantly in the influence of lemmatization on the WSD performance of ELMo models. Our findings and the contributions of this paper are: Linguistic text pre-processing still matters in some tasks, even for contemporary deep representation learning algorithms. For the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task. This is unlike English, where the differences are negligible. Related work ELMo contextual word representations are learned in an unsupervised way through language modelling BIBREF0. The general architecture consists of a two-layer BiLSTM on top of a convolutional layer which takes character sequences as its input. Since the model uses fully character-based token representations, it avoids the problem of out-of-vocabulary words. Because of this, the authors explicitly recommend not to use any normalisation except tokenization for the input text. However, as we show below, while this is true for English, for other languages feeding ELMo with lemmas instead of raw tokens can improve WSD performance. Word sense disambiguation, or WSD BIBREF4, is the NLP task of choosing a word sense from a pre-defined sense inventory, given the context in which the word is used.
WSD fits well into our aim to intrinsically evaluate ELMo models, since solving the problem of polysemy and homonymy was one of the original promises of contextualised embeddings: their primary difference from the previous generation of word embedding models is that contextualised approaches generate different representations for homographs depending on the context. We use two lexical sample WSD test sets, further described in Section SECREF4. Training ELMo For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same. The texts were tokenized and lemmatized with the UDPipe models for the respective languages trained on the Universal Dependencies 2.3 treebanks BIBREF5. UDPipe yields a lemmatization accuracy of about 96% for English and 97% for Russian; thus, for the task at hand, we considered it to be gold and did not try to further improve the quality of normalisation itself (although it is not entirely error-free, see Section SECREF4). ELMo models were trained on these corpora using the original TensorFlow implementation, for 3 epochs with batch size 192, on two GPUs. To train faster, we decreased the dimensionality of the LSTM layers from the default 4096 to 2048 for all the models. Word sense disambiguation test sets We used two WSD datasets for evaluation: Senseval-3 for English BIBREF6 and RUSSE'18 for Russian BIBREF7. The Senseval-3 dataset consists of lexical samples for nouns, verbs and adjectives; we used only noun target words: argument, arm, atmosphere, audience, bank, degree, difference, difficulty, disc, image, interest, judgement, organization, paper, party, performance, plan, shelter, sort, source. An example for the ambiguous word argument is given below: In some situations Postscript can be faster than the escape sequence type of printer control file. It uses post fix notation, where arguments come first and operators follow. This is basically the same as Reverse Polish Notation as used on certain calculators, and follows directly from the stack based approach. In this sentence, the word `argument' is used in the sense of a mathematical operator. The RUSSE'18 dataset was created in 2018 for the shared task in Russian word sense induction. This dataset contains only nouns; the list of words with their English translations is given in Table TABREF30. Originally, it also includes the words байка `tale/fleece' and гвоздика `clove/small nail', but their senses are ambiguous only in some inflectional forms (not in lemmas), therefore we decided to exclude these words from evaluation. The Russian dataset is more homogeneous compared to the English one, as for all the target words there is approximately the same number of context words in the examples. This is achieved by applying the lexical window (25 words before and after the target word) and cropping everything that falls outside of that window. In the English dataset, on the contrary, the whole paragraph with the target word is taken into account.
We have tried cropping the examples for English as well, but it did not result in any change in the quality of classification. In the end, we decided not to apply the lexical window to the English dataset so as not to alter it and rather use it in the original form. Here is an example from the RUSSE'18 for the ambiguous word мандарин `mandarin' in the sense `Chinese official title': “...дипломатического корпуса останкам богдыхана и императрицы обставлено было с необычайной торжественностью. Тысячи мандаринов и других высокопоставленных лиц разместились шпалерами на трех мраморных террасах ведущих к...” `...the diplomatic bodies of the Bogdikhan and the Empress was furnished with extraordinary solemnity. Thousands of mandarins and other dignitaries were placed on three marble terraces leading to...'. Table TABREF31 compares both datasets. Before usage, they were pre-processed in the same way as the training corpora for ELMo (see Section SECREF3), thus producing a lemmatized and a non-lemmatized version of each. As we can see from Table TABREF31, for 20 target words in English there are 24 lemmas, and for 18 target words in Russian there are 36 different lemmas. These numbers are explained by occasional errors in the UDPipe lemmatization. Another interesting thing to observe is the number of distinct word forms for every language. For English, there are 39 distinct forms for 20 target nouns: singular and plural for every noun, except `atmosphere', which is used only in the singular form. Thus, the inflectional variability of English nouns is covered by the dataset almost completely. For Russian, we observe 132 distinct forms for 18 target nouns, giving more than 7 inflectional forms per word. Note that this still covers only half of all the inflectional variability of Russian: this language features 12 distinct forms for each noun (6 cases and 2 numbers). To sum up, the RUSSE'18 dataset is morphologically far more complex than Senseval-3, reflecting the properties of the respective languages. In the next section we will see that this leads to substantial differences regarding comparisons between token-based and lemma-based ELMo models. Experiments Following BIBREF8, we decided to avoid using any standard train-test splits for our WSD datasets. Instead, we rely on per-word random splits and 5-fold cross-validation. This means that for each target word we randomly generate 5 different divisions of its list of context sentences into train and test sets, and then train and test 5 different classifier models on this data. The resulting performance score for each target word is the average of the 5 macro-F1 scores produced by these classifiers. ELMo models can be employed for the WSD task in two different ways: either by fine-tuning the model or by extracting word representations from it and then using them as features in a downstream classifier. We decided to stick to the second (feature extraction) approach, since it is conceptually and computationally simpler. Additionally, BIBREF9 showed that for most NLP tasks (except those focused on sentence pairs) the performance of feature extraction and fine-tuning is nearly the same. Thus we extracted the single vector of the target word from the ELMo top layer (`target' rows in Table TABREF32) or the averaged ELMo top layer vectors of all words in the context sentence (`averaged' rows in Table TABREF32).
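A bare-bones sketch of this per-word evaluation loop is given below, assuming the ELMo top-layer vectors for every context of a target word have already been extracted into a feature matrix with matching sense labels; the variable names are illustrative, and scikit-learn's stratified folds stand in here for the five random splits described above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def word_score(X, y, n_splits=5):
    # X: (n_contexts, elmo_dim) target-word vectors, y: sense labels.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=n_splits, scoring="f1_macro")
    return scores.mean()

# Averaging word_score over all target words gives the dataset-level figure:
# dataset_score = np.mean([word_score(X_w, y_w) for X_w, y_w in per_word_data])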
For comparison, we also report the scores of the `averaged vectors' representations with Continuous Skipgram BIBREF2 embedding models trained on the English or Russian Wikipedia dumps (`SGNS' rows): before the advent of contextualised models, this was one of the most widely used ways to `squeeze' the meaning of a sentence into a fixed-size vector. Of course, this does not mean that the meaning of a sentence always determines the senses all its words are used in. However, averaging representations of words in contexts as a proxy for the sense of one particular word is a long-established tradition in WSD, starting at least from BIBREF10. Also, since SGNS is a `static' embedding model, it is of course not possible to use only target word vectors as features: they would be identical whatever the context is. Simple logistic regression was used as the classification algorithm. We also tested a multi-layer perceptron (MLP) classifier with a 200-neuron hidden layer, which yielded essentially the same results. This leads us to believe that our findings are not classifier-dependent. Table TABREF32 shows the results, together with the random and most frequent sense (MFS) baselines for each dataset. First, ELMo outperforms SGNS for both languages, which comes as no surprise. Second, the approach with averaging representations from all words in the sentence is not beneficial for WSD with ELMo: for English data, it clearly loses to a single target word representation, and for Russian there are no significant differences (and using a single target word is preferable from the computational point of view, since it does not require the averaging operation). Thus, below we discuss only the single target word usage mode of ELMo. But the most important part is the comparison between using tokens or lemmas in the train and test data. For the `static' SGNS embeddings, it does not significantly change the WSD scores for either language. The same is true for English ELMo models, where differences are negligible and seem to be simple fluctuations. However, for Russian, ELMo (target) on lemmas outperforms ELMo on tokens, with a small but significant improvement. The most plausible explanation for this is that (despite the purely character-based input of ELMo) the model does not have to learn the idiosyncrasies of a particular language's morphology. Instead, it can use its (limited) capacity to better learn lexical semantic structures, leading to better WSD performance. The box plots FIGREF33 and FIGREF35 illustrate the dispersion of scores across words in the test sets for English and Russian, respectively (orange lines are medians). In the next section (SECREF6) we analyse the results qualitatively. Qualitative analysis In this section we focus on the comparison of scores for the Russian dataset. The classifier for Russian had to choose between fewer classes (two or three), which made the scores higher and more consistent than for the English dataset. Overall, we see improvements in the scores for the majority of words, which proves that lemmatization for morphologically rich languages is beneficial. We decided to analyse more closely those words for which the difference in the scores between lemma-based and token-based models was statistically significant. By `significant' we mean that the scores differ by more than one standard deviation (the largest standard deviation value in the two sets was taken). The resulting list of target words with a significant difference in scores is given in Table TABREF36.
We can see that among 18 words in the dataset only 3 exhibit significant improvement in their scores when moving from tokens to lemmas in the input data. This shows that even though the overall F1 scores for the Russian data have shown the plausibility of lemmatization, this improvement is mostly driven by a few words. It should be noted that these words' scores feature very low standard deviation values (for other words, standard deviation values were above 0.1, making F1 differences insignificant). Such behaviour can be caused by a more consistent differentiation of contexts for the various senses of these 3 words. For example, with the word кабачок `squash / small restaurant', the contexts for both senses can be similar, since they are all related to food. This makes the WSD scores unstable. On the other hand, for акция `stock, share / event', крона `crown (tree / coin)' or круп `croup (horse body part / illness)', their senses are not related, which resulted in more stable results and a significant difference in the scores (see Table TABREF36). There is only one word in the RUSSE'18 dataset for which the score has strongly decreased when moving to lemma-based models: домино `domino (game / costume)'. In fact, the score difference here lies on the border of one standard deviation, so strictly speaking it is not really significant. However, the word still presents an interesting phenomenon. Домино is the only target noun in the RUSSE'18 that has no inflected forms, since it is a borrowed word. This leaves no room for improvement when using lemma-based ELMo models: all tokens of this word are already identical. At the same time, some information about inflected word forms in the context can be useful, but it is lost during lemmatization, and this leads to the decreased score. Arguably, this means that lemmatization brings along both advantages and disadvantages for WSD with ELMo. For inflected words (which constitute the majority of the Russian vocabulary) the profits outweigh the losses, but for atypical indeclinable words it can be the opposite. The scores for the excluded target words байка `tale / fleece' and гвоздика `clove / small nail' are given in Table TABREF37 (recall that they were excluded because of being ambiguous only in some inflectional forms). For these words we can see a great improvement with lemma-based models. This, of course, stems from the fact that these words in different senses have different lemmas. Therefore, the results are heavily dependent on the quality of lemmatization. Conclusion We evaluated how the ability of ELMo contextualised word embedding models to disambiguate word senses depends on the nature of the training data. In particular, we compared the models trained on raw tokenized corpora and those trained on the corpora with word tokens replaced by their normal forms (lemmas). The models we trained are publicly available via the NLPL word embeddings repository BIBREF3. In the majority of research papers on deep learning approaches to NLP, it is assumed that lemmatization is not necessary, especially when using powerful contextualised embeddings. Our experiments show that this is indeed true for languages with simple morphology (like English). However, for rich-morphology languages (like Russian), using lemmatized training data yields small but consistent improvements in the word sense disambiguation task.
These improvements are not observed for rare words which lack inflected forms; this further supports our hypothesis that better WSD scores of lemma-based models are related to them better handling multiple word forms in morphology-rich languages. Of course, lemmatization is by all means not a silver bullet. In other tasks, where inflectional properties of words are important, it can even hurt the performance. But this is true for any NLP systems, not only deep learning based ones. The take-home message here is twofold: first, text pre-processing still matters for contemporary deep learning algorithms. Their impressive learning abilities do not always allow them to infer normalisation rules themselves, from simply optimising the language modelling task. Second, the nature of language at hand matters as well, and differences in this nature can result in different decisions being optimal or sub-optimal at the stage of deep learning models training. The simple truth `English is not representative of all languages on Earth' still holds here. In the future, we plan to extend our work by including more languages into the analysis. Using Russian and English allowed us to hypothesise about the importance of morphological character of a language. But we only scratched the surface of the linguistic diversity. To verify this claim, it is necessary to analyse more strongly inflected languages like Russian as well as more weakly inflected (analytical) languages similar to English. This will help to find out if the inflection differences are important for training deep learning models across human languages in general.
[ "2174000000, 989000000", "2174 million tokens for English and 989 million tokens for Russian" ]
2,958
qasper_e
en
null
704c0c46089ffbd8f6c94c06071a440d930d82a59a1d4edc
What are the qualitative experiments performed on benchmark datasets?
Introduction In its inception, language modelling used one-hot vector encodings of words. However, such an encoding captures only the arbitrary (e.g., alphabetic) ordering of the vocabulary and not semantic similarity between words. Vector space models help to learn word representations in a lower-dimensional space and also capture semantic similarity. Learning word embeddings aids natural language processing tasks such as question answering and reasoning BIBREF0, stance detection BIBREF1, and claim verification BIBREF2. Recent models BIBREF3, BIBREF4 work on the basis that words with similar contexts share semantic similarity. BIBREF4 proposes a neural probabilistic model which models the target word probability conditioned on the previous words using a recurrent neural network. Word2Vec models BIBREF3 such as continuous bag-of-words (CBOW) predict the target word given the context, while the skip-gram model works in reverse, predicting the context given the target word. GloVe embeddings, in contrast, are based on global matrix factorization over local contexts BIBREF5. However, the aforementioned models do not handle words with multiple meanings (polysemies). BIBREF6 proposes a neural network approach considering both local and global contexts in learning word embeddings (point estimates). Their multiple prototype model handles polysemous words by providing a priori heuristics about word senses in the dataset. BIBREF7 proposes an alternative to handle polysemous words using a modified skip-gram model and an EM algorithm. BIBREF8 presents a non-parametric alternative to handle polysemy. However, these approaches fail to consider entailment relations among words. BIBREF9 learn a Gaussian distribution per word using the expected likelihood kernel. However, for polysemous words, this may lead to word distributions with larger variances, as a single Gaussian may have to cover various senses. BIBREF10 proposes a multimodal word distribution approach, which captures polysemy. However, its energy-based objective function fails to consider asymmetry and hence entailment. Textual entailment recognition is necessary to capture lexical inference relations such as causality (for example, mosquito $\rightarrow $ malaria), hypernymy (for example, dog $\models $ animal), etc. In this paper, we propose to obtain multi-sense word embedding distributions by using a variant of the max-margin objective based on an asymmetric KL divergence energy function to capture textual entailment. Multi-sense distributions are advantageous in capturing the polysemous nature of words and in reducing the uncertainty per word by distributing it across senses. However, computing the KL divergence between mixtures of Gaussians is intractable, and we use a KL divergence approximation based on stricter upper and lower bounds. While capturing textual entailment (asymmetry), we have also not compromised on capturing symmetric similarity between words (for example, funny and hilarious), which will be elucidated in Section $3.1$. We also show the effectiveness of the proposed approach on benchmark word similarity and entailment datasets in the experimental section. Methodology ::: Word Representation Probabilistic representations of words help one model uncertainty in word representation as well as polysemy. Given a corpus $V$ containing a list of words, each represented as $w$, the probability density for a word $w$ can be represented as a mixture of Gaussians with $C$ components BIBREF10.
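Concretely, the density presumably takes the standard Gaussian-mixture form, reconstructed here for reference:

$$f_w(\operatorname{\mathbf {x}}) = \sum _{j=1}^{C} p_{w,j}\, \mathcal {N}\left(\operatorname{\mathbf {x}};\, \operatorname{\mathbf {\mu }}_{w,j}, \Sigma _{w,j}\right).$$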
Here, $p_{w,j}$ represents the probability of word $w$ belonging to the component $j$, $\operatorname{\mathbf {\mu }}_{w,j}$ represents the $D$-dimensional word representation corresponding to the $j^{th}$ component sense of the word $w$, and $\Sigma _{w,j}$ represents the uncertainty in the representation of word $w$ belonging to component $j$. Objective function The model parameters (means, covariances and mixture weights) $\theta $ can be learnt using a variant of the max-margin objective BIBREF11. Here $E_\theta (\cdot , \cdot )$ represents an energy function which assigns a score to a pair of words, $w$ is a particular word under consideration, $c$ its positive context (same context), and $c^{\prime }$ the negative context. The objective aims to push the energy between a word $w$ and its positive context $c$ higher than the energy with its negative context $c^{\prime }$ by a margin of at least $m$. Thus, word pairs in the same context get a higher energy than word pairs in dissimilar contexts. BIBREF10 consider the energy function to be the expected likelihood kernel, which is defined as follows. This is similar to the cosine similarity metric over vectors, and the energy between two words is maximum when they have similar distributions. But the expected likelihood kernel is a symmetric metric, which is not suitable for capturing ordering among words and hence entailment. Objective function ::: Proposed Energy function As each word is represented by a mixture of Gaussian distributions, KL divergence is a better choice of energy function to capture the distance between distributions. Since KL divergence is minimal when the distributions are similar and maximal when they are dissimilar, the energy function is taken as the exponentiated negative KL divergence. However, computing the KL divergence between Gaussian mixtures is intractable, and obtaining the exact KL value is not possible. One way of approximating the KL is by Monte Carlo sampling, but it requires a large number of samples to get a good approximation and is computationally expensive in a high-dimensional embedding space. Alternatively, BIBREF12 presents a KL approximation between Gaussian mixtures where they obtain an upper bound through the product-of-Gaussians approximation method and a lower bound through the variational approximation method. In BIBREF13, the authors combine the lower and upper bounds from the approximation methods of BIBREF12 to provide a stricter bound on the KL between Gaussian mixtures. Let us consider Gaussian mixtures for the words $w$ and $v$ as follows. The approximate KL divergence between the Gaussian mixture representations of the words $w$ and $v$ is shown in equation DISPLAY_FORM8. More details on the approximation are included in the Supplementary Material. where $EL_{ik}(w,w) = \int f_{w,i} (\operatorname{\mathbf {x}}) f_{w,k} (\operatorname{\mathbf {x}}) d\operatorname{\mathbf {x}}$ and $EL_{ij}(w,v) = \int f_{w,i} (\operatorname{\mathbf {x}}) f_{v,j} (\operatorname{\mathbf {x}}) d\operatorname{\mathbf {x}}$. Note that the expected likelihood kernel appears component-wise inside the approximate KL divergence derivation. One advantage of using KL as the energy function is that it enables capturing the asymmetry in entailment datasets. For example, let us consider the word 'chair' with two senses, 'bench' and 'sling', and the word 'wood' with two senses, 'trees' and 'furniture'. The word chair ($w$) is entailed within wood ($v$), i.e., chair $\models $ wood.
Now, minimizing the KL divergence necessitates maximizing $\log {\sum _j p_{v,j} \exp ({-KL(f_{w,i} (\operatorname{\mathbf {x}})||f_{v,j}(\operatorname{\mathbf {x}}))})}$, which in turn minimizes $KL(f_{w,i}(\operatorname{\mathbf {x}})||f_{v,j}(\operatorname{\mathbf {x}}))$. This results in the support of the $i^{th}$ component of $w$ being within the $j^{th}$ component of $v$; this holds for all component pairs, leading to the entailment of $w$ within $v$. Consequently, we can see that bench $\models $ trees, bench $\models $ furniture, sling $\models $ trees, and sling $\models $ furniture. Thus, it introduces a lexical relationship between the senses of the child word and those of the parent word. Minimizing the KL also necessitates maximizing the $\log {\sum _j {p_{v,j}} EL_{ij}(w,v)}$ term for all component pairs of $w$ and $v$. This is similar to maximizing the expected likelihood kernel, which brings the means of $f_{w,i}(\operatorname{\mathbf {x}})$ and $f_{v,j}(\operatorname{\mathbf {x}})$ closer (weighted by their co-variances), as discussed in BIBREF10. Hence, the proposed approach captures the best of both worlds, thereby catering to both word similarity and entailment. We also note that minimizing the KL divergence necessitates minimizing $\log {\sum _k p_{w,k} \exp ({-KL(f_{w,i}||f_{w,k})})}$, which in turn maximizes $KL(f_{w,i}||f_{w,k})$. This prevents the different mixture components of a word from converging to a single Gaussian and encourages capturing different possible senses of the word. The same is also achieved by minimizing the $\sum _k {p_{w,k}} EL_{ik}(w,w)$ term, which acts as a regularizer promoting diversity in the learned senses of a word. Experimentation and Results We train our proposed model GM$\_$KL (Gaussian Mixture using KL Divergence) on the Text8 dataset BIBREF14, which is a pre-processed corpus of $17M$ words from Wikipedia. Of these, 71,290 unique and frequent words are chosen using the subsampling trick in BIBREF15. We compare GM$\_$KL with the previous approaches w2g BIBREF9 (single Gaussian model) and w2gm BIBREF10 (mixture-of-Gaussians model with the expected likelihood kernel). For all the models used for experimentation, the embedding size ($D$) was set to 50, the number of mixtures to 2, the context window length to 10, and the batch size to 128. The word embeddings were initialized using a uniform distribution in the range of $[-\sqrt{\frac{3}{D}}$, $\sqrt{\frac{3}{D}}]$ such that the expectation of variance is 1 and mean 0 BIBREF16. One could also consider initializing the word embeddings using other contextual representations such as BERT BIBREF17 and ELMo BIBREF18 in the proposed approach. In order to purely analyze the performance of GM$\_$KL over the other models, we have chosen initialization using the uniform distribution for the experiments. For computational efficiency, diagonal covariances are used, similar to BIBREF10. Each mixture probability is constrained to the range $[0,1]$, summing to 1, by optimizing over unconstrained scores in the range $(-\infty ,\infty )$ and converting the scores to probabilities using the softmax function. The mixture scores are initialized to 0 to ensure fairness among all the components. The threshold for negative sampling was set to $10^{-5}$, as recommended in BIBREF3. Mini-batch gradient descent with the Adagrad optimizer BIBREF19 was used with the initial learning rate set to $0.05$. Table TABREF9 shows the qualitative results of GM$\_$KL. Given a query word and a component id, the set of nearest neighbours, along with their respective component ids, is listed.
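Such neighbour lists can be produced, for instance, by ranking all (word, component) pairs by the cosine similarity of their mean vectors against the queried component; the sketch below illustrates this, with the array layout and function names being assumptions of the illustration, since the exact retrieval procedure is not specified.

import numpy as np

def nearest_neighbours(means, vocab, query_word, query_comp, top_k=10):
    # means: array of shape (n_words, n_components, D) holding component means.
    q = means[vocab.index(query_word), query_comp]
    flat = means.reshape(-1, means.shape[-1])
    sims = flat @ q / (np.linalg.norm(flat, axis=1) * np.linalg.norm(q) + 1e-9)
    n_comp = means.shape[1]
    results = []
    for idx in np.argsort(-sims):
        word, comp = vocab[idx // n_comp], idx % n_comp
        if word != query_word:
            results.append((word, int(comp), float(sims[idx])))
        if len(results) == top_k:
            break
    return results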
For eg., the word `plane' in its 0th component captures the `geometry' sense and so are its neighbours, and its 1st component captures `vehicle' sense and so are its corresponding neighbours. Other words such as `rock' captures both the `metal' and `music' senses, `star' captures `celebrity' and `astronomical' senses, and `phone' captures `telephony' and `internet' senses. We quantitatively compare the performance of the GM$\_$KL, w2g, and w2gm approaches on the SCWS dataset BIBREF6. The dataset consists of 2003 word pairs of polysemous and homonymous words with labels obtained by an average of 10 human scores. The Spearman correlation between the human scores and the model scores are computed. To obtain the model score, the following metrics are used: MaxCos: Maximum cosine similarity among all component pairs of words $w$ and $v$: AvgCos: Average component-wise cosine similarity between the words $w$ and $v$. KL$\_$approx: Formulated as shown in (DISPLAY_FORM8) between the words $w$ and $v$. KL$\_$comp: Maximum component-wise negative KL between words $w$ and $v$: Table TABREF17 compares the performance of the approaches on the SCWS dataset. It is evident from Table TABREF17 that GM$\_$KL achieves better correlation than existing approaches for various metrics on SCWS dataset. Table TABREF18 shows the Spearman correlation values of GM$\_$KL model evaluated on the benchmark word similarity datasets: SL BIBREF20, WS, WS-R, WS-S BIBREF21, MEN BIBREF22, MC BIBREF23, RG BIBREF24, YP BIBREF25, MTurk-287 and MTurk-771 BIBREF26, BIBREF27, and RW BIBREF28. The metric used for comparison is 'AvgCos'. It can be seen that for most of the datasets, GM$\_$KL achieves significantly better correlation score than w2g and w2gm approaches. Other datasets such as MC and RW consist of only a single sense, and hence w2g model performs better and GM$\_$KL achieves next better performance. The YP dataset have multiple senses but does not contain entailed data and hence could not make use of entailment benefits of GM$\_$KL. Table TABREF19 shows the evaluation results of GM$\_$KL model on the entailment datasets such as entailment pairs dataset BIBREF29 created from WordNet with both positive and negative labels, a crowdsourced dataset BIBREF30 of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset BIBREF31. The 'MaxCos' similarity metric is used for evaluation and the best precision and best F1-score is shown, by picking the optimal threshold. Overall, GM$\_$KL performs better than both w2g and w2gm approaches. Conclusion We proposed a KL divergence based energy function for learning multi-sense word embedding distributions modelled as Gaussian mixtures. Due to the intractability of the Gaussian mixtures for the KL divergence measure, we use an approximate KL divergence function. We also demonstrated that the proposed GM$\_$KL approaches performed better than other approaches on the benchmark word similarity and entailment datasets. tocsectionAppendices Approximation for KL divergence between mixtures of gaussians KL between gaussian mixtures $f_{w}(\operatorname{\mathbf {x}})$ and $f_{v}(\operatorname{\mathbf {x}})$ can be decomposed as: BIBREF12 presents KL approximation between gaussian mixtures using product of gaussian approximation method where KL is approximated using product of component gaussians and variational approximation method where KL is approximated by introducing some variational parameters. 
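The SCWS metrics described above can be computed directly from the component means. Below is a small sketch of MaxCos and AvgCos; reading "average component-wise cosine similarity" as the mean over all component pairs is our assumption.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_cos(means_w, means_v):
    """MaxCos: maximum cosine similarity over all component-mean pairs."""
    return max(cosine(mw, mv) for mw in means_w for mv in means_v)

def avg_cos(means_w, means_v):
    """AvgCos: average cosine similarity over all component-mean pairs."""
    sims = [cosine(mw, mv) for mw in means_w for mv in means_v]
    return sum(sims) / len(sims)
```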
The product of component Gaussians approximation method, using Jensen's inequality, provides upper bounds as shown in equations DISPLAY_FORM23 and . The variational approximation method provides lower bounds as shown in equations DISPLAY_FORM24 and DISPLAY_FORM25, where $H$ represents the entropy term; the entropy of the $i^{th}$ component of word $w$ with dimension $D$ is given as $H(f_{w,i}) = \frac{D}{2}\log (2\pi e) + \frac{1}{2}\log |\Sigma _{w,i}|$. In BIBREF13, the authors combine the lower and upper bounds from the approximation methods of BIBREF12 to formulate a stricter bound on the KL between Gaussian mixtures. From equations DISPLAY_FORM23 and DISPLAY_FORM25, a stricter lower bound for the KL between Gaussian mixtures is obtained, as shown in equation DISPLAY_FORM26. From equations and DISPLAY_FORM24, a stricter upper bound for the KL between Gaussian mixtures is obtained, as shown in equation DISPLAY_FORM27. Finally, the KL between Gaussian mixtures is taken as the mean of the KL upper and lower bounds, as shown in equation DISPLAY_FORM28.
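For reference, the following sketch implements the variational approximation of the KL between two diagonal-covariance Gaussian mixtures, which is one of the two approximations of BIBREF12 referred to above; the product-of-Gaussians bound and the final averaging of the two bounds follow the same pattern. It is an independent reimplementation under stated assumptions, not the authors' code.

```python
import numpy as np

def kl_diag(mu1, var1, mu2, var2):
    """Closed-form KL between two diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def kl_gmm_variational(pi_f, mus_f, vars_f, pi_g, mus_g, vars_g):
    """Variational approximation of KL(f || g) for two diagonal-covariance
    Gaussian mixtures, built from component-wise KL terms."""
    total = 0.0
    for a in range(len(pi_f)):
        num = sum(pi_f[b] * np.exp(-kl_diag(mus_f[a], vars_f[a], mus_f[b], vars_f[b]))
                  for b in range(len(pi_f)))
        den = sum(pi_g[c] * np.exp(-kl_diag(mus_f[a], vars_f[a], mus_g[c], vars_g[c]))
                  for c in range(len(pi_g)))
        total += pi_f[a] * np.log(num / den)
    return total
```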
[ "Spearman correlation values of GM_KL model evaluated on the benchmark word similarity datasets.\nEvaluation results of GM_KL model on the entailment datasets such as entailment pairs dataset created from WordNet, crowdsourced dataset of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset.", "Given a query word and component id, the set of nearest neighbours along with their respective component ids are listed" ]
2,220
qasper_e
en
null
c32bcab75bbb243cbd0fa7d6cea385a9aa1220c9fb455f09
What are method improvements of F1 for paraphrase identification?
Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function while at test time, F1 score concerns more about positive examples; (2) the overwhelming effect of easy-negative examples. As pointed out by meng2019dsreg, significantly large number of negative examples also means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learned to distinguish between positive examples and hard-negative examples. The cross-entropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient BIBREF0 or Tversky index BIBREF1. The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the $F_{\beta }$ score, and thus comes with more flexibility. Therefore, We use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. 
This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5 (97.92, +1.86), CTB6 (96.57, +1.80) and UD1.4 (96.98, +2.19) for the POS task; SOTA results on CoNLL03 (93.33, +0.29), OntoNotes5.0 (92.07, +0.96)), MSRA 96.72(+0.97) and OntoNotes4.0 (84.47,+2.36) for the NER task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different training objectives in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. Related Work ::: Data Resample The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights. Related Work ::: Data Imbalance Issue in Object Detection The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Losses ::: Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. The mechanism can be easily extended to multi-class classification. Let $\lbrace x_i\rbrace $ denote a set of instances. 
Each $x_i$ is associated with a golden label vector $y_i = [y_{i0},y_{i1} ]$, where $y_{i1}\in \lbrace 0,1\rbrace $ and $y_{i0}\in \lbrace 0,1\rbrace $ respectively denote the positive and negative classes, and thus $y_i$ can be either $[0,1]$ or $[0,1]$. Let $p_i = [p_{i0},p_{i1} ]$ denote the probability vector, and $p_{i1}$ and $p_{i0}$ respectively denote the probability that a model assigns the positive and negative label to $x_i$. Losses ::: Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: As can be seen from Eq.DISPLAY_FORM8, each $x_i$ contributes equally to the final objective. Two strategies are normally used to address the the case where we wish that not all $x_i$ are treated equal: associating different classes with different weighting factor $\alpha $ or resampling the datasets. For the former, Eq.DISPLAY_FORM8 is adjusted as follows: where $\alpha _i\in [0,1]$ may be set by the inverse class frequency or treated as a hyperparameter to set by cross validation. In this work, we use $\lg (\frac{n-n_t}{n_t}+K)$ to calculate the coefficient $\alpha $, where $n_t$ is the number of samples with class $t$ and $n$ is the total number of samples in the training set. $K$ is a hyperparameter to tune. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extract equal training samples from each class. Both strategies are equivalent to changing the data distribution and thus are of the same nature. Empirically, these two methods are not widely used due to the trickiness of selecting $\alpha $ especially for multi-class classification tasks and that inappropriate selection can easily bias towards rare classes BIBREF32. Losses ::: Dice coefficient and Tversky index Sørensen–Dice coefficient BIBREF0, BIBREF33, dice coefficient (DSC) for short, is a F1-oriented statistic used to gauge the similarity of two sets. Given two sets $A$ and $B$, the dice coefficient between them is given as follows: In our case, $A$ is the set that contains of all positive examples predicted by a specific model, and $B$ is the set of all golden positive examples in the dataset. When applied to boolean data with the definition of true positive (TP), false positive (FP), and false negative (FN), it can be then written as follows: For an individual example $x_i$, its corresponding DSC loss is given as follows: As can be seen, for a negative example with $y_{i1}=0$, it does not contribute to the objective. For smoothing purposes, it is common to add a $\gamma $ factor to both the nominator and the denominator, making the form to be as follows: As can be seen, negative examples, with $y_{i1}$ being 0 and DSC being $\frac{\gamma }{ p_{i1}+\gamma }$, also contribute to the training. Additionally, milletari2016v proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): Another version of DL is to directly compute set-level dice coefficient instead of the sum of individual dice coefficient. We choose the latter due to ease of optimization. Tversky index (TI), which can be thought as the approximation of the $F_{\beta }$ score, extends dice coefficient to a more general case. Given two sets $A$ and $B$, tversky index is computed as follows: Tversky index offers the flexibility in controlling the tradeoff between false-negatives and false-positives. It degenerates to DSC if $\alpha =\beta =0.5$. 
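A hedged sketch of the quantities just described: the class-weighting coefficient (assuming $\lg$ denotes the base-10 logarithm), a smoothed per-example dice loss, and a soft Tversky index that reduces to the dice coefficient when $\alpha = \beta = 0.5$. The exact placement of the smoothing constant in the paper's equations is not reproduced here, so treat the constants as illustrative.

```python
import numpy as np

def class_weight(n_total, n_t, K=1.0):
    """Class weighting coefficient alpha_t = lg((n - n_t)/n_t + K)."""
    return np.log10((n_total - n_t) / n_t + K)

def dsc_loss(p1, y1, gamma=1.0):
    """Smoothed soft-dice loss (1 - DSC) for one example.
    p1: predicted probability of the positive class, y1: gold 0/1 label."""
    return 1.0 - (2.0 * p1 * y1 + gamma) / (p1 + y1 + gamma)

def tversky_index(p1, y1, alpha=0.5, beta=0.5, gamma=1.0):
    """Soft Tversky index; with alpha = beta = 0.5 it reduces to the dice coefficient."""
    tp = p1 * y1
    fp = p1 * (1.0 - y1)
    fn = (1.0 - p1) * y1
    return (tp + gamma) / (tp + alpha * fp + beta * fn + gamma)
```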
The Tversky loss (TL) for the training set $\lbrace x_i,y_i\rbrace $ is thus as follows: Losses ::: Self-adusting Dice Loss Consider a simple case where the dataset consists of only one example $x_i$, which is classified as positive as long as $p_{i1}$ is larger than 0.5. The computation of $F1$ score is actually as follows: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading the final loss to be $(1-p)^{\beta }\log p$. In Table TABREF18, we show the losses used in our experiments, which is described in the next section. Experiments We evaluate the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Baselines in our experiments are optimized by using the standard cross-entropy training objective. Experiments ::: Part-of-Speech Tagging Part-of-speech tagging (POS) is the task of assigning a label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT as the backbone and conduct experiments on three Chinese POS datasets. We report the span-level micro-averaged precision, recall and F1 for evaluation. Hyperparameters are tuned on the corresponding development set of each dataset. Experiments ::: Part-of-Speech Tagging ::: Datasets We conduct experiments on the widely used Chinese Treebank 5.0, 6.0 as well as UD1.4. CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources. CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. UD is the abbreviation of Universal Dependencies, which is a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. 
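The self-adjusting variant can be sketched by replacing the soft probability with its decayed counterpart $(1-p_{i1})\,p_{i1}$, as described above; again, the placement of the smoothing constant is our assumption.

```python
def self_adjusting_dsc_loss(p1, y1, gamma=1.0):
    """Self-adjusting dice loss: the soft probability p1 is scaled by the
    decaying factor (1 - p1) so that easy examples contribute little."""
    q = (1.0 - p1) * p1
    return 1.0 - (2.0 * q * y1 + gamma) / (q + y1 + gamma)
```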
In this work, we use UD1.4 for Chinese POS tagging. Experiments ::: Part-of-Speech Tagging ::: Baselines We use the following baselines: Joint-POS: shao2017character jointly learns Chinese word segmentation and POS. Lattice-LSTM: lattice2018zhang constructs a word-character lattice. Bert-Tagger: devlin2018bert treats part-of-speech as a tagging task. Experiments ::: Part-of-Speech Tagging ::: Results Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets. Experiments ::: Named Entity Recognition Named entity recognition (NER) refers to the task of detecting the span and semantic category of entities from a chunk of text. Our implementation uses the current state-of-the-art BERT-MRC model proposed by xiaoya2019ner as a backbone. For English datasets, we use BERT$_\text{Large}$ English checkpoints, while for Chinese we use the official Chinese checkpoints. We report span-level micro-averaged precision, recall and F1-score. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Named Entity Recognition ::: Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37. CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in BIBREF14. English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as wu2019glyce did. Experiments ::: Named Entity Recognition ::: Baselines We use the following baselines: ELMo: a tagging model from peters2018deep. Lattice-LSTM: lattice2018zhang constructs a word-character lattice, only used in Chinese datasets. CVT: from kevin2018cross, which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. Bert-Tagger: devlin2018bert treats NER as a tagging task. Glyce-BERT: wu2019glyce combines glyph information with BERT pretraining. BERT-MRC: The current SOTA model for both Chinese and English NER datasets proposed by xiaoya2019ner, which formulate NER as machine reading comprehension task. Experiments ::: Named Entity Recognition ::: Results Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. 
We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets. Experiments ::: Machine Reading Comprehension Machine reading comprehension (MRC) BIBREF39, BIBREF40, BIBREF41, BIBREF40, BIBREF42, BIBREF15 has become a central task in natural language understanding. MRC in the SQuAD-style is to predict the answer span in the passage given a question and the passage. In this paper, we choose the SQuAD-style MRC task and report Extract Match (EM) in addition to F1 score on validation set. All hyperparameters are tuned on the development set of each dataset. Experiments ::: Machine Reading Comprehension ::: Datasets The following five datasets are used for MRC task: SQuAD v1.1, SQuAD v2.0 BIBREF4, BIBREF6 and Quoref BIBREF8. SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 allowing no short answer exists in the provided passage. Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. Experiments ::: Machine Reading Comprehension ::: Baselines We use the following baselines: QANet: qanet2018 builds a model based on convolutions and self-attention. Convolution to model local interactions and self-attention to model global interactions. BERT: devlin2018bert treats NER as a tagging task. XLNet: xlnet2019 proposes a generalized autoregressive pretraining method that enables learning bidirectional contexts. Experiments ::: Machine Reading Comprehension ::: Results Table shows the experimental results for MRC tasks. With either BERT or XLNet, our proposed DSC loss obtains significant performance boost on both EM and F1. For SQuADv1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM and achieves 87.65 on EM and 89.51 on F1 for SQuAD v2.0. Moreover, on QuoRef, the proposed method surpasses XLNet results by +1.46 on EM and +1.41 on F1. Another observation is that, XLNet outperforms BERT by a huge margin, and the proposed DSC loss can obtain further performance improvement by an average score above 1.0 in terms of both EM and F1, which indicates the DSC loss is complementary to the model structures. Experiments ::: Paraphrase Identification Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Paraphrase Identification ::: Datasets We conduct experiments on two widely used datasets for PI task: MRPC BIBREF44 and QQP. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (68% positive, 32% for negative). QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (37% positive, 63% negative). Experiments ::: Paraphrase Identification ::: Results Table shows the results for PI task. 
We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP. Ablation Studies ::: The Effect of Dice Loss on Accuracy-oriented Tasks We argue that the most commonly used cross-entropy objective is actually accuracy-oriented, whereas the proposed dice loss (DL) performs as a hard version of F1-score. To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank sentiment classification datasets including SST-2 and SST-5. We fine-tune BERT$_\text{Large}$ with different training objectives. Experiment results for SST are shown in . For SST-5, BERT with CE achieves 55.57 in terms of accuracy, with DL and DSC losses slightly degrade the accuracy performance and achieve 54.63 and 55.19, respectively. For SST-2, BERT with CE achieves 94.9 in terms of accuracy. The same as SST-5, we observe a slight performance drop with DL and DSC, which means that the dice loss actually works well for F1 but not for accuracy. Ablation Studies ::: The Effect of Hyperparameters in Tversky index As mentioned in Section SECREF10, Tversky index (TI) offers the flexibility in controlling the tradeoff between false-negatives and false-positives. In this subsection, we explore the effect of hyperparameters (i.e., $\alpha $ and $\beta $) in TI to test how they manipulate the tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and English QuoRef MRC dataset to examine the influence of tradeoff between precision and recall. Experiment results are shown in Table . The highest F1 for Chinese OntoNotes4.0 is 84.67 when $\alpha $ is set to 0.6 while for QuoRef, the highest F1 is 68.44 when $\alpha $ is set to 0.4. In addition, we can observe that the performance varies a lot as $\alpha $ changes in distinct datasets, which shows that the hyperparameters $\alpha ,\beta $ play an important role in the proposed method. Conclusion In this paper, we alleviate the severe data imbalance issue in NLP tasks. We propose to use dice loss in replacement of the standard cross-entropy loss, which performs as a soft version of F1 score. Using dice loss can help narrow the gap between training objectives and evaluation metrics. Empirically, we show that the proposed training objective leads to significant performance boost for part-of-speech, named entity recognition, machine reading comprehension and paraphrase identification tasks.
[ "Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP", "+0.58" ]
3,566
qasper_e
en
null
b959a2ca0687fbe1c4f11a6fda00eb9341525f1502c9afc6
How do they calculate variance from the model outputs?
Introduction State-of-the-art automatic speech recognition (ASR) systems BIBREF0 have large model capacities and require significant quantities of training data to generalize. Labeling thousands of hours of audio, however, is expensive and time-consuming. A natural question to ask is how to achieve better generalization with fewer training examples. Active learning studies this problem by identifying and labeling only the most informative data, potentially reducing sample complexity. How much active learning can help in large-scale, end-to-end ASR systems, however, is still an open question. The speech recognition community has generally identified the informativeness of samples by calculating confidence scores. In particular, an utterance is considered informative if the most likely prediction has small probability BIBREF1 , or if the predictions are distributed very uniformly over the labels BIBREF2 . Though confidence-based measures work well in practice, less attention has been focused on gradient-based methods like Expected Gradient Length (EGL) BIBREF3 , where the informativeness is measured by the norm of the gradient incurred by the instance. EGL has previously been justified as intuitively measuring the expected change in a model's parameters BIBREF3 .We formalize this intuition from the perspective of asymptotic variance reduction, and experimentally, we show EGL to be superior to confidence-based methods on speech recognition tasks. Additionally, we observe that the ranking of samples scored by EGL is not correlated with that of confidence scoring, suggesting EGL identifies aspects of an instance that confidence scores cannot capture. In BIBREF3 , EGL was applied to active learning on sequence labeling tasks, but our work is the first we know of to apply EGL to speech recognition in particular. Gradient-based methods have also found applications outside active learning. For example, BIBREF4 suggests that in stochastic gradient descent, sampling training instances with probabilities proportional to their gradient lengths can speed up convergence. From the perspective of variance reduction, this importance sampling problem shares many similarities to problems found in active learning. Problem Formulation Denote INLINEFORM0 as an utterance and INLINEFORM1 the corresponding label (transcription). A speech recognition system models the conditional distribution INLINEFORM2 , where INLINEFORM3 are the parameters in the model, and INLINEFORM4 is typically implemented by a Recurrent Neural Network (RNN). A training set is a collection of INLINEFORM5 pairs, denoted as INLINEFORM6 . The parameters of the model are estimated by minimizing the negative log-likelihood on the training set: DISPLAYFORM0 Active learning seeks to augment the training set with a new set of utterances and labels INLINEFORM0 in order to achieve good generalization on a held-out test dataset. In many applications, there is an unlabeled pool INLINEFORM1 which is costly to label in its entirety. INLINEFORM2 is queried for the “most informative” instance(s) INLINEFORM3 , for which the label(s) INLINEFORM4 are then obtained. We discuss several such query strategies below. Confidence Scores Confidence scoring has been used extensively as a proxy for the informativeness of training samples. Specifically, an INLINEFORM0 is considered informative if the predictions are uniformly distributed over all the labels BIBREF2 , or if the best prediction of its label is with low probability BIBREF1 . 
By taking the instances which “confuse” the model, these methods may effectively explore under-sampled regions of the input space. Expected Gradient Length Intuitively, an instance can be considered informative if it results in large changes in model parameters. A natural measure of the change is gradient length, INLINEFORM0 . Motivated by this intuition, Expected Gradient Length (EGL) BIBREF3 picks the instances expected to have the largest gradient length. Since labels are unknown on INLINEFORM1 , EGL computes the expectation of the gradient norm over all possible labelings. BIBREF3 interprets EGL as “expected model change”. In the following section, we formalize the intuition for EGL and show that it follows naturally from reducing the variance of an estimator. Variance in the Asymptote Assume the joint distribution of INLINEFORM0 has the following form, DISPLAYFORM0 where INLINEFORM0 is the true parameter, and INLINEFORM1 is independent of INLINEFORM2 . By selecting a subset of the training data, we are essentially choosing another distribution INLINEFORM3 so that the INLINEFORM4 pairs are drawn from INLINEFORM5 Statistical signal processing theory BIBREF5 states the following asymptotic distribution of INLINEFORM0 , DISPLAYFORM0 where INLINEFORM0 is the Fisher Information Matrix with respect to INLINEFORM1 . Using first order approximation at INLINEFORM2 , we have asymptotically, DISPLAYFORM0 Eq. ( EQREF7 ) indicates that to reduce INLINEFORM0 on test data, we need to minimize the expected variance INLINEFORM1 over the test set. This is called Fisher Information Ratio criteria in BIBREF6 , which itself is hard to optimize. An easier surrogate is to maximize INLINEFORM2 . Substituting Eq. ( EQREF5 ) into INLINEFORM3 , we have INLINEFORM4 which is equivalent to INLINEFORM0 A practical issue is that we do not know INLINEFORM0 in advance. We could instead substitute an estimate INLINEFORM1 from a pre-trained model, where it is reasonable to assume the INLINEFORM2 to be close to the true INLINEFORM3 . The batch selection then works by taking the samples that have largest gradient norms, DISPLAYFORM0 For RNNs, the gradients for each potential label can be obtained by back-propagation. Another practical issue is that EGL marginalizes over all possible labelings, but in speech recognition, the number of labelings scales exponentially in the number of timesteps. Therefore, we only marginalize over the INLINEFORM0 most probable labelings. They are obtained by beam search decoding, as in BIBREF7 . The EGL method in BIBREF3 is almost the same as Eq. ( EQREF8 ), except the gradient's norm is not squared in BIBREF3 . Here we have provided a more formal characterization of EGL to complement its intuitive interpretation as “expected model change” in BIBREF3 . For notational convenience, we denote Eq. ( EQREF8 ) as EGL in subsequent sections. Experiments We empirically validate EGL on speech recognition tasks. In our experiments, the RNN takes in spectrograms of utterances, passing them through two 2D-convolutional layers, followed by seven bi-directional recurrent layers and a fully-connected layer with softmax activation. All recurrent layers are batch normalized. At each timestep, the softmax activations give a probability distribution over the characters. CTC loss BIBREF8 is then computed from the timestep-wise probabilities. A base model, INLINEFORM0 , is trained on 190 hours ( INLINEFORM1 100K instances) of transcribed speech data. 
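A minimal sketch of the resulting acquisition rule: score each unlabeled utterance by the probability-weighted squared gradient norm over its top-$K$ beam hypotheses and keep the highest-scoring ones. The two callables are placeholders for model-specific code, and renormalising the truncated beam distribution is our assumption.

```python
def egl_score(x, beam_decode, grad_norm_sq, k=100):
    """Expected (squared) gradient length for one unlabeled utterance x.

    beam_decode(x, k) -> list of (label_sequence, probability) for the k most
    probable hypotheses; grad_norm_sq(x, y) -> squared norm of the loss gradient
    w.r.t. the model parameters when x is paired with hypothesised label y."""
    hyps = beam_decode(x, k)
    z = sum(p for _, p in hyps) or 1.0  # renormalise the truncated distribution
    return sum((p / z) * grad_norm_sq(x, y) for y, p in hyps)

def select_batch(pool, beam_decode, grad_norm_sq, budget, k=100):
    """Pick the `budget` pool items with the largest EGL scores."""
    ranked = sorted(pool, key=lambda x: egl_score(x, beam_decode, grad_norm_sq, k),
                    reverse=True)
    return ranked[:budget]
```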
Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset. We query labels for the selected subset and incorporate them into training. Learning rates are tuned on a small validation set of 2048 instances. The trained model is then tested on a 156-hour ( INLINEFORM3 100K instances) test set and we report CTC loss, Character Error Rate (CER) and Word Error Rate (WER). The confidence score methods BIBREF1 , BIBREF2 can be easily extended to our setup. Specifically, from the probabilities over the characters, we can compute an entropy per timestep and then average them. This method is denoted as entropy. We could also take the most likely prediction and calculate its CTC loss, normalized by number of timesteps. This method is denoted as pCTC (predicted CTC) in the following sections. We implement EGL by marginalizing over the most likely 100 labels, and compare it with: 1) a random selection baseline, 2) entropy, and 3) pCTC. Using the same base model, each method queries a variable percentage of the unlabeled dataset. The queries are then included into training set, and the model continues training until convergence. Fig. FIGREF9 reports the metrics (Exact values are reported in Table TABREF12 in the Appendix) on the test set as the query percentage varies. All the active learning methods outperform the random baseline. Moreover, EGL shows a steeper, more rapid reduction in error than all other approaches. Specifically, when querying 20% of the unlabeled dataset, EGL has 11.58% lower CER and 11.09% lower WER relative to random. The performance of EGL at querying 20% is on par with random at 40%, suggesting that using EGL can lead to an approximate 50% decrease in data labeling. Similarity between Query Methods It is useful to understand how the three active learning methods differ in measuring the informativeness of an instance. To compare any two methods, we take rankings of informativeness given by these two methods, and plot them in a 2-D ranking-vs-ranking coordinate system. A plot close to the diagonal implies that these two methods evaluate informativeness in a very similar way. Fig. FIGREF11 shows the ranking-vs-ranking plots between pCTC and entropy, EGL and entropy. We observe that pCTC rankings and entropy rankings (Fig. FIGREF11 ) are very correlated. This is likely because they are both related to model uncertainty. In contrast, EGL gives very different rankings from entropy (Fig. FIGREF11 ). This suggests EGL is able to identify aspects of an instance that uncertainty-based measurements cannot capture. We further investigate the samples for which EGL and entropy yield vastly different estimates of informativeness, e.g., the elements in the red circle in Fig. FIGREF11 . These particular samples consist of short utterances containing silence (with background noise) or filler words. Further investigation is required to understand whether these samples are noisy outliers or whether they are in fact important for training end-to-end speech recognition systems. Conclusion and Future Work We formally explained EGL from a variance reduction perspective and experimentally tested its performance on end-to-end speech recognition systems. Initial experiments show a notable gain over random selection, and that it outperforms confidence score methods used in the ASR community. We also show EGL measures sample informativeness in a very different way from confidence scores, giving rise to open research questions. 
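The two confidence baselines can be sketched as follows, given the per-timestep character distributions; the CTC loss computation itself is left as a placeholder callable.

```python
import numpy as np

def entropy_score(char_probs):
    """Average per-timestep entropy of the softmax outputs.
    char_probs: array of shape [T, V], a distribution over characters per step."""
    eps = 1e-12
    ent = -np.sum(char_probs * np.log(char_probs + eps), axis=1)  # [T]
    return float(np.mean(ent))

def pctc_score(ctc_loss_fn, x, best_prediction, num_timesteps):
    """pCTC: CTC loss of the most likely prediction, normalised by length.
    ctc_loss_fn(x, y) is a placeholder for the model's CTC loss computation."""
    return ctc_loss_fn(x, best_prediction) / float(num_timesteps)
```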
All the experiments reported here query all samples in a single batch. It is also worth considering the effects of querying samples in a sequential manner. In the future, we will further validate the approach with sequential queries and seek to make the informativeness measure robust to outliers.
[ "reducing the variance of an estimator, EGL method in BIBREF3 is almost the same as Eq. ( EQREF8 ), except the gradient's norm is not squared in BIBREF3", " Fisher Information Ratio" ]
1,671
qasper_e
en
null
83246d680e84d965de12202eeca2955546e9261630d543c5
What architecture does the encoder have?
Introduction This paper describes our approach and results for Task 2 of the CoNLL–SIGMORPHON 2018 shared task on universal morphological reinflection BIBREF0 . The task is to generate an inflected word form given its lemma and the context in which it occurs. Morphological (re)inflection from context is of particular relevance to the field of computational linguistics: it is compelling to estimate how well a machine-learned system can capture the morphosyntactic properties of a word given its context, and map those properties to the correct surface form for a given lemma. There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table TABREF1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances. The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL–SIGMORPHON 2016, BIBREF1 ), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multi-task learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages. In analysing the performance of our system, we found that encoding the full context improves performance considerably for all languages: 11.15 percentage points on average, although it also highly increases the variance in results. Multi-task learning, paired with multilingual training and subsequent monolingual finetuning, scored highest for five out of seven languages, improving accuracy by another 9.86% on average. System Description Our system is a modification of the provided CoNLL–SIGMORPHON 2018 baseline system, so we begin this section with a reiteration of the baseline system architecture, followed by a description of the three augmentations we introduce. Baseline The CoNLL–SIGMORPHON 2018 baseline is described as follows: The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism. 
To that we add a few details regarding model size and training schedule: the number of LSTM layers is one; embedding size, LSTM layer size and attention layer size is 100; models are trained for 20 epochs; on every epoch, training data is subsampled at a rate of 0.3; LSTM dropout is applied at a rate 0.3; context word forms are randomly dropped at a rate of 0.1; the Adam optimiser is used, with a default learning rate of 0.001; and trained models are evaluated on the development data (the data for the shared task comes already split in train and dev sets). Our system Here we compare and contrast our system to the baseline system. A diagram of our system is shown in Figure FIGREF4 . The idea behind this modification is to provide the encoder with access to all morpho-syntactic cues present in the sentence. In contrast to the baseline, which only encodes the immediately adjacent context of a target word, we encode the entire context. All context word forms, lemmas, and MSD tags (in Track 1) are embedded in their respective high-dimensional spaces as before, and their embeddings are concatenated. However, we now reduce the entire past context to a fixed-size vector by encoding it with a forward LSTM, and we similarly represent the future context by encoding it with a backwards LSTM. We introduce an auxiliary objective that is meant to increase the morpho-syntactic awareness of the encoder and to regularise the learning process—the task is to predict the MSD tag of the target form. MSD tag predictions are conditioned on the context encoding, as described in UID15 . Tags are generated with an LSTM one component at a time, e.g. the tag PRO;NOM;SG;1 is predicted as a sequence of four components, INLINEFORM0 PRO, NOM, SG, 1 INLINEFORM1 . For every training instance, we backpropagate the sum of the main loss and the auxiliary loss without any weighting. As MSD tags are only available in Track 1, this augmentation only applies to this track. The parameters of the entire MSD (auxiliary-task) decoder are shared across languages. Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages. After 20 epochs of multilingual training, we perform 5 epochs of monolingual finetuning for each language. For this phase, we reduce the learning rate to a tenth of the original learning rate, i.e. 0.0001, to ensure that the models are indeed being finetuned rather than retrained. We keep all hyperparameters the same as in the baseline. Training data is split 90:10 for training and validation. We train our models for 50 epochs, adding early stopping with a tolerance of five epochs of no improvement in the validation loss. We do not subsample from the training data. We train models for 50 different random combinations of two to three languages in Track 1, and 50 monolingual models for each language in Track 2. 
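A minimal PyTorch sketch of the context-encoding idea described above: the past context is reduced with a forward LSTM and the future context with a backward LSTM, and the two final states are concatenated into a fixed-size context vector. The full system additionally concatenates lemma and MSD embeddings and feeds the context vector into the character-level encoder-decoder, which is omitted here; sizes follow the stated hyperparameters, but the interface is ours.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Encodes the full left and right context into one fixed-size vector."""
    def __init__(self, vocab_size, emb_size=100, hidden_size=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.fwd = nn.LSTM(emb_size, hidden_size, batch_first=True)
        self.bwd = nn.LSTM(emb_size, hidden_size, batch_first=True)

    def forward(self, past_ids, future_ids):
        # past_ids, future_ids: [batch, seq_len] token indices
        _, (h_past, _) = self.fwd(self.embed(past_ids))
        # encode the future context right-to-left
        _, (h_future, _) = self.bwd(self.embed(torch.flip(future_ids, dims=[1])))
        return torch.cat([h_past[-1], h_future[-1]], dim=-1)  # [batch, 2*hidden]

# Training combines the two objectives without weighting, per the description:
# loss = inflection_loss + msd_loss
```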
Instead of picking the single model that performs best on the development set and thus risking to select a model that highly overfits that data, we use an ensemble of the five best models, and make the final prediction for a given target form with a majority vote over the five predictions. Results and Discussion Test results are listed in Table TABREF17 . Our system outperforms the baseline for all settings and languages in Track 1 and for almost all in Track 2—only in the high resource setting is our system not definitively superior to the baseline. Interestingly, our results in the low resource setting are often higher for Track 2 than for Track 1, even though contextual information is less explicit in the Track 2 data and the multilingual multi-tasking approach does not apply to this track. We interpret this finding as an indicator that a simpler model with fewer parameters works better in a setting of limited training data. Nevertheless, we focus on the low resource setting in the analysis below due to time limitations. As our Track 1 results are still substantially higher than the baseline results, we consider this analysis valid and insightful. Ablation Study We analyse the incremental effect of the different features in our system, focusing on the low-resource setting in Track 1 and using development data. Encoding the entire context with an LSTM highly increases the variance of the observed results. So we trained fifty models for each language and each architecture. Figure FIGREF23 visualises the means and standard deviations over the trained models. In addition, we visualise the average accuracy for the five best models for each language and architecture, as these are the models we use in the final ensemble prediction. Below we refer to these numbers only. The results indicate that encoding the full context with an LSTM highly enhances the performance of the model, by 11.15% on average. This observation explains the high results we obtain also for Track 2. Adding the auxiliary objective of MSD prediction has a variable effect: for four languages (de, en, es, and sv) the effect is positive, while for the rest it is negative. We consider this to be an issue of insufficient data for the training of the auxiliary component in the low resource setting we are working with. We indeed see results improving drastically with the introduction of multilingual training, with multilingual results being 7.96% higher than monolingual ones on average. We studied the five best models for each language as emerging from the multilingual training (listed in Table TABREF27 ) and found no strong linguistic patterns. The en–sv pairing seems to yield good models for these languages, which could be explained in terms of their common language family and similar morphology. The other natural pairings, however, fr–es, and de–sv, are not so frequent among the best models for these pairs of languages. Finally, monolingual finetuning improves accuracy across the board, as one would expect, by 2.72% on average. The final observation to be made based on this breakdown of results is that the multi-tasking approach paired with multilingual training and subsequent monolingual finetuning outperforms the other architectures for five out of seven languages: de, en, fr, ru and sv. For the other two languages in the dataset, es and fi, the difference between this approach and the approach that emerged as best for them is less than 1%. 
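The ensemble step amounts to a simple majority vote over the five best models; the sketch below assumes a `predict` interface and breaks ties by model order, neither of which is specified in the paper.

```python
from collections import Counter

def ensemble_predict(models, lemma, context):
    """Majority vote over the predictions of the selected models."""
    predictions = [m.predict(lemma, context) for m in models]  # placeholder interface
    return Counter(predictions).most_common(1)[0][0]
```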
The overall improvement of the multilingual multi-tasking approach over the baseline is 18.30%. Error analysis Here we study the errors produced by our system on the English test set to better understand the remaining shortcomings of the approach. A small portion of the wrong predictions point to an incorrect interpretation of the morpho-syntactic conditioning of the context, e.g. the system predicted plan instead of plans in the context Our _ include raising private capital. The majority of wrong predictions, however, are nonsensical, like bomb for job, fify for fixing, and gnderrate for understand. This observation suggests that generally the system did not learn to copy the characters of lemma into inflected form, which is all it needs to do in a large number of cases. This issue could be alleviated with simple data augmentation techniques that encourage autoencoding BIBREF2 . MSD prediction Figure FIGREF32 summarises the average MSD-prediction accuracy for the multi-tasking experiments discussed above. Accuracy here is generally higher than on the main task, with the multilingual finetuned setup for Spanish and the monolingual setup for French scoring best: 66.59% and 65.35%, respectively. This observation illustrates the added difficulty of generating the correct surface form even when the morphosyntactic description has been identified correctly. We observe some correlation between these numbers and accuracy on the main task: for de, en, ru and sv, the brown, pink and blue bars here pattern in the same way as the corresponding INLINEFORM0 's in Figure FIGREF23 . One notable exception to this pattern is fr where inflection gains a lot from multilingual training, while MSD prediction suffers greatly. Notice that the magnitude of change is not always the same, however, even when the general direction matches: for ru, for example, multilingual training benefits inflection much more than in benefits MSD prediction, even though the MSD decoder is the only component that is actually shared between languages. This observation illustrates the two-fold effect of multi-task training: an auxiliary task can either inform the main task through the parameters the two tasks share, or it can help the main task learning through its regularising effect. Related Work Our system is inspired by previous work on multi-task learning and multi-lingual learning, mainly building on two intuitions: (1) jointly learning related tasks tends to be beneficial BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 ; and (2) jointly learning related languages in an MTL-inspired framework tends to be beneficial BIBREF8 , BIBREF9 , BIBREF10 . In the context of computational morphology, multi-lingual approaches have previously been employed for morphological reinflection BIBREF2 and for paradigm completion BIBREF11 . In both of these cases, however, the available datasets covered more languages, 40 and 21, respectively, which allowed for linguistically-motivated language groupings and for parameter sharing directly on the level of characters. BIBREF10 explore parameter sharing between related languages for dependency parsing, and find that sharing is more beneficial in the case of closely related languages. Conclusions In this paper we described our system for the CoNLL–SIGMORPHON 2018 shared task on Universal Morphological Reinflection, Task 2, which achieved the best performance out of all systems submitted, an overall accuracy of 49.87. 
We showed in an ablation study that this is due to three core innovations, which extend a character-based encoder-decoder model: (1) a wide context window, encoding the entire available context; (2) multi-task learning with the auxiliary task of MSD prediction, which acts as a regulariser; (3) a multilingual approach, exploiting information across languages. In future work we aim to gain better understanding of the increase in variance of the results introduced by each of our modifications and the reasons for the varying effect of multi-task learning for different languages. Acknowledgements We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
[ "LSTM", "LSTM" ]
2,289
qasper_e
en
null
181a2d144f719517ed2033408c263c328c3c7eb9cddc78c6
What kind of questions are present in the dataset?
Introduction A large majority of the human knowledge is recorded through text documents. That is why ability for a system to automatically infer information from text without any structured data has become a major challenge. Answering questions about a given document is a relevant proxy task that has been proposed as a way to evaluate the reading ability of a given model. In this configuration, a text document such as a news article, a document from Wikipedia or any type of text is presented to a machine with an associated set of questions. The system is then expected to answer these questions and evaluated by its accuracy on this task. The machine reading framework is very general and we can imagine a large panel of questions that can possibly handle most of the standard natural language processing tasks. For example, the task of named entities recognition can be formulated as a machine reading one where your document is the sentence and the question would be 'What are the named entities mentioned in this sentence?'. These natural language interactions are an important objective for reading systems. Recently, many datasets have been proposed to build and evaluate reading models BIBREF0 , BIBREF1 . From cloze style questions BIBREF2 to open questions BIBREF3 , from synthetic data BIBREF4 to human written articles BIBREF5 , many styles of documents and questions have been proposed to challenge reading models. The correct answer to the questions proposed in most of these datasets is a span of text of the source document, which can be restricted to a single word in several cases. It means that the answer should explicitly be present in the source document and that the model should be able to locate it. Different models have already shown superhuman performance on several of these datasets and particularly on the SQuAD dataset composed of Wikipedia articles BIBREF6 , BIBREF7 . However, some limits of such models have been highlighted when they encounter perturbations into the input documents BIBREF8 . Indeed almost all of the state of the art models on the SQuAD dataset suffer from a lack of robustness against adversarial examples. Once the model is trained, a meaningless sentence added at the end of the text document can completely disturb the reading system. Conversely, these adversarial examples do not seem to fool a human reader who will be capable of answering the questions as well as without this perturbation. One possible explanation of this phenomenon is that computers are good at extracting patterns in the document that match the representation of the question. If multiple spans of the documents look similar to the questions, the reader might not be able to decide which one is relevant. Moreover, Wikipedia articles tend to be written with the same standard writing style, factual, unambiguous. Such writing style tends to favor the pattern matching between the questions and the documents. This format of documents/questions has certainly influenced the design of the comprehension models that have been proposed so far. Most of them are composed of stacked attention layers that match question and document representations. Following concepts proposed in the 20 bAbI tasks BIBREF4 or in the visual question-answering dataset CLEVR BIBREF9 , we think that the challenge, limited to the detection of relevant passages in a document, is only the first step in building systems that truly understand text. 
The second step is the ability to reason with the relevant information extracted from a document. To set up this challenge, we propose to leverage a corpus of hotel reviews that requires reasoning skills to answer natural language questions. The reviews we used were extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 . In the original data, each review comes with a set of rated aspects among the seven available: Business service, Check in / Front Desk, Cleanliness, Location, Room, Sleep Quality, Value, and, for all reviews, an Overall rating. In this article we propose to exploit these data to create a question-answering dataset that challenges 8 competencies of the reader. Our contributions can be summarized as follows: Machine comprehension datasets ReviewQA is proposed as a novel addition to the collection of existing datasets. Indeed, a large panel of available datasets that evaluate models on different types of documents can only be valuable for designing efficient models and learning protocols. In the following part, we describe several of these datasets. SQuAD: The Stanford Question Answering Dataset (SQuAD), introduced in BIBREF0 , is a large dataset of natural questions over the 500 most popular articles of Wikipedia. All the questions have been crowdsourced and the answers are spans of text extracted from the source documents. This dataset has been very popular over the last two years, and the performance of the proposed architectures has rapidly increased until several models surpassed the human score. Indeed, in the original paper human performance was measured at 82.304 points on the exact match metric, and at the time of writing four models already have a higher score. On the other hand, BIBREF8 has shown that these models suffer from a lack of robustness against adversarial examples that are meaningless from a human point of view. This suggests the need for a more challenging dataset that will allow developing stronger reasoning architectures. NewsQA: NewsQA BIBREF1 is a dataset very similar to SQuAD. It contains 120.000 human-generated questions over 12.000 articles from CNN, originally introduced in BIBREF5 . It has been designed to be more challenging than SQuAD, with questions that might require extracting multiple spans of text or might not be answerable at all. WikiHop and MedHop: These are two recent datasets introduced in BIBREF13 . Unlike SQuAD and NewsQA, important facts are spread out across multiple documents and, in order to answer a question, it is necessary to jump over a set of passages to collect the required information. The relevant passages are not explicitly marked in the data, so this dataset measures the ability of a model to navigate across multiple documents. The questions come with a set of candidates which are all present in the text. MS Marco: This dataset was released in BIBREF14 . The documents come from the internet and the questions are real user queries asked through the Bing search engine. The dataset contains around 100.000 queries and each of them comes with a set of approximately 10 relevant passages. As with SQuAD, several models already achieve superhuman performance on this dataset. Facebook bAbI tasks: This is a set of 20 toy tasks proposed in BIBREF4 and designed to measure text understanding. Each task requires a certain capability to be completed, such as induction, deduction and more. 
Documents are synthetic stories, composed of a few sentences that describe a set of actions. This dataset was one of the first attempts to introduce a general set of prerequisite capabilities required for the reading task. Although it has been a very challenging framework, beneficial to the emergence of the attention mechanism inside reading architectures, a Gated End-to-End Memory Network BIBREF15 now succeeds in almost all of the 20 tasks. One possible reason is that the data are synthetic, without noise or ambiguity. We propose a comparable framework with understanding and reasoning tasks based on user-generated comments that are much more realistic and that require language competencies to be understood. CLEVR: Beyond textual question-answering, Visual Question-Answering (VQA) has been widely studied during the last couple of years. More recently, the problem of relational reasoning has been introduced through this dataset BIBREF9 . The main original idea was to introduce relational reasoning questions over object shapes and placements. This dataset has already motivated the development of original deep models. To the best of our knowledge, no natural language question-answering corpus has been designed to investigate such capabilities. As we will present in the remainder of this paper, we think sentiment analysis is particularly suited for this task and we will introduce a novel machine reading corpus with such capability requirements. Attention-based models for aspect-based sentiment analysis Sentiment analysis is one of the historical tasks of Natural Language Processing. It is an important challenge for companies, restaurants and hotels that aim to analyze customer satisfaction regarding products and quality of service. Given a text document, the objective is to predict its overall polarity, which is generally positive, negative or neutral. This analysis gives a quick overview of the general sentiment over a set of documents, but the framework tends to be restrictive. Indeed, a single document often expresses multiple opinions on different aspects. For instance, in the sentence The fish was very good but the service was terrible, there is no single dominant sentiment, and a finer analysis is needed. The task of aspect-based sentiment analysis aims to predict the polarity of a sentence with respect to a given aspect. In the previous example a positive polarity should be associated with the aspect food, while a negative sentiment is expressed regarding the quality of the service. The idea of using models originally designed for question-answering for the sentiment analysis task was introduced in BIBREF16 , BIBREF17 . In these papers, several adaptations of the end-to-end memory network (MemN2N) BIBREF18 are used to predict the polarity of a review regarding a given aspect. In that configuration, the review is encoded into the memory cells and the controller, usually initialized with a representation of the question, is instead initialized with a representation of the aspect. The analysis of the attention between the values of the controller and the document has shown interesting results, highlighting the relevant parts of a document with respect to an aspect. ReviewQA dataset We think that evaluating the task of sentiment analysis through a question-answering setup is a relevant playground for machine reading research. Indeed, natural language questions about the different aspects of the targeted venues are the typical kind of questions we want to be able to ask a system. 
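As a rough illustration of the memory-attention mechanism that the MemN2N adaptations described above rely on, here is a toy numpy sketch of a single memory hop; real implementations add learned embedding matrices, temporal encoding and several stacked hops, and all names below are illustrative rather than taken from the papers cited.

```python
import numpy as np

def memn2n_hop(controller, memory_in, memory_out):
    """One hop of end-to-end memory attention (toy, dense numpy version).

    controller: (d,)   current controller state (e.g., the encoded aspect or question)
    memory_in:  (n, d) input-memory encodings of the n review sentences
    memory_out: (n, d) output-memory encodings of the same sentences
    """
    scores = memory_in @ controller              # match controller against input memory
    attention = np.exp(scores - scores.max())
    attention /= attention.sum()                 # softmax over sentences
    response = attention @ memory_out            # re-weighted output memory
    return controller + response                 # updated controller (some variants add a linear map)

rng = np.random.default_rng(0)
controller = rng.normal(size=8)
updated = memn2n_hop(controller, rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
```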
In this context, we introduce a set of reasoning question types over the relationships between aspects. We propose ReviewQA, a dataset of natural language questions over hotel reviews. These questions are divided into 8 groups according to the competency required to answer them. In this section, we describe each task and the process followed to generate the dataset. Original data We used a set of reviews extracted from the TripAdvisor website and originally proposed in BIBREF10 and BIBREF11 . This corpus is available at http://www.cs.virginia.edu/~hw5x/Data/LARA/TripAdvisor/TripAdvisorJson.tar.bz2. Each review comes with the name of the associated hotel, a title, an overall rating, a comment and a list of rated aspects. From 0 to 7 aspects, among value, room, location, cleanliness, check-in/front desk, service and business service, can be rated in a review. Figure FIGREF8 displays a review extracted from this dataset. Relational reasoning competencies Objective: Starting with the original corpus, we aim at building a machine reading task where natural language questions challenge the model on its understanding of the reviews. Indeed, learning relational reasoning competencies over natural language documents is a major challenge for current reading models. These original raw data allow us to generate relational questions that may require a global understanding of the comment and reasoning skills to be handled. For example, a question like What is the best aspect rated in this comment? cannot easily be answered without a deep understanding of the review. It is necessary to capture all the aspects mentioned in the text, to predict their ratings and finally to select the best one. The tasks and the dataset we propose are publicly available at http://www.europe.naverlabs.com/Blog/ReviewQA-A-novel-relational-aspect-based-opinion-dataset-for-machine-reading We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and different levels of understanding of the document to be answered well. For instance, detecting whether an aspect is mentioned in a review requires less understanding of the review than explicitly predicting the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task. We also provide the expected type of the answer (yes/no question, rating question...), which can be an additional tool to analyze the errors of the readers. Construction of the dataset We sample 100.000 reviews from the original corpus. Figure FIGREF12 presents the distribution of the number of words per review in the dataset. We explicitly favor reviews which contain a large number of words; on average, a review contains 200 words. Indeed, long reviews are the most likely to contain challenging relations between different aspects. A short review which deals with only a few aspects is less relevant to the challenge we want to propose in this dataset. Figure FIGREF14 displays the distribution of the ratings per aspect in the 100.000 reviews on which we based our dataset. We can see that the average values of these ratings tend to be quite high. This could have introduced a bias if it had not been the case for all the aspects. 
For example, we do not want the model to learn that, in general, the service is rated better than the location, and then answer without looking at the document. Since this situation is the same for all the aspects, the relational tasks introduced in this dataset remain highly relevant. Then we randomly select 6 tasks for each review (the same task can be selected multiple times) and randomly select a natural language question that corresponds to each task. The questions are based on human-generated patterns that we have crowdsourced in order to produce a dataset as rich as possible. To this end, we have generated several patterns that correspond to the capabilities we wanted to express in a given question and we have crowdsourced rephrasings of these patterns. The final dataset we propose is composed of more than 500.000 questions about 100.000 reviews. Table TABREF13 shows the repartition of the documents and queries into the train and test sets. Each review comes with a maximum of 6 questions, sometimes fewer when it is not possible to generate all of them. For example, if only two or three aspects are mentioned in a review, we will only be able to generate a small set of relational questions. Figure FIGREF15 depicts the repartition of the answers in the generated dataset. A majority of the tasks we introduce, even though they may require a high level of understanding of the document and the question, are binary questions. This means that in the generated dataset the answers yes and no tend to be more frequent than the others. To better balance the distribution of the answers, we chose to assign a higher sampling probability to tasks 5, 6, 7.1 and 8. Indeed, these tasks are not binary questions and require an aspect name as the answer. Figure FIGREF17 represents the repartition of question types in our dataset, and figure FIGREF15 shows the repartition of the answers. Paraphrase augmentation using backtranslation In order to generate more paraphrases of the questions, we used a backtranslation method to enrich them. The idea is to use a translation model to translate our human-generated questions into another language, and then translate them back to English. This double translation introduces rewordings of the questions that we can integrate into the dataset. This approach has been used in BIBREF7 to perform data augmentation on the training set. For this purpose, we have trained a fairseq BIBREF19 model to translate sentences from English to French and from French to English. In order to preserve the quality of the sentences, we only keep the most probable translation of each original sentence. Indeed, a beam search is used during translation to predict the most probable translations, which means that each translation comes with an associated probability. By selecting only the first translation, we almost double the number of questions without degrading their quality (a sketch of this round-trip procedure is given below). Models In this section, we present the performance of four different models on our dataset: a logistic regression and three neural models. The first neural model is a basic LSTM BIBREF20 , the second a MemN2N BIBREF18 and the third is a model of our own design. This last model reuses the encoding layers of the R-net BIBREF12 , and we replace the final layers with a projection layer that selects the answer among the set of candidates instead of pointing to the answer directly in the source document. 
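A minimal sketch of the round-trip translation idea referenced above. The original work trains its own fairseq English-French models; the snippet below substitutes publicly available MarianMT checkpoints from the Hugging Face hub as stand-ins (the model identifiers are an assumption, not the authors' setup) and keeps only the top beam hypothesis in each direction.

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed stand-in checkpoints; the paper itself uses custom fairseq models.
EN_FR = "Helsinki-NLP/opus-mt-en-fr"
FR_EN = "Helsinki-NLP/opus-mt-fr-en"

def load(name):
    return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

tok_ef, mod_ef = load(EN_FR)
tok_fe, mod_fe = load(FR_EN)

def translate(sentences, tok, mod):
    batch = tok(sentences, return_tensors="pt", padding=True, truncation=True)
    # Beam search; keep only the single most probable hypothesis per sentence.
    out = mod.generate(**batch, num_beams=5, num_return_sequences=1)
    return tok.batch_decode(out, skip_special_tokens=True)

questions = ["What is the best aspect rated in this comment?"]
french = translate(questions, tok_ef, mod_ef)
paraphrases = translate(french, tok_fe, mod_fe)   # backtranslated rewordings
```

Keeping only the top hypothesis mirrors the quality-preserving choice described in the text: each question yields at most one additional paraphrase, roughly doubling the question set.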
Logistic regression: To produce the representation of the input, we concatenate the bag-of-words representation of the document with the bag-of-words representation of the question. This produces an array of size INLINEFORM0 where INLINEFORM1 is the vocabulary size. Then we use a logistic regression to select the most probable answer among the INLINEFORM2 possibilities (a sketch of this baseline is given below). LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Then we feed an LSTM network with this vector and use its final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision. End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. A controller, initialized with the encoding of the question, is used to calculate an attention between this controller and the representation of the document in the input memory. This attention is then used to re-weight the representation of the document in the output memory. The response from the output memory is then used to update the controller. After that, either a matrix is used to project this representation into the answer space, or the controller is used to go through another hop of memory. This architecture allows the model to sequentially look into the initial document, seeking important information with respect to the current state of its controller. This model achieves very good performance on the 20 bAbI tasks dataset. Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 . The overall architecture is composed of 4 stacked layers: an encoding layer, a question/document attention layer, a self-attention layer and a projection layer. The following paragraphs briefly describe the purpose of each of these layers. Encoding: The sentence is tokenized into words. Each token is represented by the concatenation of its embedding vector and the final state of a bidirectional recurrent network over the characters of this word. Finally, another bidirectional RNN on top of this representation produces the encoding of the document and the question. Question/document attention: We apply a question/document attention layer that matches the representation of the question with each token of the document individually, outputting an attention that gives more weight to the tokens of the document that are important with respect to the question. Self-attention layer: The previous layer has built a question-aware representation of the document. One problem with such a representation is that, at this point, each token only has good knowledge of its closest neighbors. To tackle this problem, BIBREF12 proposed a self-attention layer that matches each individual token with all the other tokens of the document. In this way, each token becomes aware of a larger context. Output layer: A bidirectional RNN is applied on top of the last layer and we use its final state as the representation of the input. We use a projection matrix to project this representation into the answer space and select the most probable answer. Training details We propose to train these models on the entire set of tasks and then to measure the overall performance and the accuracy on each individual task. In all the models, we use the Adam optimizer BIBREF21 with a learning rate of 0.01 and the batch size is set to 64. 
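A minimal sketch of the bag-of-words logistic-regression baseline described above, assuming the candidate answers form a small closed set; the toy data, feature pipeline and hyper-parameters are illustrative and not those of the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training pairs; in the real setting these come from the ReviewQA train split.
docs      = ["The room was clean but the location was noisy.",
             "Great value, friendly service."]
questions = ["Is the aspect service mentioned in this review?",
             "Is the aspect service mentioned in this review?"]
answers   = ["no", "yes"]

vec_doc, vec_q = CountVectorizer(), CountVectorizer()
X_doc = vec_doc.fit_transform(docs).toarray()
X_q   = vec_q.fit_transform(questions).toarray()
X = np.hstack([X_doc, X_q])            # concatenated BOW of document and question

clf = LogisticRegression(max_iter=1000)
clf.fit(X, answers)                    # one class per candidate answer

x_new = np.hstack([vec_doc.transform(["Lovely staff and very helpful service."]).toarray(),
                   vec_q.transform(["Is the aspect service mentioned in this review?"]).toarray()])
print(clf.predict(x_new))
```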
All the parameters of the neural models are initialized from a Gaussian distribution with mean 0 and a standard deviation of 0.01. The dimension of the word embeddings in the deep projective reader and the LSTM model is 300 and we use GloVe pre-trained vectors ( BIBREF22 ). We use a MemN2N with 5 memory hops and a linear start of 5 epochs. The reviews are split by sentence and each memory block corresponds to one sentence. Each sentence is represented by its bag-of-words representation augmented with a temporal encoding, as suggested in BIBREF18 . Model performance Table TABREF19 displays the performance of the 4 baselines on the ReviewQA test set. These results are the performance achieved by our own implementation of these 4 models. According to our results, the simple LSTM network and the MemN2N perform very poorly on this dataset, especially on the most advanced reasoning tasks. Indeed, task 5, which corresponds to the prediction of the exact rating of an aspect, seems to be very challenging for these models. Perhaps the sentence-level tokenization used to create the memory blocks of the MemN2N, which is appropriate in the case of the bAbI tasks, is not a good representation of the documents when it has to handle human-generated comments. In contrast, the logistic regression achieves reasonable performance on these tasks and does not suffer from catastrophic performance on any task. Its worst result comes on task 6, and one of the reasons is probably that this architecture is not designed to predict a list of answers. On the contrary, the deep projective reader achieves encouraging results on this dataset. It outperforms all the other baselines, with very good scores on the first four tasks. The question/document and document/document attention layers proposed in BIBREF12 once again seem to produce rich encodings of the inputs which are relevant for our projection layer. Conclusion In this paper, we formalize the sentiment analysis task through the framework of machine reading and release ReviewQA, a relational question-answering corpus. This dataset allows evaluating a set of relational reasoning skills through natural language questions. It is composed of a large panel of human-generated questions. Moreover, we propose to augment the dataset with backtranslated reformulations of these questions. Finally, we evaluate 4 models on this dataset, including a projective model of our own design that appears to be a strong baseline for this dataset. We expect that this large dataset will encourage the research community to develop reasoning models and evaluate them on this set of tasks. Acknowledgment We thank Vassilina Nikoulina and Stéphane Clinchant for their help regarding the backtranslation rewording of the questions.
[ "These 8 tasks require different competencies and a different level of understanding of the document to be well answered" ]
3,817
qasper_e
en
null
006bf0292d7a46cfbe4bc50f63e92d34411ea060f5b547ec
Were other baselines tested to compare with the neural baseline?
Introduction Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents. With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. [1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy. Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain. Related Work Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. 
Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions. Our work is also related to reading comprehension in the open domain, which is frequently based upon Wikipedia passages BIBREF16, BIBREF17, BIBREF15, BIBREF30 and news articles BIBREF20, BIBREF31, BIBREF32. Table.TABREF4 presents the desirable attributes our dataset shares with past approaches. This work is also tied into research in applying NLP approaches to legal documents BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39. While privacy policies have legal implications, their intended audience consists of the general public rather than individuals with legal expertise. This arrangement is problematic because the entities that write privacy policies often have different goals than the audience. feng2015applying, tan-EtAl:2016:P16-1 examine question answering in the insurance domain, another specialized domain similar to privacy, where the intended audience is the general public. Data Collection We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users). Data Collection ::: Crowdsourced Question Elicitation The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document. Instead, crowdworkers are presented with public information about a mobile application available on the Google Play Store including its name, description and navigable screenshots. Figure FIGREF9 shows an example of our user interface. Crowdworkers are asked to imagine they have access to a trusted third-party privacy assistant, to whom they can ask any privacy question about a given mobile application. We use the Amazon Mechanical Turk platform and recruit crowdworkers who have been conferred “master” status and are located within the United States of America. 
Turkers are asked to provide five questions per mobile application, and are paid $2 per assignment, taking ~eight minutes to complete the task. Data Collection ::: Answer Selection To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked. Data Collection ::: Analysis Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts. Table TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document. Data Collection ::: Analysis ::: Categories of Questions Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49: First Party Collection/Use: What, why and how information is collected by the service provider Third Party Sharing/Collection: What, why and how information shared with or collected by third parties Data Security: Protection measures for user information Data Retention: How long user information will be stored User Choice/Control: Control options available to users User Access, Edit and Deletion: If/how users can access, edit or delete information Policy Change: Informing users if policy information has been changed International and Specific Audiences: Practices pertaining to a specific group of users Other: General text, contact information or practices not covered by other categories. For each question, domain experts indicate one or more relevant OPP-115 categories. We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. 
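A minimal sketch of the category-aggregation rule just described; the function name and input format are illustrative, not taken from the released annotation tooling.

```python
from collections import Counter

def aggregate_category(annotations):
    """annotations: list of OPP-115 category sets, one set per expert annotator."""
    counts = Counter(cat for cats in annotations for cat in set(cats))
    # A category is relevant if at least two annotators chose it.
    relevant = [cat for cat, n in counts.items() if n >= 2]
    if relevant:
        return relevant
    # Otherwise fall back to 'Other' if at least one annotator chose it.
    if counts.get("Other", 0) >= 1:
        return ["Other"]
    return []  # no agreement

print(aggregate_category([{"Data Security"}, {"Data Security", "Other"}, {"Data Retention"}]))
# ['Data Security']
```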
First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant. Data Collection ::: Analysis ::: Answer Validation When do experts disagree? We would like to analyze the reasons for potential disagreement on the annotation task, to ensure disagreements arise due to valid differences in opinion rather than lack of adequate specification in annotation guidelines. It is important to note that the annotators are experts rather than crowdworkers. Accordingly, their judgements can be considered valid, legally-informed opinions even when their perspectives differ. For the sake of this question we randomly sample 100 instances in the test data and analyze them for likely reasons for disagreements. We consider a disagreement to have occurred when more than one expert does not agree with the majority consensus. By disagreement we mean there is no overlap between the text identified as relevant by one expert and another. We find that the annotators agree on the answer for 74% of the questions, even if the supporting evidence they identify is not identical i.e full overlap. They disagree on the remaining 26%. Sources of apparent disagreement correspond to situations when different experts: have differing interpretations of question intent (11%) (for example, when a user asks 'who can contact me through the app', the questions admits multiple interpretations, including seeking information about the features of the app, asking about first party collection/use of data or asking about third party collection/use of data), identify different sources of evidence for questions that ask if a practice is performed or not (4%), have differing interpretations of policy content (3%), identify a partial answer to a question in the privacy policy (2%) (for example, when the user asks `who is allowed to use the app' a majority of our annotators decline to answer, but the remaining annotators highlight partial evidence in the privacy policy which states that children under the age of 13 are not allowed to use the app), and other legitimate sources of disagreement (6%) which include personal subjective views of the annotators (for example, when the user asks `is my DNA information used in any way other than what is specified', some experts consider the boilerplate text of the privacy policy which states that it abides to practices described in the policy document as sufficient evidence to answer this question, whereas others do not). Experimental Setup We evaluate the ability of machine learning methods to identify relevant evidence for questions in the privacy domain. We establish baselines for the subtask of deciding on the answerability (§SECREF33) of a question, as well as the overall task of identifying evidence for questions from policies (§SECREF37). We describe aspects of the question that can render it unanswerable within the privacy domain (§SECREF41). Experimental Setup ::: Answerability Identification Baselines We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline. SVM: We define 3 sets of features to characterize each question. 
The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as the length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features and the length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions. BERT: BERT BIBREF51 is a bidirectional transformer-based language model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128. Experimental Setup ::: Privacy Question Answering Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy. Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similarly to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n-1 subset, in relation to the held-out reference (a sketch of this computation is given below). Experimental Setup ::: Privacy Question Answering ::: Baselines We describe baselines on this task, including a human performance baseline. No-Answer Baseline (NA): Most of the questions we receive are difficult to answer in a legally sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline: To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerability classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable). Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline. Results and Discussion The results of the answerability baselines are presented in Table TABREF31, and those on answer sentence selection in Table TABREF32. We observe that BERT exhibits the best performance on the binary answerability identification task. 
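A minimal sketch of one plausible reading of the sentence-level F1 described above, treating each answer as a set of sentence indices and taking, for every question, the maximum F1 over the available gold references; the function names are illustrative and not from the released evaluation code.

```python
def f1(pred, gold):
    """pred, gold: sets of sentence indices; empty-vs-empty counts as a perfect match."""
    if not pred and not gold:
        return 1.0
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r)

def question_score(pred, references):
    """Max F1 of the prediction against each gold reference answer."""
    return max(f1(pred, ref) for ref in references)

# Example: predicted evidence sentences 3 and 7, two expert references.
print(question_score({3, 7}, [{3}, {7, 8}]))   # 0.666...
```

The corpus-level score would then be the mean of these per-question maxima, with the human baseline obtained by holding out one reference at a time as described in the text.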
At the same time, most baselines considerably exceed the performance of a majority-class baseline. This suggests that the question itself carries considerable information about its likely answerability within this domain. Table.TABREF32 describes the performance of our baselines on the answer sentence selection task. The No-answer (NA) baseline performs at 28 F1, providing a lower bound on performance at this task. We observe that our best-performing baseline, Bert + Unanswerable, achieves an F1 of 39.8. This suggests that BERT is capable of making some progress towards answering questions in this difficult domain, while still leaving considerable headroom for improvement to reach human performance. The performance of Bert + Unanswerable suggests that incorporating information about answerability can help in this difficult domain. We examine this challenging phenomenon of unanswerability further in Section . Results and Discussion ::: Error Analysis Disagreements are analyzed based on the OPP-115 categories of each question (Table.TABREF34). We compare our best performing BERT variant against the NA model and human performance. We observe significant room for improvement across all categories of questions, but especially for the first party, third party and data retention categories. We analyze the performance of our strongest BERT variant to identify classes of errors and directions for future improvement (Table.8). We observe that a majority of answerability mistakes made by the BERT model are questions which are in fact answerable, but are identified as unanswerable by BERT. BERT makes 124 such mistakes on the test set. We collect expert judgments on relevance, subjectivity, silence, and how likely the question is to be answered from the privacy policy. We find that most of these mistakes are relevant questions. However, many of them were identified as subjective by the annotators, and at least one annotator marked 19 of these questions as having no answer within the privacy policy. Only 6 of these questions were unexpected or do not usually have an answer in privacy policies. These findings suggest that a more nuanced understanding of answerability might help improve model performance in this challenging domain. Results and Discussion ::: What makes Questions Unanswerable? We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability: Incomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible. Relevance: Is this question in the scope of what could be answered by reading the privacy policy. Ill-formedness: Is this question ambiguous or vague. An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition. Silence: Other policies answer this type of question but this one does not. Atypicality: The question is of a nature such that it is unlikely for any privacy policy to have an answer to the question. Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 
4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehendable questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., 'has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `'will the app consume much space?'). In the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving user's information seeking intent. We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to a criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions. Conclusion We present PrivacyQA, the first significant corpus of privacy policy questions and more than 3500 expert annotations of relevant answers. The goal of this work is to promote question-answering research in the specialized privacy domain, where it can have large real-world impact. Strong neural baselines on PrivacyQA achieve a performance of only 39.8 F1 on this corpus, indicating considerable room for future research. Further, we shed light on several important considerations that affect the answerability of questions. 
We hope this contribution leads to multidisciplinary efforts to precisely understand user intent and reconcile it with information in policy documents, from both the privacy and NLP communities. Acknowledgements This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study.
[ "SVM, No-Answer Baseline (NA) , Word Count Baseline, Human Performance", "No-Answer Baseline (NA), Word Count Baseline, Human Performance" ]
3,855
qasper_e
en
null
35be0474e7f491742c4f764a3ed5fc174465bf0ba26ebff0
How many documents are in the new corpus?
Introduction The automatic processing of medical texts and documents plays an increasingly important role in the recent development of the digital health area. To enable dedicated Natural Language Processing (NLP) that is highly accurate with respect to medically relevant categories, manually annotated data from this domain is needed. One category of high interest and relevance are medical entities. Only very few annotated corpora in the medical domain exist. Many of them focus on the relation between chemicals and diseases or proteins and diseases, such as the BC5CDR corpus BIBREF0, the Comparative Toxicogenomics Database BIBREF1, the FSU PRotein GEne corpus BIBREF2 or the ADE (adverse drug effect) corpus BIBREF3. The NCBI Disease Corpus BIBREF4 contains condition mention annotations along with annotations of symptoms. Several new corpora of annotated case reports were made available recently. grouin-etal-2019-clinical presented a corpus with medical entity annotations of clinical cases written in French, copdPhenotype presented a corpus focusing on phenotypic information for chronic obstructive pulmonary disease while 10.1093/database/bay143 presented a corpus focusing on identifying main finding sentences in case reports. The corpus most comparable to ours is the French corpus of clinical case reports by grouin-etal-2019-clinical. Their annotations are based on UMLS semantic types. Even though there is an overlap in annotated entities, semantic classes are not the same. Lab results are subsumed under findings in our corpus and are not annotated as their own class. Factors extend beyond gender and age and describe any kind of risk factor that contributes to a higher probability of having a certain disease. Our corpus includes additional entity types. We annotate conditions, findings (including medical findings such as blood values), factors, and also modifiers which indicate the negation of other entities as well as case entities, i. e., entities specific to one case report. An overview is available in Table TABREF3. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation tasks Case reports are standardized in the CARE guidelines BIBREF5. They represent a detailed description of the symptoms, signs, diagnosis, treatment, and follow-up of an individual patient. We focus on documents freely available through PubMed Central (PMC). The presentation of the patient's case can usually be found in a dedicated section or the abstract. We perform a manual annotation of all mentions of case entities, conditions, findings, factors and modifiers. The scope of our manual annotation is limited to the presentation of a patient's signs and symptoms. In addition, we annotate the title of the case report. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Guidelines We annotate the following entities: case entity marks the mention of a patient. A case report can contain more than one case description. Therefore, all the findings, factors and conditions related to one patient are linked to the respective case entity. Within the text, this entity is often represented by the first mention of the patient and overlaps with the factor annotations which can, e. g., mark sex and age (cf. Figure FIGREF12). condition marks a medical disease such as pneumothorax or dislocation of the shoulder. factor marks a feature of a patient which might influence the probability for a specific diagnosis. It can be immutable (e. 
g., sex and age), describe a specific medical history (e. g., diabetes mellitus) or a behaviour (e. g., smoking). finding marks a sign or symptom a patient shows. This can be visible (e. g., rash), described by a patient (e. g., headache) or measurable (e. g., decreased blood glucose level). negation modifier explicitly negate the presence of a certain finding usually setting the case apart from common cases. We also annotate relations between these entities, where applicable. Since we work on case descriptions, the anchor point of these relations is the case that is described. The following relations are annotated: has relations exist between a case entity and factor, finding or condition entities. modifies relations exist between negation modifiers and findings. causes relations exist between conditions and findings. Example annotations are shown in Figure FIGREF16. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotators We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports. Inter-annotator agreement is calculated based on two case reports. We reach a Cohen's kappa BIBREF6 of 0.68. Disagreements mainly appear for findings that are rather unspecific such as She no longer eats out with friends which can be seen as a finding referring to “avoidance behaviour”. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Tools and Format The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represented each documents properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Corpus Overview The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24. Findings are the most frequently annotated type of entity. This makes sense given that findings paint a clinical picture of the patient's condition. 
The number of tokens per entity ranges from one token for all types to 5 tokens for cases (average length 3.1), nine tokens for conditions (average length 2.0), 16 tokens for factors (average length 2.5), 25 tokens for findings (average length 2.6) and 18 tokens for modifiers (average length 1.4) (cf. Table TABREF24). Examples of rather long entities are given in Table TABREF25. Entities can appear in a discontinuous way. We model this as a relation between two spans which we call “discontinuous” (cf. Figure FIGREF26). Findings in particular often appear as discontinuous entities; we found 543 discontinuous finding relations. The numbers for conditions and factors are lower, with seven and two, respectively. Entities can also be nested within one another. This happens either when the span of one annotation is completely embedded in the span of another annotation (fully-nested; cf. Figure FIGREF12), or when there is a partial overlap between the spans of two different entities (partially-nested; cf. Figure FIGREF12). There is a high number of inter-sentential relations in the corpus (cf. Table TABREF27). This can be explained by the fact that the case entity occurs early in each document; furthermore, it is related to finding and factor annotations that are distributed across different sentences. The most frequently annotated relation in our corpus is the has-relation between a case entity and the findings related to that case. This correlates with the high number of finding entities. The relations contained in our corpus are summarized in Table TABREF27. Baseline systems for Named Entity Recognition in medical case reports We evaluate the corpus using Named Entity Recognition (NER), i. e., the task of finding mentions of concepts of interest in unstructured text. We focus on detecting cases, conditions, factors, findings and modifiers in case reports (cf. Section SECREF6). We approach this as a sequence labeling problem. Four systems were developed to offer comparable, robust baselines. The original documents are pre-processed (sentence splitting and tokenization with ScispaCy). We do not perform stop word removal or lower-casing of the tokens. The BIO labeling scheme is used to capture the order of tokens belonging to the same entity type and enable span-level detection of entities. Detection of nested and/or discontinuous entities is not supported. The annotated corpus is randomized and split into five folds using scikit-learn BIBREF9. Each fold has a train, test and dev split, with the test split defined as 15% of the train split. This ensures comparability between the presented systems. Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric; a sketch of such a CRF baseline is given below. Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. 
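A minimal sketch of a CRF baseline of the kind described above, using sklearn-crfsuite with BIO tags over the five entity types and a small per-token feature window; the feature set, toy example and hyper-parameters are illustrative, not the exact configuration used in the paper.

```python
import sklearn_crfsuite

def token_features(sent, i):
    feats = {"bias": 1.0, "word.lower": sent[i].lower(), "word.istitle": sent[i].istitle()}
    # A small context window around the current token (the paper uses a window of five).
    for off in (-2, -1, 1, 2):
        j = i + off
        if 0 <= j < len(sent):
            feats[f"{off}:word.lower"] = sent[j].lower()
    return feats

# Toy training example with BIO labels; the tag assignment is only illustrative.
sents  = [["A", "45-year-old", "woman", "presented", "with", "severe", "headache", "."]]
labels = [["B-case", "I-case", "I-case", "O", "O", "B-finding", "I-finding", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```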
For the BiLSTM-CRF, BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization are used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, a CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, the span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value greater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs. Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by the simultaneous optimization of multiple loss functions and the transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and on the main tasks we are trying to solve. We rely on the model presented by bekoulis2018joint and reuse the implementation provided by the authors. The model jointly trains two objectives supported by the dataset: the main task of NER and a supporting task of Relation Extraction (RE). Two separate models are developed for each of the tasks. The NER task is solved with the help of a BiLSTM-CRF model, similar to the one presented in Section SECREF32. The RE task is solved by using a multi-head selection approach, where each token can have zero or more relationships to in-sentence tokens. Additionally, this model also leverages the output of the NER branch model (the CRF prediction) to learn label embeddings. Shared layers consist of a concatenation of word and character embeddings followed by two bidirectional LSTM layers. We keep most of the parameters suggested by the authors and change (1) the number of training epochs to 100 to allow the comparison to other deep learning approaches in this work, (2) use label embeddings of size 64, (3) allow gradient clipping and (4) use $d=0.8$ as the pre-trained word embedding dropout and $d=0.5$ for all other dropouts. $\eta = 10^{-3}$ is used as the learning rate with the Adam optimizer and tanh activation functions across layers. Although it is possible to use adversarial training BIBREF16, we refrain from using it. We also do not report results for the RE task, as we consider it to be a supporting task and no other competing approaches have been developed. Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT Deep neural language models have recently evolved into a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. 
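A minimal sketch of loading a BioBERT checkpoint for token-level BIO classification with the Hugging Face transformers library; the checkpoint name is an assumed public identifier, the label inventory merely mirrors the five entity types of this corpus, and the fine-tuning loop itself (with the hyper-parameters reported below) is omitted.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O",
          "B-case", "I-case", "B-condition", "I-condition", "B-factor", "I-factor",
          "B-finding", "I-finding", "B-modifier", "I-modifier"]

checkpoint = "dmis-lab/biobert-v1.1"   # assumed identifier for a BioBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(labels))

sentence = "The patient reported severe headache but no fever."
enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits               # shape: (1, num_subword_tokens, num_labels)
predicted = [labels[i] for i in logits.argmax(dim=-1)[0].tolist()]
# Note: predictions are over subword tokens and come from a randomly initialized
# classification head; fine-tuning on the BIO-tagged corpus is required first.
```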
The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token-level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task for four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\eta = 2 \times 10^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning. Baseline systems for Named Entity Recognition in medical case reports ::: Evaluation To evaluate the performance of the four systems, we calculate the span-level precision (P), recall (R) and F1 scores, along with the corresponding micro and macro scores. The reported values are shown in Table TABREF29 and are averaged over five folds, utilising the seqeval framework. With a macro avg. F1-score of 0.59, MTL achieves the best result by a significant margin compared to CRF, BiLSTM-CRF and BERT. This confirms the usefulness of jointly training multiple objectives (minimizing multiple loss functions) and enabling knowledge transfer, especially in a setting with limited data (which is usually the case in the biomedical NLP domain). This result also suggests the usefulness of BioBERT for other biomedical datasets, as reported by Lee2019. Despite being a rather standard approach, CRF outperforms the more elaborate BiLSTM-CRF, presumably due to data scarcity and class imbalance. We hypothesize that an increase in training data would yield better results for BiLSTM-CRF but would not outperform the transfer-learning approach of MTL (or even BioBERT). In contrast to other common NER corpora, like CoNLL 2003, even the best baseline system only achieves relatively low scores. This outcome is due to the inherent difficulty of the task (the annotators are experienced medical doctors) and the small number of training samples. Conclusion We present a new corpus, developed to facilitate the processing of case reports. The corpus focuses on five distinct entity types: cases, conditions, factors, findings and modifiers. Where applicable, relationships between entities are also annotated. Additionally, we annotate discontinuous entities with a special relationship type (discontinuous). The corpus presented in this paper is the very first of its kind and a valuable addition to the scarce number of corpora available in the field of biomedical NLP. Its complexity, given the discontinuous nature of entities and a high number of nested and multi-label entities, poses new challenges for NLP methods applied to NER and can, hence, be a valuable source of insights into what entities “look like in the wild”. Moreover, it can serve as a playground for new modelling techniques such as the resolution of discontinuous entities, as well as multi-task learning given the combination of entities and their relations. We provide an evaluation of four distinct NER systems that will serve as robust baselines for future work but which are, as of yet, unable to solve all the complex challenges this dataset holds. A functional service based on the presented corpus is currently being integrated, as an NER service, in the QURATOR platform BIBREF20. Acknowledgments The research presented in this article is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Unternehmen Region, Wachstumskern, grant no. 03WKDA1A), see http://qurator.ai. 
We want to thank our medical experts for their help annotating the data set, especially Ashlee Finckh and Sophie Klopfenstein.
[ "53 documents", "53 documents" ]
2,667
qasper_e
en
null
128f8753d02c6777ce3244345d4799613c188be0b6b66b7a
Where do they get the recipes from?
Introduction In the kitchen, we increasingly rely on instructions from cooking websites: recipes. A cook with a predilection for Asian cuisine may wish to prepare chicken curry, but may not know all necessary ingredients apart from a few basics. These users with limited knowledge cannot rely on existing recipe generation approaches that focus on creating coherent recipes given all ingredients and a recipe name BIBREF0. Such models do not address issues of personal preference (e.g. culinary tastes, garnish choices) and incomplete recipe details. We propose to approach both problems via personalized generation of plausible, user-specific recipes using user preferences extracted from previously consumed recipes. Our work combines two important tasks from natural language processing and recommender systems: data-to-text generation BIBREF1 and personalized recommendation BIBREF2. Our model takes as user input the name of a specific dish, a few key ingredients, and a calorie level. We pass these loose input specifications to an encoder-decoder framework and attend on user profiles—learned latent representations of recipes previously consumed by a user—to generate a recipe personalized to the user's tastes. We fuse these `user-aware' representations with decoder output in an attention fusion layer to jointly determine text generation. Quantitative (perplexity, user-ranking) and qualitative analysis on user-aware model outputs confirm that personalization indeed assists in generating plausible recipes from incomplete ingredients. While personalized text generation has seen success in conveying user writing styles in the product review BIBREF3, BIBREF4 and dialogue BIBREF5 spaces, we are the first to consider it for the problem of recipe generation, where output quality is heavily dependent on the content of the instructions—such as ingredients and cooking techniques. To summarize, our main contributions are as follows: We explore a new task of generating plausible and personalized recipes from incomplete input specifications by leveraging historical user preferences; We release a new dataset of 180K+ recipes and 700K+ user reviews for this task; We introduce new evaluation strategies for generation quality in instructional texts, centering on quantitative measures of coherence. We also show qualitatively and quantitatively that personalized models generate high-quality and specific recipes that align with historical user preferences. Related Work Large-scale transformer-based language models have shown surprising expressivity and fluency in creative and conditional long-text generation BIBREF6, BIBREF7. Recent works have proposed hierarchical methods that condition on narrative frameworks to generate internally consistent long texts BIBREF8, BIBREF9, BIBREF10. Here, we generate procedurally structured recipes instead of free-form narratives. Recipe generation belongs to the field of data-to-text natural language generation BIBREF1, which sees other applications in automated journalism BIBREF11, question-answering BIBREF12, and abstractive summarization BIBREF13, among others. BIBREF14, BIBREF15 model recipes as a structured collection of ingredient entities acted upon by cooking actions. BIBREF0 imposes a `checklist' attention constraint emphasizing hitherto unused ingredients during generation. BIBREF16 attend over explicit ingredient references in the prior recipe step. 
Similar hierarchical approaches that infer a full ingredient list to constrain generation will not help personalize recipes, and would be infeasible in our setting due to the potentially unconstrained number of ingredients (from a space of 10K+) in a recipe. We instead learn historical preferences to guide full recipe generation. A recent line of work has explored user- and item-dependent aspect-aware review generation BIBREF3, BIBREF4. This work is related to ours in that it combines contextual language generation with personalization. Here, we attend over historical user preferences from previously consumed recipes to generate recipe content, rather than writing styles. Approach Our model's input specification consists of: the recipe name as a sequence of tokens, a partial list of ingredients, and a caloric level (high, medium, low). It outputs the recipe instructions as a token sequence: $\mathcal {W}_r=\lbrace w_{r,0}, \dots , w_{r,T}\rbrace $ for a recipe $r$ of length $T$. To personalize output, we use historical recipe interactions of a user $u \in \mathcal {U}$. Encoder: Our encoder has three embedding layers: vocabulary embedding $\mathcal {V}$, ingredient embedding $\mathcal {I}$, and caloric-level embedding $\mathcal {C}$. Each token in the (length $L_n$) recipe name is embedded via $\mathcal {V}$; the embedded token sequence is passed to a two-layered bidirectional GRU (BiGRU) BIBREF17, which outputs hidden states for names $\lbrace \mathbf {n}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $, with hidden size $d_h$. Similarly each of the $L_i$ input ingredients is embedded via $\mathcal {I}$, and the embedded ingredient sequence is passed to another two-layered BiGRU to output ingredient hidden states as $\lbrace \mathbf {i}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $. The caloric level is embedded via $\mathcal {C}$ and passed through a projection layer with weights $W_c$ to generate calorie hidden representation $\mathbf {c}_{\text{enc}} \in \mathbb {R}^{2d_h}$. Ingredient Attention: We apply attention BIBREF18 over the encoded ingredients to use encoder outputs at each decoding time step. We define an attention-score function $\alpha $ with key $K$ and query $Q$: with trainable weights $W_{\alpha }$, bias $\mathbf {b}_{\alpha }$, and normalization term $Z$. At decoding time $t$, we calculate the ingredient context $\mathbf {a}_{t}^{i} \in \mathbb {R}^{d_h}$ as: Decoder: The decoder is a two-layer GRU with hidden state $h_t$ conditioned on previous hidden state $h_{t-1}$ and input token $w_{r, t}$ from the original recipe text. We project the concatenated encoder outputs as the initial decoder hidden state: To bias generation toward user preferences, we attend over a user's previously reviewed recipes to jointly determine the final output token distribution. We consider two different schemes to model preferences from user histories: (1) recipe interactions, and (2) techniques seen therein (defined in data). BIBREF19, BIBREF20, BIBREF21 explore similar schemes for personalized recommendation. Prior Recipe Attention: We obtain the set of prior recipes for a user $u$: $R^+_u$, where each recipe can be represented by an embedding from a recipe embedding layer $\mathcal {R}$ or an average of the name tokens embedded by $\mathcal {V}$. We attend over the $k$-most recent prior recipes, $R^{k+}_u$, to account for temporal drift of user preferences BIBREF22. These embeddings are used in the `Prior Recipe' and `Prior Name' models, respectively. 
Given a recipe representation $\mathbf {r} \in \mathbb {R}^{d_r}$ (where $d_r$ is recipe- or vocabulary-embedding size depending on the recipe representation) the prior recipe attention context $\mathbf {a}_{t}^{r_u}$ is calculated as Prior Technique Attention: We calculate prior technique preference (used in the `Prior Tech` model) by normalizing co-occurrence between users and techniques seen in $R^+_u$, to obtain a preference vector $\rho _{u}$. Each technique $x$ is embedded via a technique embedding layer $\mathcal {X}$ to $\mathbf {x}\in \mathbb {R}^{d_x}$. Prior technique attention is calculated as where, inspired by copy mechanisms BIBREF23, BIBREF24, we add $\rho _{u,x}$ for technique $x$ to emphasize the attention by the user's prior technique preference. Attention Fusion Layer: We fuse all contexts calculated at time $t$, concatenating them with decoder GRU output and previous token embedding: We then calculate the token probability: and maximize the log-likelihood of the generated sequence conditioned on input specifications and user preferences. fig:ex shows a case where the Prior Name model attends strongly on previously consumed savory recipes to suggest the usage of an additional ingredient (`cilantro'). Recipe Dataset: Food.com We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats. Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\le $6 recipes, while 10% of users have consumed $>$45 recipes. We order reviews by timestamp, keeping the most recent review for each user as the test set, the second most recent for validation, and the remainder for training (sequential leave-one-out evaluation BIBREF27). We evaluate only on recipes not in the training set. We manually construct a list of 58 cooking techniques from 384 cooking actions collected by BIBREF15; the most common techniques (bake, combine, pour, boil) account for 36.5% of technique mentions. We approximate technique adherence via string match between the recipe text and technique list. Experiments and Results For training and evaluation, we provide our model with the first 3-5 ingredients listed in each recipe. We decode recipe text via top-$k$ sampling BIBREF7, finding $k=3$ to produce satisfactory results. We use a hidden size $d_h=256$ for both the encoder and decoder. Embedding dimensions for vocabulary, ingredient, recipe, techniques, and caloric level are 300, 10, 50, 50, and 5 (respectively). For prior recipe attention, we set $k=20$, the 80th %-ile for the number of user interactions. We use the Adam optimizer BIBREF28 with a learning rate of $10^{-3}$, annealed with a decay rate of 0.9 BIBREF29. We also use teacher-forcing BIBREF30 in all training epochs. 
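To make the decoding step concrete, the following is a minimal sketch of top-$k$ sampling with $k=3$ as described above; it operates on a single logit vector over a toy vocabulary and is not the authors' implementation.

```python
# Illustrative top-k sampling step (k=3), as used for decoding recipe tokens.
# `logits` is a decoder's unnormalized score vector over the vocabulary.
import torch

def sample_top_k(logits: torch.Tensor, k: int = 3) -> int:
    topk_vals, topk_idx = torch.topk(logits, k)        # keep the k best tokens
    probs = torch.softmax(topk_vals, dim=-1)            # renormalize over them
    choice = torch.multinomial(probs, num_samples=1)    # sample one of the k
    return topk_idx[choice].item()

# Toy usage with a 10-word vocabulary:
next_token = sample_top_k(torch.randn(10), k=3)
```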
In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8. We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score. Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix. Generation quality progressively improves from generic baseline output to a blended cocktail produced by our best performing model. Models attending over prior recipes explicitly reference ingredients. The Prior Name model further suggests the addition of lemon and mint, which are reasonably associated with previously consumed recipes like coconut mousse and pork skewers. Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline. Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. 
Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77. Recipe Step Entailment: Local coherence is also crucial to a user following a recipe: it is crucial that subsequent steps are logically consistent with prior ones. We model local coherence as an entailment task: predicting the likelihood that a recipe step follows the preceding. We sample several consecutive (positive) and non-consecutive (negative) pairs of steps from each recipe. We train a BERT BIBREF34 model to predict the entailment score of a pair of steps separated by a [SEP] token, using the final representation of the [CLS] token. The step entailment score is computed as the average of scores for each set of consecutive steps in each recipe, averaged over every generated recipe for a model, as shown in tab:coherencemetrics. Human Evaluation: We presented 310 pairs of recipes for pairwise comparison BIBREF8 (details in appendix) between baseline and each personalized model, with results shown in tab:metricsontest. On average, human evaluators preferred personalized model outputs to baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes. We also performed a small-scale human coherence survey over 90 recipes, in which 60% of users found recipes generated by personalized models to be more coherent and preferable to those generated by baseline models. Conclusion In this paper, we propose a novel task: to generate personalized recipes from incomplete input specifications and user histories. On a large novel dataset of 180K recipes and 700K reviews, we show that our personalized generative models can generate plausible, personalized, and coherent recipes preferred by human evaluators for consumption. We also introduce a set of automatic coherence measures for instructional texts as well as personalization metrics to support our claims. Our future work includes generating structured representations of recipes to handle ingredient properties, as well as accounting for references to collections of ingredients (e.g. “dry mix"). Acknowledgements. This work is partly supported by NSF #1750063. We thank all reviewers for their constructive suggestions, as well as Rei M., Sujoy P., Alicia L., Eric H., Tim S., Kathy C., Allen C., and Micah I. for their feedback. Appendix ::: Food.com: Dataset Details Our raw data consists of 270K recipes and 1.4M user-recipe interactions (reviews) scraped from Food.com, covering a period of 18 years (January 2000 to December 2018). See tab:int-stats for dataset summary statistics, and tab:samplegk for sample information about one user-recipe interaction and the recipe involved. Appendix ::: Generated Examples See tab:samplechx for a sample recipe for chicken chili and tab:samplewaffle for a sample recipe for sweet waffles. 
Human Evaluation We prepared a set of 15 pairwise comparisons per evaluation session, and collected 930 pairwise evaluations (310 per personalized model) over 62 sessions. For each pair, users were given a partial recipe specification (name and 3-5 key ingredients), as well as two generated recipes labeled `A' and `B'. One recipe is generated from our baseline encoder-decoder model and one recipe is generated by one of our three personalized models (Prior Tech, Prior Name, Prior Recipe). The order of recipe presentation (A/B) is randomly selected for each question. A screenshot of the user evaluation interface is given in fig:exeval. We ask the user to indicate which recipe they find more coherent, and which recipe best accomplishes the goal indicated by the recipe name. A screenshot of this survey interface is given in fig:exeval2.
[ "from Food.com" ]
2,649
qasper_e
en
null
c3694e336a5431082844e851d3bf6ca4b1d21ce87665281e
How do they evaluate their resulting word embeddings?
Introduction Low dimensional word representations (embeddings) have become a key component in modern NLP systems for language modeling, parsing, sentiment classification, and many others. These embeddings are usually derived by employing the distributional hypothesis: that similar words appear in similar contexts BIBREF0. The models that perform the word embedding can be divided into two classes: predictive, which learn a target or context word distribution, and counting, which use a raw, weighted, or factored word-context co-occurrence matrix BIBREF1. The most well-known predictive model, which has become eponymous with word embedding, is word2vec BIBREF2. Popular counting models include PPMI-SVD BIBREF3, GloVe BIBREF4, and LexVec BIBREF5. These models all learn word-level representations, which presents two main problems: 1) Learned information is not explicitly shared among the representations as each word has an independent vector. 2) There is no clear way to represent out-of-vocabulary (OOV) words. fastText BIBREF6 addresses these issues in the Skip-gram word2vec model by representing a word by the sum of a unique vector and a set of shared character n-gram vectors (from here on simply referred to as n-grams). This addresses both issues above as learned information is shared through the n-gram vectors and, from these, OOV word representations can be constructed. In this paper we propose incorporating subword information into counting models using a strategy similar to fastText. We use LexVec as the counting model as it generally outperforms PPMI-SVD and GloVe on intrinsic and extrinsic evaluations BIBREF7, BIBREF8, BIBREF9, BIBREF10, but the method proposed here should transfer to GloVe unchanged. The LexVec objective is modified such that a word's vector is the sum of all its subword vectors. We compare 1) the use of n-gram subwords, like fastText, and 2) unsupervised morphemes identified using Morfessor BIBREF11 to learn whether more linguistically motivated subwords offer any advantage over simple n-grams. To evaluate the impact subword information has on in-vocabulary (IV) word representations, we run intrinsic evaluations consisting of word similarity and word analogy tasks. The incorporation of subword information results in similar gains (and losses) to those of fastText over Skip-gram. Whereas incorporating n-gram subwords tends to capture more syntactic information, unsupervised morphemes better preserve semantics while also improving syntactic results. Given that intrinsic performance can correlate poorly with performance on downstream tasks BIBREF12, we also conduct evaluation using the VecEval suite of tasks BIBREF13, in which all subword models, including fastText, show no significant improvement over word-level models. We verify the model's ability to represent OOV words by qualitatively evaluating nearest neighbors. Results show that, like fastText, both LexVec n-gram and (to a lesser degree) unsupervised morpheme models give coherent answers. This paper discusses related work ( $§$ "Related Work" ), introduces the subword LexVec model ( $§$ "Subword LexVec" ), describes experiments ( $§$ "Materials" ), analyzes results ( $§$ "Results" ), and concludes with ideas for future work ( $§$ "Conclusion and Future Work" ). Related Work Word embeddings that leverage subword information were first introduced by BIBREF14, which represented a word as the sum of four-gram vectors obtained by running an SVD of a four-gram to four-gram co-occurrence matrix. 
Our model differs in that the subword vectors and the resulting word representations are learned jointly while performing weighted factorization of a word-context co-occurrence matrix. There are many models that use character-level subword information to form word representations BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, as well as fastText (the model on which we base our work). Closely related are models that use morphological segmentation in learning word representations BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. Our model also uses n-grams and morphological segmentation, but it performs explicit matrix factorization to learn subword and word representations, unlike these related models which mostly use neural networks. Finally, BIBREF26 and BIBREF27 retrofit morphological information onto pre-trained models. These differ from our work in that we incorporate morphological information at training time, and in that only BIBREF26 is able to generate embeddings for OOV words. Subword LexVec The LexVec BIBREF7 model factorizes the PPMI-weighted word-context co-occurrence matrix using stochastic gradient descent. $$PPMI_{wc} = \max (0, \log \frac{M_{wc} \; M_{**}}{ M_{w*} \; M_{*c} })$$ (Eq. 3) where $M$ is the word-context co-occurrence matrix constructed by sliding a window of fixed size centered over every target word $w$ in the subsampled BIBREF2 training corpus and incrementing cell $M_{wc}$ for every context word $c$ appearing within this window (forming a $(w,c)$ pair). LexVec adjusts the PPMI matrix using context distribution smoothing BIBREF3. With the PPMI matrix calculated, the sliding window process is repeated and the following loss functions are minimized for every observed $(w,c)$ pair and target word $w$: $$L_{wc} &= \frac{1}{2} (u_w^\top v_c - PPMI_{wc})^2 \\ L_{w} &= \frac{1}{2} \sum \limits _{i=1}^k{\mathbf {E}_{c_i \sim P_n(c)} (u_w^\top v_{c_i} - PPMI_{wc_i})^2 }$$ (Eq. 4) where $u_w$ and $v_c$ are $d$-dimensional word and context vectors. The second loss function describes how, for each target word, $k$ negative samples BIBREF2 are drawn from the smoothed context unigram distribution. Given a set of subwords $S_w$ for a word $w$, we follow fastText and replace $u_w$ in the loss functions above (Eq. 4) by $u^{\prime }_w$ such that: $$u^{\prime }_w = \frac{1}{|S_w| + 1} (u_w + \sum _{s \in S_w} q_{hash(s)})$$ (Eq. 5) so that a word is represented by the normalized sum of its word vector and its $d$-dimensional subword vectors $q_x$. The number of possible subwords is very large, so the function $hash(s)$ hashes a subword to the interval $[1, buckets]$. For OOV words, $$u^{\prime }_w = \frac{1}{|S_w|} \sum _{s \in S_w} q_{hash(s)}$$ (Eq. 7) We compare two types of subwords: simple n-grams (like fastText) and unsupervised morphemes. For example, given the word “cat”, we mark the beginning and end with angled brackets and use all n-grams of length 3 to 6 as subwords, yielding $S_{\textnormal {cat}} = \lbrace \textnormal {⟨ca, at⟩, ⟨cat⟩} \rbrace$. Morfessor BIBREF11 is used to probabilistically segment words into morphemes. The Morfessor model is trained using raw text, so it is entirely unsupervised. For the word “subsequent”, we get $S_{\textnormal {subsequent}} = \lbrace \textnormal {⟨sub, sequent⟩} \rbrace$. Materials Our experiments aim to measure whether the incorporation of subword information into LexVec results in improvements similar to those observed in moving from Skip-gram to fastText, and whether unsupervised morphemes offer any advantage over n-grams. 
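To make Eq. (5) concrete, the sketch below extracts character n-grams of length 3 to 6 with boundary markers, hashes them into a fixed number of buckets, and combines them with the word vector as in the equation. The hash function and the tiny bucket table used here are illustrative stand-ins and do not reproduce the actual implementation.

```python
# Minimal sketch of the subword composition in Eq. (5).
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    marked = "<" + word + ">"                      # boundary markers
    return [marked[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(marked) - n + 1)]

def compose(word_vec, subwords, subword_table):
    """Eq. (5): normalized sum of the word vector and its hashed subword vectors."""
    buckets = subword_table.shape[0]               # 2,000,000 in the paper
    vecs = [subword_table[hash(s) % buckets] for s in subwords]   # q_hash(s)
    return (word_vec + np.sum(vecs, axis=0)) / (len(subwords) + 1)

# Toy usage with random vectors standing in for trained parameters:
table = np.random.rand(1000, 300)                  # tiny bucket table for illustration
vec = compose(np.random.rand(300), char_ngrams("cat"), table)
# char_ngrams("cat") == ['<ca', 'cat', 'at>', '<cat', 'cat>', '<cat>']
```

For OOV words (Eq. 7), the word vector term is dropped and only the hashed subword vectors are averaged.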
For IV words, we perform intrinsic evaluation via word similarity and word analogy tasks, as well as downstream tasks. OOV word representation is tested through qualitative nearest-neighbor analysis. All models are trained using a 2015 dump of Wikipedia, lowercased and using only alphanumeric characters. The vocabulary is limited to words that appear at least 100 times, for a total of 303,517 words. Morfessor is trained on this vocabulary list. We train the standard LexVec (LV), LexVec using n-grams (LV-N), and LexVec using unsupervised morphemes (LV-M) using the same hyper-parameters as BIBREF7 ($\textnormal {window} = 2$, $\textnormal {initial learning rate} = .025$, $\textnormal {subsampling} = 10^{-5}$, $\textnormal {negative samples} = 5$, $\textnormal {context distribution smoothing} = .75$, $\textnormal {positional contexts} = \textnormal {True}$). Both Skip-gram (SG) and fastText (FT) are trained using the reference implementation of fastText with the hyper-parameters given by BIBREF6 ($\textnormal {window} = 5$, $\textnormal {initial learning rate} = .025$, $\textnormal {subsampling} = 10^{-4}$, $\textnormal {negative samples} = 5$). All five models are run for five iterations over the training corpus and generate 300-dimensional word representations. LV-N, LV-M, and FT use 2,000,000 buckets when hashing subwords. For word similarity evaluations, we use the WordSim-353 Similarity (WS-Sim) and Relatedness (WS-Rel) BIBREF28 and SimLex-999 (SimLex) BIBREF29 datasets, and the Rare Word (RW) BIBREF20 dataset to verify whether subword information improves rare word representation. Relationships are measured using the Google semantic (GSem) and syntactic (GSyn) analogies BIBREF2 and the Microsoft syntactic analogies (MSR) dataset BIBREF30. We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13, using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings. Finally, we use LV-N, LV-M, and FT to generate OOV word representations for the following words: 1) “hellooo”: a greeting commonly used in instant messaging which emphasizes a syllable. 2) “marvelicious”: a made-up word obtained by merging “marvelous” and “delicious”. 3) “louisana”: a misspelling of the proper name “Louisiana”. 4) “rereread”: recursive use of the prefix “re”. 5) “tuzread”: made-up prefix “tuz”. Results Results for IV evaluation are shown in tab:intrinsic, and for OOV in tab:oov. As in FT, the use of subword information in both LV-N and LV-M results in 1) better representation of rare words, as evidenced by the increase in RW correlation, and 2) significant improvement on the GSyn and MSR tasks, evidence that subwords encode information about a word's syntactic function (the suffix “ly”, for example, suggests an adverb). There seems to be a trade-off between capturing semantics and syntax, as in both LV-N and FT there is an accompanying decrease on the GSem tasks in exchange for gains on the GSyn and MSR tasks. Morphological segmentation in LV-M appears to favor syntax less strongly than do simple n-grams. 
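The word similarity benchmarks above are conventionally scored by correlating cosine similarities between embedding pairs with the human ratings (Spearman's rho); assuming that standard protocol, a minimal sketch of such an evaluation, not the authors' evaluation code, looks as follows.

```python
# Sketch of a standard word-similarity evaluation: Spearman correlation between
# cosine similarities and human ratings. Dataset loading is a placeholder.
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def word_similarity_score(pairs, embeddings):
    """pairs: (word1, word2, human_rating) triples; embeddings: dict word -> vector."""
    model, gold = [], []
    for w1, w2, rating in pairs:
        if w1 in embeddings and w2 in embeddings:      # skip pairs with missing words
            model.append(cosine(embeddings[w1], embeddings[w2]))
            gold.append(rating)
    return spearmanr(model, gold).correlation

# Toy usage (random vectors, three pairs):
# emb = {w: np.random.rand(300) for w in ["cat", "dog", "car", "tiger"]}
# print(word_similarity_score([("cat", "dog", 7.0), ("cat", "tiger", 8.0), ("car", "dog", 1.5)], emb))
```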
On the downstream tasks, we only observe statistically significant ($p < .05$ under a random permutation test) improvement on the chunking task, and it is a very small gain. We attribute this to both regular and subword models having very similar quality on frequent IV word representation. Statistically, these are the words that are most likely to appear in the downstream task instances, and so the superior representation of rare words has, due to their nature, little impact on overall accuracy. Because in all tasks OOV words are mapped to the “$\langle$unk$\rangle$” token, the subword models are not being used to the fullest, and in future work we will investigate whether generating representations for all words improves task performance. In OOV representation (tab:oov), LV-N and FT work almost identically, as is to be expected. Both find highly coherent neighbors for the words “hellooo”, “marvelicious”, and “rereread”. Interestingly, the misspelling of “louisana” leads to coherent name-like neighbors, although none is the expected correct spelling “louisiana”. All models stumble on the made-up prefix “tuz”. A possible fix would be to down-weight very rare subwords in the vector summation. LV-M is less robust than LV-N and FT on this task as it is highly sensitive to incorrect segmentation, as exemplified by “hellooo”. Finally, we see that nearest neighbors are a mixture of similarly pre/suffixed words. If these pre/suffixes are semantic, the neighbors are semantically related, while if they are syntactic, the neighbors have a similar syntactic function. This suggests that it should be possible to get tunable representations which are more driven by semantics or syntax through a weighted summation of subword vectors, provided we can identify whether a pre/suffix is semantic or syntactic in nature and weight it accordingly. This might be possible without supervision using corpus statistics, as syntactic subwords are likely to be more frequent and so could be down-weighted for more semantic representations. This is something we will pursue in future work. Conclusion and Future Work In this paper, we incorporated subword information (simple n-grams and unsupervised morphemes) into the LexVec word embedding model and evaluated its impact on the resulting IV and OOV word vectors. Like fastText, subword LexVec learns better representations for rare words than its word-level counterpart. All models generated coherent representations for OOV words, with simple n-grams demonstrating more robustness than unsupervised morphemes. In future work, we will verify whether using OOV representations in downstream tasks improves performance. We will also explore the trade-off between semantics and syntax when subword information is used.
[ "We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13 , using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings." ]
2,083
qasper_e
en
null
1443430475309d05eb23043251943ee0b3c3ade6f0730fdc
What are 10 other phenotypes that are annotated?
Introduction and Related Work With the widespread adoption of electronic health records (EHRs), medical data are being generated and stored digitally in vast quantities BIBREF0. While much EHR data are structured and amenable to analysis, there appears to be limited homogeneity in data completeness and quality BIBREF1, and it is estimated that the majority of healthcare data are being generated in unstructured, text-based format BIBREF2. The generation and storage of these unstructured data come concurrently with policy initiatives that seek to utilize preventative measures to reduce hospital admission and readmission BIBREF3. Chronic illnesses, behavioral factors, and social determinants of health are known to be associated with higher risks of hospital readmission, BIBREF4 and though behavioral factors and social determinants of health are often determined at the point of care, their identification may not always be curated in structured format within the EHR in the same manner that other factors associated with routine patient history taking and physical examination are BIBREF5. Identifying these patient attributes within EHRs in a reliable manner has the potential to reveal actionable associations which otherwise may remain poorly defined. As EHRs act to streamline the healthcare administration process, much of the data collected and stored in structured format may be those data most relevant to reimbursement and billing, and may not necessarily be those data which were most relevant during the clinical encounter. For example, a diabetic patient who does not adhere to an insulin treatment regimen and who thereafter presents to the hospital with symptoms indicating diabetic ketoacidosis (DKA) will be treated and considered administratively as an individual presenting with DKA, though that medical emergency may have been secondary to non-adherence to the initial treatment regimen in the setting of diabetes. In this instance, any retrospective study analyzing only the structured data from many similarly selected clinical encounters will necessarily then underestimate the effect of treatment non-adherence with respect to hospital admissions. While this form of high context information may not be found in the structured EHR data, it may be accessible in patient notes, including nursing progress notes and discharge summaries, particularly through the utilization of natural language processing (NLP) technologies. BIBREF6, BIBREF7 Given progress in NLP methods, we sought to address the issue of unstructured clinical text by defining and annotating clinical phenotypes in text which may otherwise be prohibitively difficult to discern in the structured data associated with the text entry. For this task, we chose the notes present in the publicly available MIMIC database BIBREF8. Given the MIMIC database as substrate and the aforementioned policy initiatives to reduce unnecessary hospital readmissions, as well as the goal of providing structure to text, we elected to focus on patients who were frequently readmitted to the ICU BIBREF9. In particular, a patient who is admitted to the ICU more than three times in a single year. By defining our cohort in this way we sought to ensure we were able to capture those characteristics unique to the cohort in a manner which may yield actionable intelligence on interventions to assist this patient population. 
Data Characteristics We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature BIBREF10 BIBREF11 BIBREF12. Each entry in this database consists of a Subject Identifier (integer), a Hospital Admission Identifier (integer), Category (string), Text (string), 15 Phenotypes (binary) including “None” and “Unsure”, Batch Date (string), and Operators (string). These variables are sufficient to use the data set alone, or to join it to the MIMIC-III database by Subject Identifier or Hospital Admission Identifier for additional patient-level or admission-level data, respectively. The MIMIC database BIBREF8 was utilized to extract Subject Identifiers, Hospital Admission Identifiers, and Note Text. Annotated discharge summaries had a median token count of 1417.50 (Q1-Q3: 1046.75 - 1926.00) with a vocabulary of 26,454 unique tokens, while nursing notes had a median count of 208 (Q1-Q3: 120 - 312) with a vocabulary of 12,865 unique tokens. Table defines each of the considered clinical patient phenotypes. Table counts the occurrences of these phenotypes across patient notes and Figure contains the corresponding correlation matrix. Lastly, Table presents an overview of some descriptive statistics on the patient notes' lengths. Methods Clinical researchers teamed with junior medical residents, in collaboration with more senior intensive care physicians, to carry out text annotation over a period of one year BIBREF13. Operators were grouped to facilitate the annotation of notes in duplicate, allowing for cases of disagreement between operators. The operators within each team were instructed to work independently on note annotation. Clinical texts were annotated in batches which were time-stamped on their day of creation; when both operators in a team completed annotation of a batch, a new batch was created and transferred to them. Two operator pairs (group 1: co-authors ETM & JTW; group 2: co-authors JW & JF), each consisting of one clinical researcher and one resident physician (who had previously taken the MCAT®), first annotated nursing notes and then discharge summaries. All operators were first trained on the high-context phenotypes to look for, as well as their definitions, by going through a number of notes as a group. A total of 13 phenotypes were considered for annotation, and the label “unsure” was used to indicate that an operator would like to seek assistance from a more senior physician in determining the presence of a phenotype. Annotations for phenotypes required explicit text in the note indicating the phenotype, but as a result of the complexity of certain phenotypes there was no specific dictionary of terms, or order in which the terms appeared, required for a phenotype to be considered present. Limitations There exist a few limitations to this database. These data are unique to Beth Israel Deaconess Medical Center (BIDMC), and models resulting from these data may not generalize to notes generated at other hospitals. Admissions to hospitals not associated with BIDMC will not have been captured, and generalizability is limited due to the limited geographic distribution of patients who present to the hospital. 
We welcome opportunities to continue to expand this dataset with additional phenotypes sought in the unstructured text, patient subsets, and text originating from different sources, with the goal of expanding the utility of NLP methods to further structure patient note text for retrospective analyses. Technical Validation All statistics and tabulations were generated and performed with R Statistical Software version 3.5.2 BIBREF14. Cohen's Kappa BIBREF15 was calculated for each phenotype and pair of annotators for which precisely two note annotations were recorded. Table summarizes the calculated Cohen's Kappa coefficients. Usage Notes As this corpus of annotated patient notes comprises original healthcare data which contains protected health information (PHI) per the Health Insurance Portability and Accountability Act of 1996 (HIPAA) BIBREF16 and can be joined to the MIMIC-III database, individuals who wish to access the data must satisfy all requirements to access the data contained within MIMIC-III. To satisfy these conditions, an individual who wishes to access the database must take a “Data or Specimens Management” course, as well as sign a user agreement, as outlined on the MIMIC-III database webpage, where the latest version of this database will be hosted as “Annotated Clinical Texts from MIMIC” BIBREF17. This corpus can also be accessed on GitHub after completing all of the above requirements. Baselines In this section, we present the performance of two well-established baseline models that automatically infer the phenotypes based on the patient note, which we approach as a multi-label, multi-class text classification task BIBREF18. Each baseline model is a binary classifier indicating whether a given phenotype is present in the input patient note. As a result, we train a separate model for each phenotype. Baselines ::: Bag of Words + Logistic Regression We convert each patient note into a bag of words and give it as input to a logistic regression. Baselines ::: Convolutional Neural Network (CNN) We follow the CNN architecture proposed by collobert2011natural and kim2014convolutional. We use convolution widths from 1 to 4, and for each convolution width we set the number of filters to 100. We use dropout with a probability of $0.5$ to reduce overfitting BIBREF19. The trainable parameters were initialized using a uniform distribution from $-0.05$ to $0.05$. The model was optimized with adadelta BIBREF20. We use word2vec BIBREF21 as the word embeddings, which we pretrain on all the notes of MIMIC III v3. Table presents the performance of the two baseline models (F1-score). Conclusion In this paper we have presented a new dataset containing discharge summaries and nursing progress notes, focusing on frequently readmitted patients and high-context social determinants of health, and originating from a large tertiary care hospital. Each patient note was annotated by at least one clinical researcher and one resident physician for 13 high-context patient phenotypes. Phenotype definitions, dataset distribution, patient note statistics, inter-operator error, and the results of baseline models were reported to demonstrate that the dataset is well-suited for the development of both rule-based and statistical models for patient phenotyping. We hope that the release of this dataset will accelerate the development of algorithms for patient phenotyping, which in turn would significantly help medical research progress faster. 
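For reference, the following is a minimal sketch of the per-phenotype CNN baseline described above (parallel convolutions of widths 1 to 4 with 100 filters each, dropout of 0.5, uniform initialization in [-0.05, 0.05], and adadelta). The vocabulary size, the sequence length, and the loading of the pretrained word2vec weights are placeholders, not values from the paper.

```python
# Sketch of the per-phenotype CNN baseline: one binary classifier per phenotype.
import tensorflow as tf

VOCAB_SIZE, EMB_DIM, MAX_LEN = 20000, 300, 2000   # illustrative assumptions

init = tf.keras.initializers.RandomUniform(minval=-0.05, maxval=0.05)
tokens = tf.keras.Input(shape=(MAX_LEN,))
emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM)(tokens)  # word2vec weights in practice

pooled = []
for width in (1, 2, 3, 4):                         # convolution widths 1-4, 100 filters each
    conv = tf.keras.layers.Conv1D(100, width, activation="relu",
                                  kernel_initializer=init)(emb)
    pooled.append(tf.keras.layers.GlobalMaxPooling1D()(conv))

features = tf.keras.layers.Concatenate()(pooled)
features = tf.keras.layers.Dropout(0.5)(features)
output = tf.keras.layers.Dense(1, activation="sigmoid", kernel_initializer=init)(features)

model = tf.keras.Model(tokens, output)
model.compile(optimizer=tf.keras.optimizers.Adadelta(), loss="binary_crossentropy")
```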
Acknowledgements The authors would like to acknowledge Kai-ou Tang and William Labadie-Moseley for assistance in the development of a graphical user interface for text annotation. We would also like to thank Philips Healthcare, The Laboratory of Computational Physiology at The Massachusetts Institute of Technology, and staff at the Beth Israel Deaconess Medical Center, Boston, for supporting the MIMIC-III database, from which these data were derived.
[ "Adv. Heart Disease, Adv. Lung Disease, Alcohol Abuse, Chronic Neurologic Dystrophies, Dementia, Depression, Developmental Delay, Obesity, Psychiatric disorders and Substance Abuse" ]
1,651
qasper_e
en
null
0b04d1a3009c6f4f4502defd297b37bb756f8e5e904d2449
How long are the essays on average?
Introduction Several learner corpora have been compiled for English, such as the International Corpus of Learner English BIBREF0 . The importance of such resources has been increasingly recognized across a variety of research areas, from Second Language Acquisition to Natural Language Processing. Recently, we have seen substantial growth in this area and new corpora for languages other than English have appeared. For Romance languages, there are a several corpora and resources for French, Spanish BIBREF1 , and Italian BIBREF2 . Portuguese has also received attention in the compilation of learner corpora. There are two corpora compiled at the School of Arts and Humanities of the University of Lisbon: the corpus Recolha de dados de Aprendizagem do Português Língua Estrangeira (hereafter, Leiria corpus), with 470 texts and 70,500 tokens, and the Learner Corpus of Portuguese as Second/Foreign Language, COPLE2 BIBREF3 , with 1,058 texts and 201,921 tokens. The Corpus de Produções Escritas de Aprendentes de PL2, PEAPL2 compiled at the University of Coimbra, contains 516 texts and 119,381 tokens. Finally, the Corpus de Aquisição de L2, CAL2, compiled at the New University of Lisbon, contains 1,380 texts and 281,301 words, and it includes texts produced by adults and children, as well as a spoken subset. The aforementioned Portuguese learner corpora contain very useful data for research, particularly for Native Language Identification (NLI), a task that has received much attention in recent years. NLI is the task of determining the native language (L1) of an author based on their second language (L2) linguistic productions BIBREF4 . NLI works by identifying language use patterns that are common to groups of speakers of the same native language. This process is underpinned by the presupposition that an author’s L1 disposes them towards certain language production patterns in their L2, as influenced by their mother tongue. A major motivation for NLI is studying second language acquisition. NLI models can enable analysis of inter-L1 linguistic differences, allowing us to study the language learning process and develop L1-specific pedagogical methods and materials. However, there are limitations to using existing Portuguese data for NLI. An important issue is that the different corpora each contain data collected from different L1 backgrounds in varying amounts; they would need to be combined to have sufficient data for an NLI study. Another challenge concerns the annotations as only two of the corpora (PEAPL2 and COPLE2) are linguistically annotated, and this is limited to POS tags. The different data formats used by each corpus presents yet another challenge to their usage. In this paper we present NLI-PT, a dataset collected for Portuguese NLI. The dataset is made freely available for research purposes. With the goal of unifying learner data collected from various sources, listed in Section "Collection methodology" , we applied a methodology which has been previously used for the compilation of language variety corpora BIBREF5 . The data was converted to a unified data format and uniformly annotated at different linguistic levels as described in Section "Preprocessing and annotation of texts" . To the best of our knowledge, NLI-PT is the only Portuguese dataset developed specifically for NLI, this will open avenues for research in this area. Related Work NLI has attracted a lot of attention in recent years. 
Due to the availability of suitable data, as discussed earlier, this attention has been particularly focused on English. The most notable examples are the two editions of the NLI shared task organized in 2013 BIBREF6 and 2017 BIBREF7. Even though most NLI research has been carried out on English data, an important research trend in recent years has been the application of NLI methods to other languages, as discussed in multilingual-nli. Recent NLI studies on languages other than English include Arabic BIBREF8 and Chinese BIBREF9, BIBREF10. To the best of our knowledge, no study has been published on Portuguese, and the NLI-PT dataset opens new possibilities of research for Portuguese. In Section "A Baseline for Portuguese NLI" we present the first simple baseline results for this task. Finally, as NLI-PT can be used in other applications besides NLI, it is important to point out that a number of studies have been published on educational NLP applications for Portuguese and on the compilation of learner language resources for Portuguese. Examples of such studies include grammatical error correction BIBREF11, automated essay scoring BIBREF12, academic word lists BIBREF13, and the learner corpora presented in the previous section. Collection methodology The data was collected from three different learner corpora of Portuguese: (i) COPLE2; (ii) the Leiria corpus, and (iii) PEAPL2, as presented in Table 1. The three corpora contain written productions from learners of Portuguese with different proficiency levels and native languages (L1s). In the dataset we included all the data in COPLE2 and sections of PEAPL2 and the Leiria corpus. The main variable we used for text selection was the presence of specific L1s. Since the three corpora consider different L1s, we decided to use the L1s present in the largest corpus, COPLE2, as the reference. Therefore, we included in the dataset texts corresponding to the following 15 L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. Some of the L1s present in COPLE2 were not documented in the other corpora. The number of texts from each L1 is presented in Table 2. Concerning the corpus design, there is some variability among the sources we used. The Leiria corpus and PEAPL2 followed a similar approach for data collection and have a similar design. They consider a closed list of topics, called “stimuli”, which belong to three general areas: (i) the individual; (ii) the society; (iii) the environment. These topics are presented to the students, who are asked to produce a written text. As a whole, texts from PEAPL2 and Leiria represent 36 different stimuli or topics in the dataset. In the COPLE2 corpus the written texts correspond to written exercises done during Portuguese lessons, or to official Portuguese proficiency tests. For this reason, the topics considered in the COPLE2 corpus are different from the topics in Leiria and PEAPL2. The number of topics is also larger in the COPLE2 corpus: 149 different topics. There is some overlap between the different topics considered in COPLE2, that is, some topics deal with the same subject. This overlap allowed us to reorganize the COPLE2 topics in our dataset, reducing them to 112. Due to the different distribution of topics in the source corpora, the 148 topics in the dataset are not represented uniformly. Three topics account for 48.7% of the total texts while, on the other hand, 72% of the topics are represented by only 1-10 texts (Figure 1). 
This variability also affects text length. The longest text has 787 tokens and the shortest has only 16 tokens. Most texts, however, range roughly from 150 to 250 tokens. To better understand the distribution of texts in terms of word length, we plot a histogram of all texts with their word length in bins of 10 (1-10 tokens, 11-20 tokens, 21-30 tokens and so on) (Figure 2). The three corpora use the proficiency levels defined in the Common European Framework of Reference for Languages (CEFR), but they show differences in the number of levels they consider. There are five proficiency levels in COPLE2 and PEAPL2: A1, A2, B1, B2, and C1. In the Leiria corpus, however, there are only three levels: A, B, and C. The number of texts included from each proficiency level is presented in Table 4. Preprocessing and annotation of texts As demonstrated earlier, these learner corpora use different formats. COPLE2 is mainly codified in XML, although it also provides the student version of the essay in TXT format. PEAPL2 and the Leiria corpus are compiled in TXT format. In both corpora, the TXT files contain the student version with special annotations from the transcription. For the NLI experiments we were interested in a clean TXT version of the students' texts, together with versions annotated at different linguistic levels. Therefore, as a first step, we removed all the annotations corresponding to the transcription process in the PEAPL2 and Leiria files. As a second step, we proceeded to the linguistic annotation of the texts using different NLP tools. We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added both a simple POS tag, that is, only the type of word, and a fine-grained POS tag, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 for the simple POS and the Portuguese morphological module of Freeling BIBREF15 for the detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16. Applications NLI-PT was developed primarily for NLI, but it can be used for other research purposes ranging from second language acquisition to educational NLP applications. Here are a few examples of applications in which the dataset can be used: A Baseline for Portuguese NLI To demonstrate the usefulness of the dataset we present the first lexical baseline for Portuguese NLI using a sub-set of NLI-PT. To the best of our knowledge, no study has been published on Portuguese NLI and our work fills this gap. In this experiment we included the five L1s in NLI-PT which contain the largest number of texts in this sub-set and ran a simple linear SVM BIBREF21 classifier using a bag of words model to identify the L1 of each text. The languages included in this experiment were Chinese (355 texts), English (236 texts), German (214 texts), Italian (216 texts), and Spanish (271 texts). We evaluated the model using stratified 10-fold cross-validation, achieving 70% accuracy. An important limitation of this experiment is that it does not account for topic bias, an important issue in NLI BIBREF22. This is due to the fact that NLI-PT is not balanced by topic and the model could be learning topic associations instead. In future work we would like to carry out experiments using syntactic features such as function words, syntactic relations and POS annotation. 
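A baseline of this kind can be reproduced in a few lines with scikit-learn; the sketch below follows the described setup (bag of words, linear SVM, stratified 10-fold cross-validation) but is not the authors' exact configuration.

```python
# Sketch of the bag-of-words + linear SVM NLI baseline with stratified 10-fold CV.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def evaluate_nli_baseline(texts, l1_labels):
    """texts: list of essay strings; l1_labels: the L1 of each essay."""
    pipeline = make_pipeline(CountVectorizer(), LinearSVC())
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(pipeline, texts, l1_labels, cv=cv, scoring="accuracy").mean()

# Example call (the five-language NLI-PT subset would be loaded here):
# acc = evaluate_nli_baseline(essays, l1s)
```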
Conclusion and Future Work This paper presented NLI-PT, the first Portuguese dataset compiled for NLI. NLI-PT contains 1,868 texts written by speakers of 15 L1s, amounting to over 380,000 tokens. As discussed in Section "Applications" , NLI-PT opens several avenues for future research. It can be used for different research purposes beyond NLI, such as grammatical error correction and CALL. An experiment on the texts written by speakers of five L1s (Chinese, English, German, Italian, and Spanish) using a bag-of-words model achieved 70% accuracy. We are currently experimenting with different features that take advantage of the annotation available in NLI-PT, thus reducing topic bias in classification. In future work we would like to include more texts in the dataset following the same methodology and annotation. Acknowledgement We want to thank the research teams that have made available the data we used in this work: Centro de Estudos de Linguística Geral e Aplicada at Universidade de Coimbra (especially Cristina Martins) and Centro de Linguística da Universidade de Lisboa (particularly Amália Mendes). This work was partially supported by Fundação para a Ciência e a Tecnologia (postdoctoral research grant SFRH/BPD/109914/2015).
[ "204 tokens", "Most texts, however, range roughly from 150 to 250 tokens." ]
1,898
qasper_e
en
null
78ec8052a16e28c7c9da9d61bfd7b70c05ada0efa5169524
What textual patterns are extracted?
Introduction Writing errors can occur in many different forms – from relatively simple punctuation and determiner errors, to mistakes including word tense and form, incorrect collocations and erroneous idioms. Automatically identifying all of these errors is a challenging task, especially as the amount of available annotated data is very limited. Rei2016 showed that while some error detection algorithms perform better than others, it is additional training data that has the biggest impact on improving performance. Being able to generate realistic artificial data would allow for any grammatically correct text to be transformed into annotated examples containing writing errors, producing large amounts of additional training examples. Supervised error generation systems would also provide an efficient method for anonymising the source corpus – error statistics from a private corpus can be aggregated and applied to a different target text, obscuring sensitive information in the original examination scripts. However, the task of creating incorrect data is somewhat more difficult than might initially appear – naive methods for error generation can create data that does not resemble natural errors, thereby making downstream systems learn misleading or uninformative patterns. Previous work on artificial error generation (AEG) has focused on specific error types, such as prepositions and determiners BIBREF0 , BIBREF1 , or noun number errors BIBREF2 . Felice2014a investigated the use of linguistic information when generating artificial data for error correction, but also restricting the approach to only five error types. There has been very limited research on generating artificial data for all types, which is important for general-purpose error detection systems. For example, the error types investigated by Felice2014a cover only 35.74% of all errors present in the CoNLL 2014 training dataset, providing no additional information for the majority of errors. In this paper, we investigate two supervised approaches for generating all types of artificial errors. We propose a framework for generating errors based on statistical machine translation (SMT), training a model to translate from correct into incorrect sentences. In addition, we describe a method for learning error patterns from an annotated corpus and transplanting them into error-free text. We evaluate the effect of introducing artificial data on two error detection benchmarks. Our results show that each method provides significant improvements over using only the available training set, and a combination of both gives an absolute improvement of 4.3% in INLINEFORM0 , without requiring any additional annotated data. Error Generation Methods We investigate two alternative methods for AEG. The models receive grammatically correct text as input and modify certain tokens to produce incorrect sequences. The alternative versions of each sentence are aligned using Levenshtein distance, allowing us to identify specific words that need to be marked as errors. While these alignments are not always perfect, we found them to be sufficient for practical purposes, since alternative alignments of similar sentences often result in the same binary labeling. Future work could explore more advanced alignment methods, such as proposed by felice-bryant-briscoe. In Section SECREF4 , this automatically labeled data is then used for training error detection models. 
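To illustrate the labeling step described above, the sketch below derives binary token labels by aligning the original and corrected versions of a sentence; Python's difflib is used here as a stand-in for the Levenshtein-based alignment mentioned in the text, which is an assumption about the implementation:

```python
# Hypothetical sketch: mark each token of the learner sentence as correct (0) or
# incorrect (1) by aligning it against the corrected version.
from difflib import SequenceMatcher

def token_labels(original_tokens, corrected_tokens):
    labels = [1] * len(original_tokens)          # assume incorrect by default
    matcher = SequenceMatcher(a=original_tokens, b=corrected_tokens)
    for block in matcher.get_matching_blocks():  # spans shared by both versions
        for i in range(block.a, block.a + block.size):
            labels[i] = 0                        # token survives unchanged -> correct
    return labels

print(token_labels("We went shop on Saturday".split(),
                   "We went shopping on Saturday".split()))
# -> [0, 0, 1, 0, 0]
```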
Machine Translation We treat AEG as a translation task – given a correct sentence as input, the system would learn to translate it to contain likely errors, based on a training corpus of parallel data. Existing SMT approaches are already optimised for identifying context patterns that correspond to specific output sequences, which is also required for generating human-like errors. The reverse of this idea, translating from incorrect to correct sentences, has been shown to work well for error correction tasks BIBREF2 , BIBREF3 , and round-trip translation has also been shown to be promising for correcting grammatical errors BIBREF4 . Following previous work BIBREF2 , BIBREF5 , we build a phrase-based SMT error generation system. During training, error-corrected sentences in the training data are treated as the source, and the original sentences written by language learners as the target. Pialign BIBREF6 is used to create a phrase translation table directly from model probabilities. In addition to default features, we add character-level Levenshtein distance to each mapping in the phrase table, as proposed by Felice:2014-CoNLL. Decoding is performed using Moses BIBREF7 and the language model used during decoding is built from the original erroneous sentences in the learner corpus. The IRSTLM Toolkit BIBREF8 is used for building a 5-gram language model with modified Kneser-Ney smoothing BIBREF9 . Pattern Extraction We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors. The original and corrected sentences in the corpus are aligned and used to identify short transformation patterns in the form of (incorrect phrase, correct phrase). The length of each pattern is the affected phrase, plus up to one token of context on both sides. If a word form changes between the incorrect and correct text, it is fully saved in the pattern, otherwise the POS tags are used for matching. For example, the original sentence `We went shop on Saturday' and the corrected version `We went shopping on Saturday' would produce the following pattern: (VVD shop_VV0 II, VVD shopping_VVG II) After collecting statistics from the background corpus, errors can be inserted into error-free text. The learned patterns are now reversed, looking for the correct side of the tuple in the input sentence. We only use patterns with frequency INLINEFORM0 , which yields a total of 35,625 patterns from our training data. For each input sentence, we first decide how many errors will be generated (using probabilities from the background corpus) and attempt to create them by sampling from the collection of applicable patterns. This process is repeated until all the required errors have been generated or the sentence is exhausted. During generation, we try to balance the distribution of error types as well as keeping the same proportion of incorrect and correct sentences as in the background corpus BIBREF10 . The required POS tags were generated with RASP BIBREF11 , using the CLAWS2 tagset. Error Detection Model We construct a neural sequence labeling model for error detection, following the previous work BIBREF12 , BIBREF13 . 
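As an aside before the detection model is described in detail, the pattern format above can be made concrete with a short sketch; the difflib-based alignment and the exact CLAWS2-style tags shown are illustrative assumptions (the authors tag with RASP):

```python
# Hypothetical sketch: collect (incorrect phrase, correct phrase) patterns from one
# aligned, POS-tagged sentence pair. Changed words keep their surface form ("shop_VV0");
# the single token of context on each side is reduced to its POS tag.
from difflib import SequenceMatcher

def encode(indexed_span, changed):
    return " ".join(f"{w}_{t}" if i in changed else t for i, (w, t) in indexed_span)

def extract_patterns(orig_tagged, corr_tagged):
    """orig_tagged / corr_tagged: lists of (token, POS) pairs for one sentence pair."""
    patterns = []
    matcher = SequenceMatcher(a=[w for w, _ in orig_tagged],
                              b=[w for w, _ in corr_tagged])
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            continue
        lo, hi = max(i1 - 1, 0), min(i2 + 1, len(orig_tagged))   # +/- one token of context
        jlo, jhi = max(j1 - 1, 0), min(j2 + 1, len(corr_tagged))
        incorrect = encode(list(enumerate(orig_tagged))[lo:hi], set(range(i1, i2)))
        correct = encode(list(enumerate(corr_tagged))[jlo:jhi], set(range(j1, j2)))
        patterns.append((incorrect, correct))
    return patterns

orig = [("We", "PPIS2"), ("went", "VVD"), ("shop", "VV0"), ("on", "II"), ("Saturday", "NPD1")]
corr = [("We", "PPIS2"), ("went", "VVD"), ("shopping", "VVG"), ("on", "II"), ("Saturday", "NPD1")]
print(extract_patterns(orig, corr))
# -> [('VVD shop_VV0 II', 'VVD shopping_VVG II')]
```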
The model receives a sequence of tokens as input and outputs a prediction for each position, indicating whether the token is correct or incorrect in the current context. The tokens are first mapped to a distributed vector space, resulting in a sequence of word embeddings. Next, the embeddings are given as input to a bidirectional LSTM BIBREF14 , in order to create context-dependent representations for every token. The hidden states from forward- and backward-LSTMs are concatenated for each word position, resulting in representations that are conditioned on the whole sequence. This concatenated vector is then passed through an additional feedforward layer, and a softmax over the two possible labels (correct and incorrect) is used to output a probability distribution for each token. The model is optimised by minimising categorical cross-entropy with respect to the correct labels. We use AdaDelta BIBREF15 for calculating an adaptive learning rate during training, which accounts for a higher baseline performance compared to previous results. Evaluation We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data. We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . Each artificial error generation system was used to generate 3 different versions of the artificial data, which were then combined with the original annotated dataset and used for training an error detection system. Table TABREF1 contains example sentences from the error generation systems, highlighting each of the edits that are marked as errors. The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision – this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset. The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. 
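The evaluation measure referred to above is the precision-weighted F-measure with beta = 0.5, i.e. precision counts twice as much as recall; a small helper with an arbitrary worked example:

```python
# Precision-weighted F-measure; the example numbers are arbitrary.
def f_beta(precision, recall, beta=0.5):
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(0.60, 0.25), 3))  # -> 0.469
```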
Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection. We used the Approximate Randomisation Test BIBREF17 , BIBREF18 to calculate statistical significance and found that the improvement for each of the systems using artificial data was significant over using only manual annotation. In addition, the final combination system is also significantly better compared to the Felice2014a system, on all three datasets. While Rei2016 also report separate experiments that achieve even higher performance, these models were trained on a considerably larger proprietary corpus. In this paper we compare error detection frameworks trained on the same publicly available FCE dataset, thereby removing the confounding factor of dataset size and only focusing on the model architectures. The error generation methods can generate alternative versions of the same input text – the pattern-based method randomly samples the error locations, and the SMT system can provide an n-best list of alternative translations. Therefore, we also investigated the combination of multiple error-generated versions of the input files when training error detection models. Figure FIGREF6 shows the INLINEFORM0 score on the development set, as the training data is increased by using more translations from the n-best list of the SMT system. These results reveal that allowing the model to see multiple alternative versions of the same file gives a distinct improvement – showing the model both correct and incorrect variations of the same sentences likely assists in learning a discriminative model. Related Work Our work builds on prior research into AEG. Brockett2006 constructed regular expressions for transforming correct sentences to contain noun number errors. Rozovskaya2010a learned confusion sets from an annotated corpus in order to generate preposition errors. Foster2009 devised a tool for generating errors for different types using patterns provided by the user or collected automatically from an annotated corpus. However, their method uses a limited number of edit operations and is thus unable to generate complex errors. Cahill2013 compared different training methodologies and showed that artificial errors helped correct prepositions. Felice2014a learned error type distributions for generating five types of errors, and the system in Section SECREF3 is an extension of this model. While previous work focused on generating a specific subset of error types, we explored two holistic approaches to AEG and showed that they are able to significantly improve error detection performance. Conclusion This paper investigated two AEG methods, in order to create additional training data for error detection. First, we explored a method using textual patterns learned from an annotated corpus, which are used for inserting errors into correct input text. In addition, we proposed formulating error generation as an MT framework, learning to translate from grammatically correct to incorrect sentences. The addition of artificial data to the training process was evaluated on three error detection annotations, using the FCE and CoNLL 2014 datasets. Making use of artificial data provided improvements for all data generation methods. 
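For reference, a compact sketch of the approximate randomisation test mentioned above, written here for per-item scores; for a corpus-level measure like the one used in the paper, the metric would be recomputed on the shuffled outputs in each trial. The trial count and seed are arbitrary choices:

```python
# Hypothetical sketch of an approximate randomisation test: randomly swap the two
# systems' outputs per item and count how often the shuffled score difference is
# at least as large as the observed one.
import random

def approx_randomization(scores_a, scores_b, trials=10000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b))
    at_least_as_extreme = 0
    for _ in range(trials):
        shuffled_a, shuffled_b = [], []
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:      # swap this item's assignment
                a, b = b, a
            shuffled_a.append(a)
            shuffled_b.append(b)
        if abs(sum(shuffled_a) - sum(shuffled_b)) >= observed:
            at_least_as_extreme += 1
    return (at_least_as_extreme + 1) / (trials + 1)   # estimated p-value
```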
By relaxing the type restrictions and generating all types of errors, our pattern-based method consistently outperformed the system by Felice2014a. The combination of the pattern-based method with the machine translation approach gave further substantial improvements and the best performance on all datasets.
[ "(VVD shop_VV0 II, VVD shopping_VVG II)", "patterns for generating all types of errors" ]
2,133
qasper_e
en
null
3eb54f9ba615e042408b8ddafcbd4d41db0f1e72c8eedca8
Do they study numerical properties of their obtained vectors (such as orthogonality)?
Introduction Numerous lexical semantic properties are captured by representations encoding distributional properties of words, as has been demonstrated in a variety of tasks BIBREF0 , BIBREF1 , BIBREF2 . However, this distributional account of meaning does not scale to larger units like phrases and sentences BIBREF3 , BIBREF4 , motivating research into compositional models that combine word representations to produce representations of the semantics of longer units BIBREF5 , BIBREF6 , BIBREF7 . Previous work has learned these models using autoencoder formulations BIBREF8 or limited human supervision BIBREF5 . In this work, we explore the hypothesis that the equivalent knowledge about how words compose can be obtained through monolingual paraphrases that have been extracted using word alignments and an intermediate language BIBREF9 . Confirming this hypothesis would allow the rapid development of compositional models in a large number of languages. As their name suggests, these models also impose the assumption that longer units like phrases are compositional, i.e., a phrase's meaning can be understood from the literal meaning of its parts. However, countless examples that run contrary to the assumption exist, and handling these non-compositional phrases has been problematic and of long-standing interest in the community BIBREF10 , BIBREF11 . (Non-) Compositionality detection can provide vital information to other language processing systems on whether a multiword unit should be treated semantically as a single entity or not, and scoring this phenomenon is particularly relevant for downstream tasks like machine translation (MT) or information retrieval. We explore the hypothesis that contextual evidence can be used to determine the relative degree to which a phrase is meant compositionally. Rather than focusing purely on intrinsic clean-room evaluations, the goal of this work is to learn relatively accurate context-sensitive compositional models that are also directly applicable in real-world, noisy-data scenarios. This objective necessitates certain design decisions, and to this end we propose a robust, scalable framework that learns compositional functions and scores relative phrasal compositionality. We make three contributions: first, a novel way to learn compositional functions for part-of-speech pairs that uses supervision from an automatically-extracted list of paraphrases (§ SECREF3 ). Second, a context-dependent scoring model that scores the relative compositionality of a phrase BIBREF12 by computing the likelihood of its context given its paraphrase-learned representation (§ SECREF4 ). And third, an evaluation of the impact of compositionality knowledge in an end-to-end MT setup. Our experiments (§ SECREF5 ) reveal that using supervision from automatically extracted paraphrases produces compositional functions with equivalent performance to previous approaches that have relied on hand-annotated training data. Furthermore, compositionality features consistently improve the translations produced by a strong English–Spanish translation system. Parametric Composition Functions We formalize composition as a function INLINEFORM0 that maps INLINEFORM1 -dimensional vector representations of phrase constituents INLINEFORM2 to an INLINEFORM3 -dimensional vector representation of the phrase, i.e., the composed representation. A phrase is defined as any contiguous sequence of words of length 2 or greater, and does not have to adhere to constituents in a phrase structure grammar. 
This definition is in line with our MT application and ignores “gappy” noncontiguous phrases, but this pragmatic choice does exclude many verb-object relations BIBREF13 . We assume the existence of word-level vector representations for every word in our vocabulary of size INLINEFORM4 . Compositionality is modeled as a bilinear map, and two classes of linear models with different levels of parametrization are proposed. Unlike previous work BIBREF6 , BIBREF7 , BIBREF14 where the functions are word-specific, our compositional functions operate on part-of-speech (POS) tag pairs, which facilitates learning by drastically reducing the number of parameters, and only requires a shallow syntactic parse of the input. Concatenation Models Our first class of models is a generalization of the additive models introduced in Mitchell2008: DISPLAYFORM0 where the notation INLINEFORM0 represents a vertical (row-wise) concatenation of two vectors; namely, the concatenation that results in a INLINEFORM1 -sized vector. In addition to the INLINEFORM2 parameters for the word vector representations that are provided a priori, this model introduces INLINEFORM3 parameters, where INLINEFORM4 is the number of POS-tag pairs we consider. Mitchell2008 significantly simplify parameter estimation by assuming a certain structure for the parameter matrix INLINEFORM0 , which is necessary given the limited human-annotated data they use. For example, by assuming a block-diagonal structure, we get a scaled element-wise addition model INLINEFORM1 . While not strictly in this category due to the non-linearities involved, neural network-based compositional models BIBREF7 , BIBREF15 can be viewed as concatenation models, although the order of concatenation and matrix multiplication is switched. However, these models introduce more than INLINEFORM2 parameters. Tensor Models The second class of models leverages pairwise multiplicative interactions between the components of the two word vectors: DISPLAYFORM0 where INLINEFORM0 corresponds to a tensor contraction along the INLINEFORM1 mode of the tensor INLINEFORM2 . In this case, we first compute a contraction (tensor-vector product) between INLINEFORM3 and INLINEFORM4 along INLINEFORM5 's third mode, corresponding to interactions with the second word vector of a two-word phrase and resulting in a matrix, which is then multiplied along its second mode (corresponding to traditional matrix multiplication on the right) by INLINEFORM6 . The final result is an INLINEFORM7 vector. This model introduces INLINEFORM8 parameters. Tensor models are a generalization of the element-wise multiplicative model BIBREF16 , which permits non-zero values only on the tensor diagonal. Operating at the vocabulary level, the model of Baroni2010 has interesting parallels to our tensor model. They focus on adjective–noun relationships and learn a specific matrix for every adjective in their dataset; in our case, the specific matrix for each adjective has a particular form, namely that it can be factorized into the product of a tensor and a vector; the tensor corresponds to the actual adjective–noun combiner function, and the vector corresponds to specific lexical information that the adjective carries. This concept generalizes to other POS pairs: for example, multiplying the tensor that represents determiner-noun combinations along the second mode with the vector for “the” results in a matrix that represents the semantic operation of definiteness. 
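To make the two model families above concrete, the following sketch composes a word pair with a per-POS-pair matrix (concatenation model) and with a per-POS-pair tensor (tensor model); the dimension and the random parameter values are placeholders standing in for learned ones:

```python
# Hypothetical sketch of the two composition families described above.
import numpy as np

n = 4                                            # embedding dimension (placeholder)
rng = np.random.default_rng(0)

# Concatenation model: one (n x 2n) matrix per POS-tag pair, e.g. (JJ, NN).
W_jj_nn = rng.normal(size=(n, 2 * n))

def compose_concat(W, x1, x2):
    return W @ np.concatenate([x1, x2])

# Tensor model: one (n x n x n) tensor per POS-tag pair.
T_jj_nn = rng.normal(size=(n, n, n))

def compose_tensor(T, x1, x2):
    M = np.einsum("ijk,k->ij", T, x2)            # contract along the third mode with x2
    return M @ x1                                # then multiply on the right by x1

x_adj, x_noun = rng.normal(size=n), rng.normal(size=n)
print(compose_concat(W_jj_nn, x_adj, x_noun).shape)   # (4,)
print(compose_tensor(T_jj_nn, x_adj, x_noun).shape)   # (4,)
```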
Learning these parameters jointly is statistically more efficient than separately learning versions for each word. Longer Phrases The proposed models operate on pairs of words at a time. To handle phrases of length greater than two, we greedily construct a left-branching tree of the phrase constituents that eventually dictates the application of the learned bilinear maps. For each internal tree node, we consider the POS tags of its children: if the right child is a noun, and the left child is either a noun, adjective, or determiner, then the internal node is marked as a noun, otherwise we mark it with a generic other tag. At the end of the procedure, unattached nodes (words) are attached at the highest point in the tree. After the tree is constructed, we can compute the overall phrasal representation in a bottom-up manner, guided by the labels of leaf and internal nodes. We note that the emphasis of this work is not to compute sentence-level representations. This goal has been explored in recent research BIBREF17 , BIBREF18 , and combining our models with methods presented therein for sentence-level representations is straightforward. Learning The models described above rely on parameters INLINEFORM0 that must be learned. In this section, we argue that automatically constructed databases of paraphrases provide adequate supervision for learning notions of compositionality. Supervision from Automatic Paraphrases The Paraphrase Database BIBREF9 is a collection of ranked monolingual paraphrases that have been extracted from word-aligned parallel corpora using the bilingual pivot method BIBREF19 . The underlying assumption is that if two strings in the same language align to the same string in another language, then the strings in the original language share the same meaning. Paraphrases are ranked by their word alignment scores, and in this work we use the preselected small portion of PPDB as our training data. Although we can directly extract phrasal representations of a pre-specified list of phrases from the corpus used to compute word representations BIBREF6 , this approach is both computationally and statistically inefficient: the number of phrases increases exponentially in the length of the phrase, and correspondingly the occurrence of any individual phrase decreases exponentially. We can thus circumvent these computational and statistical issues by using monolingual paraphrases. The training data is filtered to provide only two-to-one word paraphrase mappings, and the multiword portion of the paraphrase is subsequently POS-tagged. Table TABREF10 provides a breakdown of such paraphrases by their POS pair type. Given the lack of context when tagging, it is likely that the POS tagger yields the most probable tag for words and not the most probable tag given the (limited) context. Furthermore, even the higher quality portions of PPDB yield paraphrases of ranging quality, ranging from non-trivial mappings such as young people INLINEFORM0 youth, to redundant ones like the ceasefire INLINEFORM1 ceasefire. However, PPDB-like resources are more easily available than human-annotated resources (in multiple languages too: Ganitkevich2014), so it is imperative that methods which learn compositional functions from such sources handle noisy supervision adequately. Parameter Estimation The parameters INLINEFORM0 in Eq. EQREF4 and EQREF6 can be estimated through standard linear regression techniques in conjunction with the data presented in § SECREF3 . 
These methods provide a natural way to regularize INLINEFORM1 via INLINEFORM2 (ridge) or INLINEFORM3 (LASSO) regularization, which also helps handle noisy paraphrases. Parameters for the INLINEFORM4 -regularized concatenation model for select POS pairs are displayed in Fig. FIGREF12 . The heat-maps display the relative magnitude of parameters, with positive values colored blue, negative values colored red, and white cells indicating zero values. It is evident that the parameters learned from PPDB indicate a notion of linguistic headedness, namely that for particular POS pairs, the semantic information is primarily contained in the right word, but for others such as the noun–noun combination, each constituent's contribution is relatively more equal. Measuring of Compositionality The concatenation and tensor models compute an INLINEFORM0 -dimensional vector representation for a multi-word phrase by assuming the meaning of the phrase can be expressed in terms of the meaning of its constituents. This assumption holds true to varying degrees; while it clearly holds for “large amount" and breaks down for “cloud nine", it is partially valid for phrases such as “zebra crossing" or “crash course". In line with previous work, we assume a compositionality continuum BIBREF12 , but further conjecture that a phrase's level of compositionality is dependent on the specific context in which it occurs, motivating a context-based approach (§ SECREF21 ) which scores compositionality by computing the likelihoods of surrounding context words given a phrase representation. The effect of context is directly measured through a comparison with context-independent methods from prior work BIBREF20 , BIBREF21 It is important to note that most prior work on compositionality scoring assumes access to both word and phrase vector representations (for select phrases that will be evaluated) a priori. The latter are distinct from representations that are computed from learned compositional functions as they are extracted directly from the corpus, which is an expensive procedure. Our aim is to develop compositional models that are applicable in downstream tasks, and thus assuming pre-existing phrase vectors is unreasonable. Hence for phrases, we only rely on representations computed from our learned compositional functions. At the Type Level Given vector representations for the constituent words in a phrase and the phrase itself, the idea behind the type-based model is to compute similarities between the constituent word representations and the phrasal representation, and average the similarities across the constituents. If the contexts in which a constituent word occurs, as dictated by its vector representation, are very different from the contexts of the composed phrase, as indicated by the cosine similarity between the word and phrase representations, then the phrase is likely to be non-compositional. Assuming unit-normalized word vectors INLINEFORM0 and phrase vector INLINEFORM1 computed from one of the learned models in § SECREF2 : DISPLAYFORM0 where INLINEFORM0 is a hyperparameter that controls the contribution of individual constituents. This model leverages the average statistics computed over the training corpora (as encapsulated in the word and phrase vectors) to detect compositionality, and is the primary way compositionality has been evaluated previously BIBREF21 , BIBREF22 . Note that for the simple additive model INLINEFORM1 with unit-normalized word vectors, INLINEFORM2 is independent of INLINEFORM3 . 
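A small sketch of the type-level score just described: a weighted average of cosine similarities between each unit-normalized constituent vector and the composed phrase vector. The interpolation form shown is one plausible reading of the elided equation, consistent with the note that the score becomes independent of the hyperparameter for the additive model:

```python
# Hypothetical sketch of the type-level compositionality score.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def type_level_score(x1, x2, phrase_vec, alpha=0.5):
    x1, x2, p = unit(x1), unit(x2), unit(phrase_vec)
    return alpha * float(x1 @ p) + (1.0 - alpha) * float(x2 @ p)

# Lower values suggest the phrase behaves less compositionally on average.
```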
At the Token Level Eq. EQREF20 scores phrases for compositionality regardless of the context that these phrases occur in. However, phrases such as “big fish" or “heavy metal" may occur in both compositional and non-compositional situations, depending on the nature and topic of the texts they occur in. Here, we propose a context-driven model for compositionality detection, inspired by the skip-gram model for learning word representations BIBREF2 . The intuition is simple: if a phrase is compositional, it should be sufficiently predictive of the context words around it; otherwise, it is acting in a non-compositional manner. Thus, we would like to compute the likelihood of the context ( INLINEFORM0 ) given a phrasal representation ( INLINEFORM1 ) and normalization constant INLINEFORM2 : DISPLAYFORM0 As explained in Goldberg2014, the context representations are distinct from the word representations. In practice, we compute the log-likelihood averaged over the context words or the perplexity instead of the actual likelihood. Evaluation Our experiments had three aims: first, demonstrate that the compositional functions learned using paraphrase supervision compute semantically meaningful results for compositional phrases by evaluating on a phrase similarity task (§ SECREF29 ); second, verify the hypothesis that compositionality is context-dependent by comparing a type-based and token-based approach on a compound noun evaluation task (§ SECREF36 ); and third, determine if the compositionality-scoring models based on learned representations improve the translations produced by a state-of-the-art phrase-based MT system (§ SECREF38 ). The word vectors used in all of our experiments were produced by word2vec using the skip-gram model with 20 negative samples, a context window size of 10, a minimum token count of 3, and sub-sampling of frequent words with a parameter of INLINEFORM0 . We extracted corpus statistics for word2vec using the AFP portion of the English Gigaword, which consists of 887.5 million tokens. The code used to generate the results is available at http://www.github.com/xyz, and the evaluation datasets are publicly available. Phrasal Similarity For the phrase similarity task we first compare our concatenation and tensor models learned using INLINEFORM0 and INLINEFORM1 regularization to three baselines: [noitemsep] add: INLINEFORM0 mult1: INLINEFORM0 mult2: INLINEFORM0 Other additive models from previous work BIBREF5 , BIBREF23 , BIBREF24 that impose varying amounts of structural assumptions on the semantic interactions between word representations e.g., INLINEFORM0 or INLINEFORM1 are subsumed by our concatenation model. The regularization strength hyperparameter for INLINEFORM2 and INLINEFORM3 regularization was selected using 5-fold cross-validation on the PPDB training data. We evaluated the phrase compositionality models on the adjective–noun and noun–noun phrase similarity tasks compiled by Mitchell2010, using the same evaluation scheme as in the original work. Spearman's INLINEFORM0 between phrasal similarities derived from our compositional functions and the human annotators (computed individually per annotator and then averaged across all annotators) was the evaluation measure. Figure FIGREF24 presents the correlation results for the two POS pair types as a function of the dimensionality INLINEFORM0 of the representations for the concatenation models (and additive baseline) and tensor models (and multiplicative baselines). 
The concatenation models seem more effective than the tensor models in the adjective–noun case and give roughly the same performance on the noun–noun dataset, which is consistent with previous work that uses dense, low-dimensional representations BIBREF25 , BIBREF15 , BIBREF26 . Since the concatenation model involve fewer parameters, we use it as the compositional model of choice for subsequent experiments. The absolute results are also consistent with state-of-the-art results on this dataset BIBREF24 , BIBREF26 , indicating that paraphrases are an excellent source of information for learning compositional functions and a reasonable alternative to human-annotated training sets. For reference, the inter-annotator agreements are 0.52 for the adjective–noun evaluation and 0.51 for the noun–noun one. The unweighted additive baseline is surprisingly very strong on the noun–noun set, so we also compare against it in subsequent experiments. Compositionality To evaluate the compositionality-scoring models, we used the compound noun compositionality dataset introduced in Reddy2011. This dataset consists of 2670 annotations of 90 compound-noun phrases exhibiting varying levels compositionality, with scores ranging from 0 to 5 provided by 30 annotators. It also contains three to five example sentences of these phrases that were shown to the annotators, which we make use of in our context-dependent model. Consistent with the original work, Spearman's INLINEFORM0 is computed on the averaged compositionality score for a phrase across all the annotators that scored that phrase (which varies per phrase). For computing the compositional functions, we evaluate three of the best performing setups from § SECREF29 : the INLINEFORM1 and INLINEFORM2 -regularized concatenation models, and the simple additive baseline. For the context-independent model, we select the hyperparameter INLINEFORM0 in Eq. EQREF20 from the values INLINEFORM1 . For the context-dependent model, we vary the context window size INLINEFORM2 by selecting from the values INLINEFORM3 . Table TABREF37 presents Spearman's INLINEFORM4 for these setups. In all cases, the context-dependent models outperform the context-independent ones, and using a relatively simple token-based model we can approximately match the performance of the Bayesian model proposed by Hermann2012. The concatenation models are also consistently better than the additive compositional model, indicating the benefit of learning the compositional parameters via PPDB. Machine Translation While any truly successful model of semantics must match human intuitions, understanding the applications of our models is likewise important. To this end, we consider the problem of machine translation, operating under the hypothesis that sentences which express their meaning non-compositionally should also translate non-compositionally. Modern phrase-based translation systems are faced with a large number of possible segmentations of a source-language sentence during decoding, and all segmentations are considered equally likely BIBREF13 . Thus, it would be helpful to provide guidance on more likely segmentations, as dictated by the compositionality scores of the phrases extracted from a sentence, to the decoder. A low compositionality score would ideally force the decoder to consider the entire phrase as a translation unit, due to its unique semantic characteristics. Correspondingly, a high score informs the decoder that it is safe to rely on word-level translations of the phrasal constituents. 
Thus, if we reveal to the translation system that a phrase is non-compositional, it should be able to learn that translation decisions which translate it as a unit are to be favored, leading to better translations. To test this hypothesis, we built an English-Spanish MT system using the cdec decoder BIBREF27 for the entire training pipeline (word alignments, phrase extraction, feature weight tuning, and decoding). Corpora from the WMT 2011 evaluation was used to build the translation and language models, and for tuning (on news-test2010) and evaluation (on news-test2011), with scoring done using BLEU BIBREF28 . The baseline is a hierarchical phrase-based system BIBREF29 with a 4-gram language model, with feature weights tuned using MIRA BIBREF30 . For features, each translation rule is decorated with two lexical and phrasal features corresponding to the forward INLINEFORM0 and backward INLINEFORM1 conditional log frequencies, along with the log joint frequency INLINEFORM2 , the log frequency of the source phrase INLINEFORM3 , and whether the phrase pair or the source phrase is a singleton. Weights for the language model, glue rule, and word penalty are also tuned. This setup (Baseline) achieves scores en par with the published WMT results. We added the compositionality score as an additional feature, and also added two binary-valued features: the first indicates if the given translation rule has not been decorated with a compositionality score (either because it consists of non-terminals only or the lexical items in the translation rule are unigrams), and correspondingly the second feature indicates if the translation rule has been scored. Therefore, an appropriate additional baseline would be to mark translation rules with these indicator functions but without the scores, akin to identifying rules with phrases in them (Baseline + SegOn). Table TABREF40 presents the results of the MT evaluation, comparing the baselines to the best-performing context-independent and dependent scoring models from § SECREF36 . The scores have been averaged over three tuning runs with standard deviation in parentheses; bold results on the test set are statistically significant ( INLINEFORM0 ) with respect to the baseline. While knowledge of relative compositionality consistently helps, the improvements using the context-dependent scoring models, especially with the INLINEFORM1 concatenation model, are noticeably better. Related Work There has been a large amount of work on compositional models that operate on vector representations of words. With some exceptions BIBREF16 , BIBREF5 , all of these approaches are lexicalized i.e., parameters (generally in the form of vectors, matrices, or tensors) for specific words are learned, which works well for frequently occurring words but fails when dealing with compositions of arbitrary word sequences containing infrequent words. The functions are either learned with a neural network architecture BIBREF7 or as a linear regression BIBREF6 ; the latter require phrase representations extracted directly from the corpus for supervision, which can be computationally expensive and statistically inefficient. In contrast, we obtain this information through many-to-one PPDB mappings. Most of these models also require additional syntactic BIBREF31 or semantic BIBREF15 , BIBREF14 resources; on the other hand, our proposed approach only requires a shallow syntactic parse (POS tags). 
Recent efforts to make these models more practical BIBREF32 attempt to reduce their statistically complex and overly-parametrized nature, but with the exception of Zanzotto2010, who propose a way to extract compositional function training examples from a dictionary, these models generally require human-annotated data to work. Most models that score the relative (non-) compositionality of phrases do so in a context-independent manner. A central idea is to replace phrase constituents with semantically-related words and compute the similarity of the new phrase to the original BIBREF22 , BIBREF33 or make use of a variety of lexical association measures BIBREF10 , BIBREF34 . Sporleder2009 however, do make use of context in a token-based approach, where the context in which a phrase occurs as well as the phrase itself is modeled as a lexical chain, and the cohesion of the chain is measured as an indicator of a phrase's compositionality. Cohesion is computed using a web search engine-based measure, whereas we use a probabilistic model of context given a phrase representation. Hermann2012 propose a Bayesian generative model that is also context-based, but learning and inference is done through a relatively expensive Gibbs sampling scheme. In the context of MT, Zhang2008b present a Bayesian model that learns non-compositional phrases from a synchronous parse tree of a sentence pair. However, the primary aim of their work is phrase extraction for MT, and the non-compositional constraints are only applied to make the space of phrase pairs more tractable when bootstrapping their phrasal parser from their word-based parser. In contrast, we score every phrase that is extracted with the standard phrase extraction heuristics BIBREF29 , allowing the decoder to make the final decision on the impact of compositionality scores in translation. Thus, our work is more similar to Xiong2010, who propose maximum entropy classifiers that mark positions between words in a sentence as being a phrase boundary or not, and integrate these scores as additional features in an MT system. Conclusion In this work, we presented two new sources of information for compositionality modeling and scoring, paraphrase information and context. For modeling, we showed that the paraphrase-learned compositional representations performs as well on a phrase similarity task as the average human annotator. For scoring, the importance of context was shown through the comparison of context-independent and dependent models. Improvements by the context-dependent model on an extrinsic machine translation task corroborate the utility of these additional knowledge sources. We hope that this work encourages further research in making compositional semantic approaches applicable in downstream tasks.
[ "No", "No" ]
3,974
qasper_e
en
null
ba4d8e9921bcc53ce93a458307141bc528d92addb66cbc95
What is the Random Kitchen Sink approach?
Introduction In this digital era, online discussions and interactions have become a vital part of daily life, a large share of which takes place on social media platforms such as Twitter, Facebook, and Instagram. As in real life, anti-social elements exist in cyberspace; they take advantage of the anonymous nature of the online world and indulge in vulgar and offensive communication. This includes bullying, trolling, and harassment BIBREF0, BIBREF1, and has become a growing concern for governments. Youth experiencing such victimization have been reported to show psychological symptoms of anxiety, depression, and loneliness BIBREF1. It is therefore important to identify and remove such behaviour as early as possible. One solution is automatic detection using machine learning algorithms. Detecting offensive language in social media is a challenging research problem due to the different levels of ambiguity present in natural language and the noisy nature of social media text. Moreover, social media users come from linguistically diverse communities. Recognizing the complexity of this problem, BIBREF2 organized a task at SemEval 2019, Task 6: Identifying and Categorizing Offensive Language in Social Media. The organizers collected tweets using the Twitter API and annotated them hierarchically according to whether offensive language is present in the tweet, the type of the offense, and the target of the offense. There were three sub-tasks following this annotation hierarchy: a) detect whether a post is offensive (OFF) or not (NOT); b) identify the type of offense in the post as targeted threat (TTH), targeted insult (TIN), or untargeted (UNT); c) identify whether the offense is targeted at an organization or entity (ORG), a group of people (GRP), an individual (IND), or other (OTH). The dataset posed the following challenges: it was comparatively small, and it was biased/imbalanced BIBREF3. In this paper, we propose a comparative analysis for sub-task A, offensive language identification, of SemEval 2019 Task 6. Sub-task A was the most popular of the three sub-tasks, with a total of 104 participating teams. Table TABREF19 lists the top 5 participants along with their systems and F1-scores. Offensive language detection is a challenging and interesting research topic, and several shared tasks and studies have addressed it in the recent past. One of the initial works on offensive language detection using supervised classification was done by Yin et al. BIBREF4. They used n-grams, TFIDF, and a combination of TFIDF with sentiment and contextual features. Schmidt and Wiegand BIBREF5 presented a survey on automatic hate speech detection using NLP, covering features such as simple surface features, word generalization, linguistic features, knowledge-based features, and multimodal information. In 2013, a study on detecting cyberbullying in YouTube comments was carried out by Dadvar et al. BIBREF6. They used a combination of content-based, cyberbullying-specific, and user-based features and showed that detection of cyberbullying can be improved by taking user context into account. The GermEval 2018 shared task organised by Wiegand et al. BIBREF7 focused on offensive language detection in German tweets. It provided a dataset of over 8,500 annotated tweets for binary classification of offensive and non-offensive tweets, and an overall macro-F1 score of 76.77% was obtained.
Another shared task, on Aggression Identification in Social Media, was organised by Kumar et al. BIBREF8. It provided a dataset of 15,000 annotated Facebook posts and comments in Hindi and English, and a weighted F-score of 64% was obtained for both English and Hindi. The rest of the paper is structured as follows. Section 2 explains the methodology and its formulation. Section 3 discusses the proposed approach. Section 4 describes the experiments and discussions. Finally, the conclusion is given in Section 5. Methodology ::: Data Pre-processing Data pre-processing is a crucial step that needs to be carried out before applying any machine learning method, because real-world data can be very noisy and unstructured. For the two models used in this work, pre-processing of tweets is done separately. Pre-processing for the Google model: using #tags has become common practice across social media, so we replace multiple #tags with a single #tag. The @ symbol is mostly used to mention persons or entities in a tweet, so we replace multiple @-mentions with a single @-mention. Some tweets contain links to websites or other URLs, so we replace all of these with the single keyword URLS. Pre-processing for the fastText model: to obtain word vectors with the fastText model, we followed a different set of pre-processing steps. First, all numbers, punctuation marks, URLs (http:// or www.) and symbols (emoji, #tags, @-mentions) were removed from the tweet, as they do not carry information related to sentiment. After that, tokenization and lowercasing were applied to the tweets; tokenization was done using the tokenizer from the NLTK package BIBREF9. Finally, stop words were removed, using the stop-word list from the NLTK package. Methodology ::: Embeddings Word embeddings are ubiquitous in NLP, as algorithms cannot process plain text or strings in their raw form. Word embeddings are vectors that capture the semantic and contextual information of words. The embeddings used in this work are: FastText: the fastText algorithm created by Facebook BIBREF10 treats every word as a set of character n-grams, which makes it possible to produce vector representations for out-of-vocabulary words. For the current work, fastText-based word embeddings are used to generate token vectors of dimension 300 BIBREF11. The vector for each tweet is obtained by taking the average of its token vectors. Universal Sentence Encoder: developed by Google, the Universal Sentence Encoder BIBREF12, BIBREF13 provides embeddings at the sentence level. The dimension of the embedding vector is 512, irrespective of the number of tokens in the input tweet, and these vectors capture good semantic information from the sentences. For each tweet, the model generates a 512-dimensional embedding vector that is used as the feature representation for further classification. DMD and HODMD: DMD is a method initially used in fluid dynamics to capture spatio-temporal features BIBREF14. It has been used in background-foreground separation BIBREF15, load forecasting BIBREF16, saliency detection BIBREF17, etc. In natural language processing, DMD was first applied to sentiment analysis BIBREF18, BIBREF19, which motivated us to explore DMD-based features in the present work.
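Before turning to the DMD formulation, the fastText-side pre-processing and tweet-vector construction described above can be sketched as follows; the regular expressions are assumptions, and `ft_model` is assumed to be a loaded fastText model whose `get_word_vector` call follows the official Python bindings:

```python
# Hypothetical sketch: clean a tweet as described for the fastText model and
# represent it by the average of its 300-dimensional token vectors.
import re
import numpy as np
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

STOP = set(stopwords.words("english"))

def clean_tokens(tweet):
    tweet = re.sub(r"https?://\S+|www\.\S+", " ", tweet)  # remove URLs
    tweet = re.sub(r"[@#]\w+", " ", tweet)                # remove #tags and @-mentions
    tweet = re.sub(r"[^A-Za-z\s]", " ", tweet)            # remove numbers, punctuation, symbols
    return [t for t in word_tokenize(tweet.lower()) if t not in STOP]

def tweet_vector(tweet, ft_model, dim=300):
    # ft_model: a loaded fastText model, e.g. obtained via fasttext.load_model(...)
    tokens = clean_tokens(tweet)
    if not tokens:
        return np.zeros(dim)
    return np.mean([ft_model.get_word_vector(t) for t in tokens], axis=0)
```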
Dynamic mode decomposition assumes that the evolution of a sequence of data snapshots is governed by the mapping of a constant matrix $A$. $A$ captures the system's inherent dynamics, and the aim of DMD is to understand them using the dominant eigenvalues and eigenvectors of $A$. The assumption is that this matrix $A$ is of low rank, and hence the sequence of vectors $x_1, x_2, x_3, \ldots, x_k, \ldots, x_{m+1}$ eventually becomes a linearly dependent set; that is, the vector $x_{m+1}$ becomes linearly dependent on the previous vectors. The data matrix X can then be expressed in terms of the eigenvectors associated with matrix $A$, where ${\Phi ^\dag }$ is the pseudo-inverse of ${\Phi }$. $A$ is of rank m and ${\Phi }$ has m columns; hence the pseudo-inverse is used rather than the inverse. The columns of ${\Phi }$ are called DMD modes, and these form the features. Time-lagged matrices are prepared as snapshots for this approach. In Eigensent BIBREF20, the authors proposed HODMD to find embeddings for sentences. They suggested that sentences can be represented as a signal using word embeddings, by taking the average of word vectors. This is intuitive because word embeddings approximately obey the laws of linear algebra, capturing word analogies and relationships. Therefore, by considering every sentence as a multi-dimensional signal, we can capture the important transitional dynamics of sentences; in this signal representation, each word vector acts as a single point of the signal. For the present work, fastText-based embeddings are used to generate the DMD and HODMD based features. Proposed Approach The RKS approach proposed in BIBREF21, BIBREF22 explicitly maps data vectors to a space where linear separation is possible. It has been explored for natural language processing tasks BIBREF23, BIBREF24. The RKS method provides an approximate kernel function via explicit mapping. Here, $\phi (.)$ denotes the implicit mapping function (used to compute the kernel matrix), $Z(.)$ denotes the explicit mapping function using RKS, and ${\Omega _k}$ denotes a random variable. Figure FIGREF15 shows the block diagram of the proposed approach. Experiments and Discussions ::: Data Description OLID (Offensive Language Identification Dataset) is a collection of English tweets annotated using a three-level hierarchical annotation model. It was collected using the Twitter API and contains 14,460 annotated tweets. The task divided the data into train, trial, and test partitions; train and trial were initially released as a starting kit, and the test set was released later as the Test A release. All three partitions are highly imbalanced, which makes the task more challenging and realistic. The train set has 13,240 tweets, of which 8,840 are not offensive (NOT) and 4,400 are offensive (OFF). Similarly, the test set has 860 tweets, of which 620 are not offensive and 280 are offensive. Table TABREF17 shows the data distribution of the entire dataset. For the current work, the train and test data are used, amounting to 14,100 tweets. In sub-task A, offensive language identification, the goal was to discriminate between offensive and not-offensive Twitter posts. The target classes for each instance were a) Offensive (OFF): posts that contain any form of profanity or targeted offence, including threats, insults, and any form of untargeted profanity; and b) Not Offensive (NOT): posts that do not contain any profanity or offense. The results of the top teams for sub-task A are discussed in the Introduction.
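As a concrete illustration of the explicit map $Z(.)$ in the proposed approach, here is a sketch using random Fourier features for an RBF kernel, followed by a regularized least-squares classifier; the kernel choice, bandwidth, and target dimension are assumptions rather than the authors' exact configuration (their implementation follows BIBREF29, BIBREF30):

```python
# Hypothetical sketch of an RKS-style explicit feature map (random Fourier
# features approximating an RBF kernel), followed by regularized least-squares
# classification in the mapped space.
import numpy as np
from sklearn.linear_model import RidgeClassifier

def make_rks_map(d, D=1000, gamma=1.0, seed=0):
    """Return an explicit map Z(.) from d-dimensional inputs to D random features."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))  # random directions
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)                    # random phases
    def Z(X):
        return np.sqrt(2.0 / D) * np.cos(X @ Omega + b)
    return Z

# Usage with pre-computed tweet vectors (e.g. 512-d USE or 300-d fastText features):
# Z = make_rks_map(d=X_train.shape[1], D=1000)
# clf = RidgeClassifier(alpha=1.0).fit(Z(X_train), y_train)   # RLSC-style linear model
# accuracy = clf.score(Z(X_test), y_test)
```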
Among the participating systems, the top-ranked team BIBREF3 obtained the highest F1-score of 82.9%. Experiments and Discussions ::: Results and Comparisons This section describes the results as three different cases. Cases 1 and 2 provide baseline approaches to compare with the proposed RKS approach described in Case 3. Table TABREF19 gives the results of the top 5 teams in sub-task A; the team ranked first achieved a maximum F1-score of 82.9%. Experiments and Discussions ::: Results and Comparisons ::: Case 1: Embeddings approach In this work, we selected word vectors generated by the Google Universal Sentence Encoder model, fastText, and DMD-based features. Classification with the selected features is performed using machine learning algorithms such as Random Forest (RF), Decision Tree (DT), Naive Bayes (NB), Support Vector Machine (SVM) with linear and RBF kernels, Logistic Regression, and Random Kitchen Sinks. The evaluation measures used are accuracy (Acc.), precision (Prec), recall, and F1-score (F1). Table TABREF21 shows the classification results obtained with classical machine learning algorithms using the Google Universal Sentence Encoder features. It can be observed that the linear SVM classifier and logistic regression give the highest accuracies, 82.44% and 82.56% respectively. Table TABREF22 shows the classification results obtained with classical machine learning algorithms using the features generated by the fastText model. For the fastText model as well, the linear SVM and logistic regression models give the highest accuracy of 81.16%. Experiments and Discussions ::: Results and Comparisons ::: Case 2: DMD approach In order to provide a comparison, we explore DMD-based features. Table TABREF24 shows the results obtained for standard DMD and HODMD-based features; the HODMD orders used in the present work are 2 and 3. Classification is performed using a linear-kernel SVM with the control parameter set to 1000, which proved the most suitable among the values we tried (0.1, 1, 100, 500, and 1000). Figure FIGREF25 shows the plot of control parameter versus accuracy that helped to fix this parameter value. Experiments and Discussions ::: Results and Comparisons ::: Case 3: RKS approach The RKS approach has been used for NLP tasks in previous work [29,30,23]. In this work, we use RKS to improve the evaluation scores as discussed previously. The RKS approach explicitly maps the embedding vectors to a dimension where the data becomes linearly separable, and in that space regularized least-squares classification (RLSC) is performed. The implementation of RKS is taken from BIBREF29, BIBREF30. The feature vectors from the Google Universal Sentence Encoder and fastText are explicitly mapped using RKS, and the results are tabulated in Tables TABREF27 and TABREF28. Table TABREF27 shows the classification report of the proposed RKS method taking the 512-dimensional vectors generated by the Google Universal Sentence Encoder model as features. For this work, these vectors are explicitly mapped to dimensions 100, 200, 500, and 1000 using RKS. The maximum accuracy obtained is 90.58%, for the highest dimension, 1000. Table TABREF28 shows the classification report of the proposed RKS method taking the word vectors generated by the fastText model as features. For this model too, the features are mapped to dimensions 100, 200, 500, and 1000. For the fastText model, the proposed method gives a maximum accuracy of 99.53%, a benchmark result compared to the literature.
This result shows the discriminative capability of the chosen features: when mapped to higher dimensions, they become linearly separable. From Tables TABREF27 and TABREF28 it can be observed that as the mapping dimension increases, the evaluation scores also improve, which shows the effectiveness of the RKS approach in obtaining competitive scores. The capability of the RKS approach can be further explored on larger datasets. Conclusion Offensive language detection is an important task in social media data analysis, and the nature of the content varies widely as it is produced by different people. The current work uses the data provided in SemEval 2019 Sub-task A for offensive language identification. A comparative study is carried out by exploring the effectiveness of Google universal sentence encoder and Fasttext embeddings, Dynamic Mode Decomposition based features, and the RKS-based explicit mapping approach. For the experiments, we used machine learning methods such as SVM with a linear kernel, Random Forest, Logistic Regression, Naive Bayes, and regularized least-squares classification. The measures used for evaluation are accuracy, precision, recall, and f1-score. We observed that the RKS approach improved the results. As future work, the proposed approach can be explored on larger datasets.
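To make the explicit-mapping step at the core of the proposed approach concrete, the following is a minimal sketch of a random-Fourier-feature (RKS) map followed by a regularized least-squares (ridge) classifier. It assumes an RBF kernel and uses generic numpy/scikit-learn components with placeholder dimensions and regularization values; it is illustrative only and not the implementation used for the reported results.

```python
# A minimal sketch of the RKS idea: approximate an RBF kernel with random
# Fourier features, then fit a linear (regularized least-squares) classifier
# in the mapped space. D, gamma and alpha below are illustrative placeholders,
# not values taken from the paper.
import numpy as np
from sklearn.linear_model import RidgeClassifier

def rks_map(X, D=1000, gamma=1.0, seed=0):
    """Map X (n_samples, d) to D random Fourier features approximating an RBF kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    Omega = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))  # random projection directions
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)                    # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ Omega + b)              # explicit feature map Z(x)

# Toy usage with stand-in "sentence embeddings" (e.g. 512-d encoder outputs).
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 512))
y_train = rng.integers(0, 2, size=200)
Z_train = rks_map(X_train, D=1000)
clf = RidgeClassifier(alpha=1.0).fit(Z_train, y_train)           # RLSC-style linear classifier
```

In this construction, increasing the mapping dimension D tightens the kernel approximation, which is consistent with the trend reported above where higher mapping dimensions give better evaluation scores.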
[ "Random Kitchen Sink method uses a kernel function to map data vectors to a space where linear separation is possible.", "explicitly maps data vectors to a space where linear separation is possible, RKS method provides an approximate kernel function via explicit mapping" ]
2,361
qasper_e
en
null
1ef257d8f567afd7d7057d678d14d28941a912b4df8b5775
What other models do they compare to?
Introduction Pre-training of language models has been shown to provide large improvements for a range of language understanding tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The key idea is to train a large generative model on vast corpora and use the resulting representations on tasks for which only limited amounts of labeled data is available. Pre-training of sequence to sequence models has been previously investigated for text classification BIBREF4 but not for text generation. In neural machine translation, there has been work on transferring representations from high-resource language pairs to low-resource settings BIBREF5 . In this paper, we apply pre-trained representations from language models to language generation tasks that can be modeled by sequence to sequence architectures. Previous work on integrating language models with sequence to sequence models focused on the decoder network and added language model representations right before the output of the decoder BIBREF6 . We extend their study by investigating several other strategies such as inputting ELMo-style representations BIBREF0 or fine-tuning the language model (§ SECREF2 ). Our experiments rely on strong transformer-based language models trained on up to six billion tokens (§ SECREF3 ). We present a detailed study of various strategies in different simulated labeled training data scenarios and observe the largest improvements in low-resource settings but gains of over 1 BLEU are still possible when five million sentence-pairs are available. The most successful strategy to integrate pre-trained representations is as input to the encoder network (§ SECREF4 ). Strategies to add representations We consider augmenting a standard sequence to sequence model with pre-trained representations following an ELMo-style regime (§ SECREF2 ) as well as by fine-tuning the language model (§ SECREF3 ). ELMo augmentation The ELMo approach of BIBREF0 forms contextualized word embeddings based on language model representations without adjusting the actual language model parameters. Specifically, the ELMo module contains a set of parameters INLINEFORM0 to form a linear combination of the INLINEFORM1 layers of the language model: ELMo = INLINEFORM2 where INLINEFORM3 is a learned scalar, INLINEFORM4 is a constant to normalize the INLINEFORM5 to sum to one and INLINEFORM6 is the output of the INLINEFORM7 -th language model layer; the module also considers the input word embeddings of the language model. We also apply layer normalization BIBREF7 to each INLINEFORM8 before computing ELMo vectors. We experiment with an ELMo module to input contextualized embeddings either to the encoder () or the decoder (). This provides word representations specific to the current input sentence and these representations have been trained on much more data than is available for the text generation task. Fine-tuning approach Fine-tuning the pre-trained representations adjusts the language model parameters by the learning signal of the end-task BIBREF1 , BIBREF3 . We replace learned input word embeddings in the encoder network with the output of the language model (). Specifically, we use the language model representation of the layer before the softmax and feed it to the encoder. We also add dropout to the language model output. Tuning separate learning rates for the language model and the sequence to sequence model may lead to better performance but we leave this to future work. 
However, we do tune the number of encoder blocks INLINEFORM0 as we found this important to obtain good accuracy for this setting. We apply the same strategy to the decoder: we input language model representations to the decoder network and fine-tune the language model when training the sequence to sequence model (). Datasets We train language models on two languages: One model is estimated on the German newscrawl distributed by WMT'18 comprising 260M sentences or 6B tokens. Another model is trained on the English newscrawl data comprising 193M sentences or 5B tokens. We learn a joint Byte-Pair-Encoding (BPE; Sennrich et al., 2016) vocabulary of 37K types on the German and English newscrawl and train the language models with this vocabulary. We consider two benchmarks: Most experiments are run on the WMT'18 English-German (en-de) news translation task and we validate our findings on the WMT'18 English-Turkish (en-tr) news task. For WMT'18 English-German, the training corpus consists of all available bitext excluding the ParaCrawl corpus and we remove sentences longer than 250 tokens as well as sentence-pairs with a source/target length ratio exceeding 1.5. This results in 5.18M sentence pairs. We tokenize all data with the Moses tokenizer BIBREF8 and apply the BPE vocabulary learned on the monolingual corpora. For WMT'18 English-Turkish, we use all of the available bitext comprising 208K sentence-pairs without any filtering. We develop on newstest2017 and test on newstest2018. For en-tr we only experiment with adding representations to the encoder and therefore apply the language model vocabulary to the source side. For the target vocabulary we learn a BPE code with 32K merge operations on the Turkish side of the bitext. Both datasets are evaluated in terms of case-sensitive de-tokenized BLEU BIBREF9 , BIBREF10 . We consider the abstractive document summarization task comprising over 280K news articles paired with multi-sentence summaries. is a widely used dataset for abstractive text summarization. Following BIBREF11 , we report results on the non-anonymized version of rather than the entity-anonymized version BIBREF12 , BIBREF13 because the language model was trained on full text. Articles are truncated to 400 tokens BIBREF11 and we use a BPE vocabulary of 32K types BIBREF14 . We evaluate in terms of F1-Rouge, that is Rouge-1, Rouge-2 and Rouge-L BIBREF15 . Language model pre-training We consider two types of architectures: a bi-directional language model to augment the sequence to sequence encoder and a uni-directional model to augment the decoder. Both use self-attention BIBREF16 and the uni-directional model contains INLINEFORM0 transformer blocks, followed by a word classifier to predict the next word on the right. The bi-directional model solves a cloze-style token prediction task at training time BIBREF17 . The model consists of two towers, the forward tower operates left-to-right and the tower operating right-to-left as backward tower; each tower contains INLINEFORM1 transformer blocks. The forward and backward representations are combined via a self-attention module and the output of this module is used to predict the token at position INLINEFORM2 . The model has access to the entire input surrounding the current target token. Models use the standard settings for the Big Transformer BIBREF16 . The bi-directional model contains 353M parameters and the uni-directional model 190M parameters. 
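As a rough illustration of the ELMo-style combination described in the previous section (a softmax-normalized, learned weighting of the language model's layer outputs, scaled by a learned scalar), the snippet below sketches such a mixing module in PyTorch. Layer count and dimensions are arbitrary stand-ins, and the per-layer layer normalization mentioned earlier is omitted for brevity; this is not the module used in the reported experiments.

```python
# A generic sketch of an ELMo-style "scalar mix": softmax-normalized weights
# over the LM layers plus a learned global scale. Layer count and sizes are
# arbitrary placeholders for illustration.
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))   # s_j before the softmax
        self.gamma = nn.Parameter(torch.ones(1))                # learned scalar

    def forward(self, layer_outputs):
        # layer_outputs: list of (batch, seq_len, dim) tensors, one per LM layer
        norm_weights = torch.softmax(self.weights, dim=0)
        mixed = sum(w * h for w, h in zip(norm_weights, layer_outputs))
        return self.gamma * mixed

# Toy usage: 7 "layers" (input embeddings + 6 transformer blocks) of a 512-d LM.
layers = [torch.randn(2, 10, 512) for _ in range(7)]
elmo_features = ScalarMix(num_layers=7)(layers)   # (2, 10, 512), fed to the encoder or decoder
```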
Both models were trained for 1M steps using Nesterov's accelerated gradient BIBREF18 with momentum INLINEFORM3 following BIBREF19. The learning rate is linearly warmed up from INLINEFORM4 to 1 for 16K steps and then annealed using a cosine learning rate schedule with a single phase to 0.0001 BIBREF20. We train on 32 Nvidia V100 SXM2 GPUs and use the NCCL2 library as well as the torch distributed package for inter-GPU communication. Training relies on 16-bit floating point operations BIBREF21 and it took six days for the bi-directional model and four days for the uni-directional model. Sequence to sequence model We use the transformer implementation of the fairseq toolkit BIBREF22. The WMT en-de and en-tr experiments are based on the Big Transformer sequence to sequence architecture with 6 blocks in the encoder and decoder. For abstractive summarization we use a base transformer model BIBREF16. We tune dropout values between 0.1 and 0.4 on the validation set. Models are optimized with Adam BIBREF23 using INLINEFORM0, INLINEFORM1, and INLINEFORM2 and we use the same learning rate schedule as BIBREF16; we perform 10K-200K updates depending on bitext size. All models use label smoothing with a uniform prior distribution over the vocabulary INLINEFORM3 BIBREF24, BIBREF25. We run experiments on 8 GPUs and generate translations with a beam of size 5. Machine translation We first present a comparison of the various strategies in different simulated parallel corpus size settings. For each experiment, we tune the dropout applied to the language model representations, and we reduce the number of optimizer steps for smaller bitext setups as models converge faster; all other hyper-parameters are equal between setups. Our baseline is a Big Transformer model and we also consider a variant where we share token embeddings between the encoder and decoder (; Inan et al., 2016; Press & Wolf, 2016). Figure FIGREF10 shows results averaged over six test sets relative to the baseline which does not share source and target embeddings (Appendix SECREF6 shows a detailed breakdown). performs very well with little labeled data but the gains erode to practically zero in large bitext settings. Pre-trained language model representations are most effective in low bitext setups. The best performing strategy is ELMo embeddings input to the encoder (). This improves the baseline by 3.8 BLEU in the 160K bitext setting and it still improves the 5.2M setting by over 1 BLEU. We further improve by sharing learned word representations in the decoder by tying input and output embeddings (). This configuration performs even better than with a gain of 5.3 BLEU in the 160K setup. Sharing decoder embeddings is equally applicable to . Language model representations are much less effective in the decoder: improves the 160K bitext setup but yields no improvements thereafter and performs even worse. We conjecture that pre-trained representations give much easier wins in the encoder. Table TABREF14 shows additional results on newstest2018. Pre-trained representations mostly impact the training time of the sequence to sequence model (see Appendix SECREF7): slows throughput during training by about 5.3x and is even slower because of the need to backpropagate through the LM for fine-tuning (9.2x). However, inference is only 12-14% slower than the baseline when adding pre-trained embeddings to the encoder (, ). This is because the LM computation can be parallelized for all input tokens.
Inference is much slower when adding representations to the decoder because the LM needs to be invoked repeatedly. Our current implementation does not cache LM operations for the previous state and can be made much faster. The baseline uses a BPE vocabulary estimated on the language model corpora (§ SECREF3). Appendix SECREF6 shows that this vocabulary actually leads to slightly better performance than a joint BPE code learned on the bitext as is usual. Next, we validate our findings on the WMT'18 English-Turkish task for which the bitext is truly limited (208K sentence-pairs). We use the language model vocabulary for the English side of the bitext and a BPE vocabulary learned on the Turkish side. Table TABREF15 shows that ELMo embeddings for the encoder improve English-Turkish translation.
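To give a concrete, if simplified, picture of the best-performing strategy, feeding frozen language-model states to the encoder in place of learned source embeddings, the following PyTorch sketch wires a small stand-in "language model" into a generic transformer encoder. All module choices and sizes are invented for illustration and none of this corresponds to the fairseq models used in the paper.

```python
# A toy illustration of the strategy of replacing learned source embeddings
# with contextual states from a frozen language model. Both modules are
# small stand-ins, not the paper's pre-trained transformer LMs.
import torch
import torch.nn as nn

lm_dim, enc_dim, vocab = 512, 512, 1000

# Stand-in "pre-trained LM": an embedding plus a bidirectional GRU, kept frozen.
lm_emb = nn.Embedding(vocab, lm_dim)
lm_rnn = nn.GRU(lm_dim, lm_dim // 2, batch_first=True, bidirectional=True)
for p in list(lm_emb.parameters()) + list(lm_rnn.parameters()):
    p.requires_grad = False                        # ELMo-style: LM weights stay fixed

proj = nn.Linear(lm_dim, enc_dim)                  # trainable projection into the encoder space
enc_layer = nn.TransformerEncoderLayer(enc_dim, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)

tokens = torch.randint(0, vocab, (4, 20))          # a toy source batch (batch, seq_len)
with torch.no_grad():
    lm_states, _ = lm_rnn(lm_emb(tokens))          # contextual LM representations
encoder_out = encoder(proj(lm_states))             # used in place of learned source embeddings
```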
[ "BIBREF11 , BIBREF26 " ]
1,912
qasper_e
en
null
5a66bf2184751add62aab933cf9076dc61faf39f716d2e72
What are their results on both datasets?
Introduction Writing errors can occur in many different forms – from relatively simple punctuation and determiner errors, to mistakes including word tense and form, incorrect collocations and erroneous idioms. Automatically identifying all of these errors is a challenging task, especially as the amount of available annotated data is very limited. Rei2016 showed that while some error detection algorithms perform better than others, it is additional training data that has the biggest impact on improving performance. Being able to generate realistic artificial data would allow for any grammatically correct text to be transformed into annotated examples containing writing errors, producing large amounts of additional training examples. Supervised error generation systems would also provide an efficient method for anonymising the source corpus – error statistics from a private corpus can be aggregated and applied to a different target text, obscuring sensitive information in the original examination scripts. However, the task of creating incorrect data is somewhat more difficult than might initially appear – naive methods for error generation can create data that does not resemble natural errors, thereby making downstream systems learn misleading or uninformative patterns. Previous work on artificial error generation (AEG) has focused on specific error types, such as prepositions and determiners BIBREF0 , BIBREF1 , or noun number errors BIBREF2 . Felice2014a investigated the use of linguistic information when generating artificial data for error correction, but also restricting the approach to only five error types. There has been very limited research on generating artificial data for all types, which is important for general-purpose error detection systems. For example, the error types investigated by Felice2014a cover only 35.74% of all errors present in the CoNLL 2014 training dataset, providing no additional information for the majority of errors. In this paper, we investigate two supervised approaches for generating all types of artificial errors. We propose a framework for generating errors based on statistical machine translation (SMT), training a model to translate from correct into incorrect sentences. In addition, we describe a method for learning error patterns from an annotated corpus and transplanting them into error-free text. We evaluate the effect of introducing artificial data on two error detection benchmarks. Our results show that each method provides significant improvements over using only the available training set, and a combination of both gives an absolute improvement of 4.3% in INLINEFORM0 , without requiring any additional annotated data. Error Generation Methods We investigate two alternative methods for AEG. The models receive grammatically correct text as input and modify certain tokens to produce incorrect sequences. The alternative versions of each sentence are aligned using Levenshtein distance, allowing us to identify specific words that need to be marked as errors. While these alignments are not always perfect, we found them to be sufficient for practical purposes, since alternative alignments of similar sentences often result in the same binary labeling. Future work could explore more advanced alignment methods, such as proposed by felice-bryant-briscoe. In Section SECREF4 , this automatically labeled data is then used for training error detection models. 
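The labelling step described above lends itself to a short illustration. The sketch below uses Python's difflib sequence matching as a stand-in for the Levenshtein alignment: tokens of the error-containing sentence that cannot be matched against the correct sentence are labelled as errors. This is an approximation for exposition rather than the alignment code used in this work.

```python
# A rough stand-in for the alignment step: tokens of the corrupted sentence
# that are not matched against the correct sentence are labelled as errors.
# difflib approximates the Levenshtein-style alignment used in the paper.
from difflib import SequenceMatcher

def token_error_labels(correct: str, incorrect: str):
    c_toks, i_toks = correct.split(), incorrect.split()
    labels = [1] * len(i_toks)                    # assume incorrect until matched
    matcher = SequenceMatcher(a=c_toks, b=i_toks)
    for block in matcher.get_matching_blocks():
        for j in range(block.b, block.b + block.size):
            labels[j] = 0                         # matched tokens are labelled correct
    return list(zip(i_toks, labels))

print(token_error_labels("We went shopping on Saturday", "We went shop on Saturday"))
# [('We', 0), ('went', 0), ('shop', 1), ('on', 0), ('Saturday', 0)]
```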
Machine Translation We treat AEG as a translation task – given a correct sentence as input, the system would learn to translate it to contain likely errors, based on a training corpus of parallel data. Existing SMT approaches are already optimised for identifying context patterns that correspond to specific output sequences, which is also required for generating human-like errors. The reverse of this idea, translating from incorrect to correct sentences, has been shown to work well for error correction tasks BIBREF2 , BIBREF3 , and round-trip translation has also been shown to be promising for correcting grammatical errors BIBREF4 . Following previous work BIBREF2 , BIBREF5 , we build a phrase-based SMT error generation system. During training, error-corrected sentences in the training data are treated as the source, and the original sentences written by language learners as the target. Pialign BIBREF6 is used to create a phrase translation table directly from model probabilities. In addition to default features, we add character-level Levenshtein distance to each mapping in the phrase table, as proposed by Felice:2014-CoNLL. Decoding is performed using Moses BIBREF7 and the language model used during decoding is built from the original erroneous sentences in the learner corpus. The IRSTLM Toolkit BIBREF8 is used for building a 5-gram language model with modified Kneser-Ney smoothing BIBREF9 . Pattern Extraction We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors. The original and corrected sentences in the corpus are aligned and used to identify short transformation patterns in the form of (incorrect phrase, correct phrase). The length of each pattern is the affected phrase, plus up to one token of context on both sides. If a word form changes between the incorrect and correct text, it is fully saved in the pattern, otherwise the POS tags are used for matching. For example, the original sentence `We went shop on Saturday' and the corrected version `We went shopping on Saturday' would produce the following pattern: (VVD shop_VV0 II, VVD shopping_VVG II) After collecting statistics from the background corpus, errors can be inserted into error-free text. The learned patterns are now reversed, looking for the correct side of the tuple in the input sentence. We only use patterns with frequency INLINEFORM0 , which yields a total of 35,625 patterns from our training data. For each input sentence, we first decide how many errors will be generated (using probabilities from the background corpus) and attempt to create them by sampling from the collection of applicable patterns. This process is repeated until all the required errors have been generated or the sentence is exhausted. During generation, we try to balance the distribution of error types as well as keeping the same proportion of incorrect and correct sentences as in the background corpus BIBREF10 . The required POS tags were generated with RASP BIBREF11 , using the CLAWS2 tagset. Error Detection Model We construct a neural sequence labeling model for error detection, following the previous work BIBREF12 , BIBREF13 . 
The model receives a sequence of tokens as input and outputs a prediction for each position, indicating whether the token is correct or incorrect in the current context. The tokens are first mapped to a distributed vector space, resulting in a sequence of word embeddings. Next, the embeddings are given as input to a bidirectional LSTM BIBREF14 , in order to create context-dependent representations for every token. The hidden states from forward- and backward-LSTMs are concatenated for each word position, resulting in representations that are conditioned on the whole sequence. This concatenated vector is then passed through an additional feedforward layer, and a softmax over the two possible labels (correct and incorrect) is used to output a probability distribution for each token. The model is optimised by minimising categorical cross-entropy with respect to the correct labels. We use AdaDelta BIBREF15 for calculating an adaptive learning rate during training, which accounts for a higher baseline performance compared to previous results. Evaluation We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data. We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . Each artificial error generation system was used to generate 3 different versions of the artificial data, which were then combined with the original annotated dataset and used for training an error detection system. Table TABREF1 contains example sentences from the error generation systems, highlighting each of the edits that are marked as errors. The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision – this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset. The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. 
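As an aside, the weighted measure used above, which places twice as much weight on precision as on recall, can be written out explicitly. The helper below is a generic sketch of F0.5 computed from aggregate counts, not the official scorer used for these experiments.

```python
# Generic F_{0.5} from true-positive, false-positive and false-negative counts;
# an illustrative helper, not the shared-task scorer used in the paper.
def f_beta(tp: int, fp: int, fn: int, beta: float = 0.5) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(tp=30, fp=10, fn=40))   # precision 0.75, recall ~0.43 -> F0.5 ~ 0.65
```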
Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection. We used the Approximate Randomisation Test BIBREF17 , BIBREF18 to calculate statistical significance and found that the improvement for each of the systems using artificial data was significant over using only manual annotation. In addition, the final combination system is also significantly better compared to the Felice2014a system, on all three datasets. While Rei2016 also report separate experiments that achieve even higher performance, these models were trained on a considerably larger proprietary corpus. In this paper we compare error detection frameworks trained on the same publicly available FCE dataset, thereby removing the confounding factor of dataset size and only focusing on the model architectures. The error generation methods can generate alternative versions of the same input text – the pattern-based method randomly samples the error locations, and the SMT system can provide an n-best list of alternative translations. Therefore, we also investigated the combination of multiple error-generated versions of the input files when training error detection models. Figure FIGREF6 shows the INLINEFORM0 score on the development set, as the training data is increased by using more translations from the n-best list of the SMT system. These results reveal that allowing the model to see multiple alternative versions of the same file gives a distinct improvement – showing the model both correct and incorrect variations of the same sentences likely assists in learning a discriminative model. Related Work Our work builds on prior research into AEG. Brockett2006 constructed regular expressions for transforming correct sentences to contain noun number errors. Rozovskaya2010a learned confusion sets from an annotated corpus in order to generate preposition errors. Foster2009 devised a tool for generating errors for different types using patterns provided by the user or collected automatically from an annotated corpus. However, their method uses a limited number of edit operations and is thus unable to generate complex errors. Cahill2013 compared different training methodologies and showed that artificial errors helped correct prepositions. Felice2014a learned error type distributions for generating five types of errors, and the system in Section SECREF3 is an extension of this model. While previous work focused on generating a specific subset of error types, we explored two holistic approaches to AEG and showed that they are able to significantly improve error detection performance. Conclusion This paper investigated two AEG methods, in order to create additional training data for error detection. First, we explored a method using textual patterns learned from an annotated corpus, which are used for inserting errors into correct input text. In addition, we proposed formulating error generation as an MT framework, learning to translate from grammatically correct to incorrect sentences. The addition of artificial data to the training process was evaluated on three error detection annotations, using the FCE and CoNLL 2014 datasets. Making use of artificial data provided improvements for all data generation methods. 
By relaxing the type restrictions and generating all types of errors, our pattern-based method consistently outperformed the system by Felice2014a. The combination of the pattern-based method with the machine translation approach gave further substantial improvements and the best performance on all datasets.
[ "Combining pattern based and Machine translation approaches gave the best overall F0.5 scores. It was 49.11 for FCE dataset , 21.87 for the first annotation of CoNLL-14, and 30.13 for the second annotation of CoNLL-14. " ]
2,164
qasper_e
en
null
59589a5aa08f01c9772844cd16cf3a57aaf9567123ba2154
What other tasks do they test their method on?
Introduction We understand from Zipf's Law that in any natural language corpus a majority of the vocabulary word types will either be absent or occur in low frequency. Estimating the statistical properties of these rare word types is naturally a difficult task. This is analogous to the curse of dimensionality when we deal with sequences of tokens - most sequences will occur only once in the training data. Neural network architectures overcome this problem by defining non-linear compositional models over vector space representations of tokens and hence assign non-zero probability even to sequences not seen during training BIBREF0 , BIBREF1 . In this work, we explore a similar approach to learning distributed representations of social media posts by composing them from their constituent characters, with the goal of generalizing to out-of-vocabulary words as well as sequences at test time. Traditional Neural Network Language Models (NNLMs) treat words as the basic units of language and assign independent vectors to each word type. To constrain memory requirements, the vocabulary size is fixed before-hand; therefore, rare and out-of-vocabulary words are all grouped together under a common type `UNKNOWN'. This choice is motivated by the assumption of arbitrariness in language, which means that surface forms of words have little to do with their semantic roles. Recently, BIBREF2 challenge this assumption and present a bidirectional Long Short Term Memory (LSTM) BIBREF3 for composing word vectors from their constituent characters which can memorize the arbitrary aspects of word orthography as well as generalize to rare and out-of-vocabulary words. Encouraged by their findings, we extend their approach to a much larger unicode character set, and model long sequences of text as functions of their constituent characters (including white-space). We focus on social media posts from the website Twitter, which are an excellent testing ground for character based models due to the noisy nature of text. Heavy use of slang and abundant misspellings means that there are many orthographically and semantically similar tokens, and special characters such as emojis are also immensely popular and carry useful semantic information. In our moderately sized training dataset of 2 million tweets, there were about 0.92 million unique word types. It would be expensive to capture all these phenomena in a word based model in terms of both the memory requirement (for the increased vocabulary) and the amount of training data required for effective learning. Additional benefits of the character based approach include language independence of the methods, and no requirement of NLP preprocessing such as word-segmentation. A crucial step in learning good text representations is to choose an appropriate objective function to optimize. Unsupervised approaches attempt to reconstruct the original text from its latent representation BIBREF4 , BIBREF0 . Social media posts however, come with their own form of supervision annotated by millions of users, in the form of hashtags which link posts about the same topic together. A natural assumption is that the posts with the same hashtags should have embeddings which are close to each other. Hence, we formulate our training objective to maximize cross-entropy loss at the task of predicting hashtags for a post from its latent representation. We propose a Bi-directional Gated Recurrent Unit (Bi-GRU) BIBREF5 neural network for learning tweet representations. 
Treating white-space as a special character itself, the model does a forward and backward pass over the entire sequence, and the final GRU states are linearly combined to get the tweet embedding. Posterior probabilities over hashtags are computed by projecting this embedding to a softmax output layer. Compared to a word-level baseline this model shows improved performance at predicting hashtags for a held-out set of posts. Inspired by recent work in learning vector space text representations, we name our model tweet2vec. Related Work Using neural networks to learn distributed representations of words dates back to BIBREF0 . More recently, BIBREF4 released word2vec - a collection of word vectors trained using a recurrent neural network. These word vectors are in widespread use in the NLP community, and the original work has since been extended to sentences BIBREF1 , documents and paragraphs BIBREF6 , topics BIBREF7 and queries BIBREF8 . All these methods require storing an extremely large table of vectors for all word types and cannot be easily generalized to unseen words at test time BIBREF2 . They also require preprocessing to find word boundaries which is non-trivial for a social network domain like Twitter. In BIBREF2 , the authors present a compositional character model based on bidirectional LSTMs as a potential solution to these problems. A major benefit of this approach is that large word lookup tables can be compacted into character lookup tables and the compositional model scales to large data sets better than other state-of-the-art approaches. While BIBREF2 generate word embeddings from character representations, we propose to generate vector representations of entire tweets from characters in our tweet2vec model. Our work adds to the growing body of work showing the applicability of character models for a variety of NLP tasks such as Named Entity Recognition BIBREF9 , POS tagging BIBREF10 , text classification BIBREF11 and language modeling BIBREF12 , BIBREF13 . Previously, BIBREF14 dealt with the problem of estimating rare word representations by building them from their constituent morphemes. While they show improved performance over word-based models, their approach requires a morpheme parser for preprocessing which may not perform well on noisy text like Twitter. Also the space of all morphemes, though smaller than the space of all words, is still large enough that modelling all morphemes is impractical. Hashtag prediction for social media has been addressed earlier, for example in BIBREF15 , BIBREF16 . BIBREF15 also use a neural architecture, but compose text embeddings from a lookup table of words. They also show that the learned embeddings can generalize to an unrelated task of document recommendation, justifying the use of hashtags as supervision for learning text representations. Tweet2Vec Bi-GRU Encoder: Figure 1 shows our model for encoding tweets. It uses a similar structure to the C2W model in BIBREF2 , with LSTM units replaced with GRU units. The input to the network is defined by an alphabet of characters $C$ (this may include the entire unicode character set). The input tweet is broken into a stream of characters $c_1, c_2, ... c_m$ each of which is represented by a 1-by- $|C|$ encoding. These one-hot vectors are then projected to a character space by multiplying with the matrix $P_C \in \mathbb {R}^{|C| \times d_c}$ , where $d_c$ is the dimension of the character vector space. Let $x_1, x_2, ... 
x_m$ be the sequence of character vectors for the input tweet after the lookup. The encoder consists of a forward GRU and a backward GRU. Both have the same architecture, except that the backward GRU processes the sequence in reverse order. Each GRU unit processes these vectors sequentially and, starting with the initial state $h_0$, computes the sequence $h_1, h_2, ... h_m$ as follows: $$\begin{aligned} r_t &= \sigma (W_r x_t + U_r h_{t-1} + b_r), \\ z_t &= \sigma (W_z x_t + U_z h_{t-1} + b_z), \\ \tilde{h}_t &= \tanh (W_h x_t + U_h (r_t \odot h_{t-1}) + b_h), \\ h_t &= (1-z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t. \end{aligned}$$ Here $r_t$, $z_t$ are called the reset and update gates respectively, and $\tilde{h}_t$ is the candidate output state which is converted to the actual output state $h_t$. $W_r, W_z, W_h$ are $d_h \times d_c$ matrices and $U_r, U_z, U_h$ are $d_h \times d_h$ matrices, where $d_h$ is the hidden-state dimension of the GRU. The final state $h_m^f$ from the forward GRU and $h_0^b$ from the backward GRU are combined using a fully-connected layer to give the final tweet embedding $e_t$: $$e_t = W^f h_m^f + W^b h_0^b + b$$ (Eq. 3) Here $W^f, W^b$ are $d_t \times d_h$ matrices and $b$ is a $d_t \times 1$ bias term, where $d_t$ is the dimension of the final tweet embedding. In our experiments we set $d_t=d_h$. All parameters are learned using gradient descent. Softmax: Finally, the tweet embedding is passed through a linear layer whose output is the same size as the number of hashtags $L$ in the data set. We use a softmax layer to compute the posterior hashtag probabilities: $$P(y=j \mid e) = \frac{\exp (w_j^T e + b_j)}{\sum _{i=1}^L \exp (w_i^T e + b_i)}.$$ (Eq. 4) Objective Function: We optimize the categorical cross-entropy loss between predicted and true hashtags: $$J = \frac{1}{B} \sum _{i=1}^{B} \sum _{j=1}^{L} -t_{i,j}\log (p_{i,j}) + \lambda \Vert \Theta \Vert ^2.$$ (Eq. 5) Here $B$ is the batch size, $L$ is the number of classes, $p_{i,j}$ is the predicted probability that the $i$-th tweet has hashtag $j$, and $t_{i,j} \in \lbrace 0,1\rbrace$ denotes the ground truth of whether the $j$-th hashtag is in the $i$-th tweet. We use L2-regularization weighted by $\lambda$. Word Level Baseline Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer could be used, but for a fair comparison we wanted to keep language-specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with words instead of characters as input. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token. Data Our dataset consists of a large collection of global posts from Twitter between June 1, 2013 and June 5, 2013. Only English-language posts (as detected by the lang field in the Twitter API) and posts with at least one hashtag are retained. We removed infrequent hashtags ($<500$ posts) since they do not have enough data for good generalization. We also removed very frequent tags ($>19K$ posts), which were almost always from automatically generated posts (e.g., #androidgame) that are trivial to predict. The final dataset contains 2 million tweets for training, 10K for validation and 50K for testing, with a total of 2039 distinct hashtags.
We use simple regex to preprocess the post text and remove hashtags (since these are to be predicted) and HTML tags, and replace user-names and URLs with special tokens. We also removed retweets and convert the text to lower-case. Implementation Details Word vectors and character vectors are both set to size $d_L=150$ for their respective models. There were 2829 unique characters in the training set and we model each of these independently in a character look-up table. Embedding sizes were chosen such that each model had roughly the same number of parameters (Table 2 ). Training is performed using mini-batch gradient descent with Nesterov's momentum. We use a batch size $B=64$ , initial learning rate $\eta _0=0.01$ and momentum parameter $\mu _0=0.9$ . L2-regularization with $\lambda =0.001$ was applied to all models. Initial weights were drawn from 0-mean gaussians with $\sigma =0.1$ and initial biases were set to 0. The hyperparameters were tuned one at a time keeping others fixed, and values with the lowest validation cost were chosen. The resultant combination was used to train the models until performance on validation set stopped increasing. During training, the learning rate is halved everytime the validation set precision increases by less than 0.01 % from one epoch to the next. The models converge in about 20 epochs. Code for training both the models is publicly available on github. Results We test the character and word-level variants by predicting hashtags for a held-out test set of posts. Since there may be more than one correct hashtag per post, we generate a ranked list of tags for each post from the output posteriors, and report average precision@1, recall@10 and mean rank of the correct hashtags. These are listed in Table 3 . To see the performance of each model on posts containing rare words (RW) and frequent words (FW) we selected two test sets each containing 2,000 posts. We populated these sets with posts which had the maximum and minimum number of out-of-vocabulary words respectively, where vocabulary is defined by the 20K most frequent words. Overall, tweet2vec outperforms the word model, doing significantly better on RW test set and comparably on FW set. This improved performance comes at the cost of increased training time (see Table 2 ), since moving from words to characters results in longer input sequences to the GRU. We also study the effect of model size on the performance of these models. For the word model we set vocabulary size $V$ to 8K, 15K and 20K respectively. For tweet2vec we set the GRU hidden state size to 300, 400 and 500 respectively. Figure 2 shows precision 1 of the two models as the number of parameters is increased, for each test set described above. There is not much variation in the performance, and moreover tweet2vec always outperforms the word based model for the same number of parameters. Table 4 compares the models as complexity of the task is increased. We created 3 datasets (small, medium and large) with an increasing number of hashtags to be predicted. This was done by varying the lower threshold of the minimum number of tags per post for it to be included in the dataset. Once again we observe that tweet2vec outperforms its word-based counterpart for each of the three settings. Finally, table 1 shows some predictions from the word level model and tweet2vec. 
We selected these to highlight some strengths of the character based approach - it is robust to word segmentation errors and spelling mistakes, effectively interprets emojis and other special characters to make predictions, and also performs comparably to the word-based approach for in-vocabulary tokens. Conclusion We have presented tweet2vec - a character level encoder for social media posts trained using supervision from associated hashtags. Our result shows that tweet2vec outperforms the word based approach, doing significantly better when the input post contains many rare words. We have focused only on English language posts, but the character model requires no language specific preprocessing and can be extended to other languages. For future work, one natural extension would be to use a character-level decoder for predicting the hashtags. This will allow generation of hashtags not seen in the training dataset. Also, it will be interesting to see how our tweet2vec embeddings can be used in domains where there is a need for semantic understanding of social media, such as tracking infectious diseases BIBREF17 . Hence, we provide an off-the-shelf encoder trained on medium dataset described above to compute vector-space representations of tweets along with our code on github. Acknowledgments We would like to thank Alex Smola, Yun Fu, Hsiao-Yu Fish Tung, Ruslan Salakhutdinov, and Barnabas Poczos for useful discussions. We would also like to thank Juergen Pfeffer for providing access to the Twitter data, and the reviewers for their comments.
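For readers who want a concrete picture of the encoder described earlier, the snippet below is a compact PyTorch sketch of a character-level bidirectional GRU whose final forward and backward states are linearly combined into a tweet embedding and projected to hashtag logits. All sizes are placeholders and the code is a simplified re-implementation for illustration, not the released tweet2vec code.

```python
# A simplified sketch of the character-level Bi-GRU encoder: character
# embeddings -> forward/backward GRUs -> linear combination of the two final
# states -> logits over hashtags for a cross-entropy loss. Sizes are placeholders.
import torch
import torch.nn as nn

class CharBiGRUEncoder(nn.Module):
    def __init__(self, n_chars=1000, d_c=150, d_h=500, n_tags=2039):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_c)
        self.bigru = nn.GRU(d_c, d_h, batch_first=True, bidirectional=True)
        self.combine = nn.Linear(2 * d_h, d_h)     # plays the role of W^f, W^b and the bias
        self.out = nn.Linear(d_h, n_tags)

    def forward(self, char_ids):                   # char_ids: (batch, seq_len)
        _, h_n = self.bigru(self.char_emb(char_ids))
        h_fwd, h_bwd = h_n[0], h_n[1]              # final forward / backward states
        tweet_emb = self.combine(torch.cat([h_fwd, h_bwd], dim=-1))
        return self.out(tweet_emb)                 # hashtag logits

logits = CharBiGRUEncoder()(torch.randint(0, 1000, (8, 120)))   # shape (8, 2039)
```

Here a single linear layer over the concatenated final states is used as an equivalent formulation of combining the two states with separate weight matrices plus a bias.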
[ "None" ]
2,473
qasper_e
en
null
31a683b8dfd8036c61d738d6bd75c5c6d7e627ba61634b20
Who were the experts used for annotation?
Introduction Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents. With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. [1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy. Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain. Related Work Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. 
Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions. Our work is also related to reading comprehension in the open domain, which is frequently based upon Wikipedia passages BIBREF16, BIBREF17, BIBREF15, BIBREF30 and news articles BIBREF20, BIBREF31, BIBREF32. Table.TABREF4 presents the desirable attributes our dataset shares with past approaches. This work is also tied into research in applying NLP approaches to legal documents BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39. While privacy policies have legal implications, their intended audience consists of the general public rather than individuals with legal expertise. This arrangement is problematic because the entities that write privacy policies often have different goals than the audience. feng2015applying, tan-EtAl:2016:P16-1 examine question answering in the insurance domain, another specialized domain similar to privacy, where the intended audience is the general public. Data Collection We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users). Data Collection ::: Crowdsourced Question Elicitation The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document. Instead, crowdworkers are presented with public information about a mobile application available on the Google Play Store including its name, description and navigable screenshots. Figure FIGREF9 shows an example of our user interface. Crowdworkers are asked to imagine they have access to a trusted third-party privacy assistant, to whom they can ask any privacy question about a given mobile application. We use the Amazon Mechanical Turk platform and recruit crowdworkers who have been conferred “master” status and are located within the United States of America. 
Turkers are asked to provide five questions per mobile application, and are paid $2 per assignment, taking ~eight minutes to complete the task. Data Collection ::: Answer Selection To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked. Data Collection ::: Analysis Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts. Table TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document. Data Collection ::: Analysis ::: Categories of Questions Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49: First Party Collection/Use: What, why and how information is collected by the service provider Third Party Sharing/Collection: What, why and how information shared with or collected by third parties Data Security: Protection measures for user information Data Retention: How long user information will be stored User Choice/Control: Control options available to users User Access, Edit and Deletion: If/how users can access, edit or delete information Policy Change: Informing users if policy information has been changed International and Specific Audiences: Practices pertaining to a specific group of users Other: General text, contact information or practices not covered by other categories. For each question, domain experts indicate one or more relevant OPP-115 categories. We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. 
First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant. Data Collection ::: Analysis ::: Answer Validation When do experts disagree? We would like to analyze the reasons for potential disagreement on the annotation task, to ensure disagreements arise due to valid differences in opinion rather than lack of adequate specification in annotation guidelines. It is important to note that the annotators are experts rather than crowdworkers. Accordingly, their judgements can be considered valid, legally-informed opinions even when their perspectives differ. For the sake of this question we randomly sample 100 instances in the test data and analyze them for likely reasons for disagreements. We consider a disagreement to have occurred when more than one expert does not agree with the majority consensus. By disagreement we mean there is no overlap between the text identified as relevant by one expert and another. We find that the annotators agree on the answer for 74% of the questions, even if the supporting evidence they identify is not identical i.e full overlap. They disagree on the remaining 26%. Sources of apparent disagreement correspond to situations when different experts: have differing interpretations of question intent (11%) (for example, when a user asks 'who can contact me through the app', the questions admits multiple interpretations, including seeking information about the features of the app, asking about first party collection/use of data or asking about third party collection/use of data), identify different sources of evidence for questions that ask if a practice is performed or not (4%), have differing interpretations of policy content (3%), identify a partial answer to a question in the privacy policy (2%) (for example, when the user asks `who is allowed to use the app' a majority of our annotators decline to answer, but the remaining annotators highlight partial evidence in the privacy policy which states that children under the age of 13 are not allowed to use the app), and other legitimate sources of disagreement (6%) which include personal subjective views of the annotators (for example, when the user asks `is my DNA information used in any way other than what is specified', some experts consider the boilerplate text of the privacy policy which states that it abides to practices described in the policy document as sufficient evidence to answer this question, whereas others do not). Experimental Setup We evaluate the ability of machine learning methods to identify relevant evidence for questions in the privacy domain. We establish baselines for the subtask of deciding on the answerability (§SECREF33) of a question, as well as the overall task of identifying evidence for questions from policies (§SECREF37). We describe aspects of the question that can render it unanswerable within the privacy domain (§SECREF41). Experimental Setup ::: Answerability Identification Baselines We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline. SVM: We define 3 sets of features to characterize each question. 
The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions. BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128. Experimental Setup ::: Privacy Question Answering Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy. Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference. Experimental Setup ::: Privacy Question Answering ::: Baselines We describe baselines on this task, including a human performance baseline. No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable). Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline. Results and Discussion The results of the answerability baselines are presented in Table TABREF31, and on answer sentence selection in Table TABREF32. We observe that bert exhibits the best performance on a binary answerability identification task. 
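Before continuing with the results, here is one plausible reading of the sentence-level F1 metric and the human baseline defined above, written as a small sketch. Whether overlap is measured over sentence indices or over tokens is not fully specified in the description, so the set-of-sentence-indices representation, and the convention that two abstentions score 1.0, are assumptions made for illustration.

```python
def sentence_f1(predicted, gold):
    """F1 over sets of selected sentence indices.

    An empty set stands for 'unanswerable'; two abstentions are treated
    as a perfect match here, which is an assumption on our part.
    """
    if not predicted and not gold:
        return 1.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def eval_f1(predicted, references):
    """Score one prediction against multiple expert references by taking
    the maximum F1 over the references (one reading of the n-1 protocol)."""
    return max(sentence_f1(predicted, ref) for ref in references)

def human_baseline(references):
    """Each reference is treated as the prediction and scored against the
    remaining n-1 references; the scores are then averaged, as described
    for the human performance baseline."""
    scores = []
    for i, ref in enumerate(references):
        rest = references[:i] + references[i + 1:]
        scores.append(eval_f1(ref, rest))
    return sum(scores) / len(scores)

print(human_baseline([{3, 4}, {4}, {3, 4, 5}]))
```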
However, most baselines considerably exceed the performance of a majority-class baseline. This suggests considerable information in the question, indicating it's possible answerability within this domain. Table.TABREF32 describes the performance of our baselines on the answer sentence selection task. The No-answer (NA) baseline performs at 28 F1, providing a lower bound on performance at this task. We observe that our best-performing baseline, Bert + Unanswerable achieves an F1 of 39.8. This suggest that bert is capable of making some progress towards answering questions in this difficult domain, while still leaving considerable headroom for improvement to reach human performance. Bert + Unanswerable performance suggests that incorporating information about answerability can help in this difficult domain. We examine this challenging phenomena of unanswerability further in Section . Results and Discussion ::: Error Analysis Disagreements are analyzed based on the OPP-115 categories of each question (Table.TABREF34). We compare our best performing BERT variant against the NA model and human performance. We observe significant room for improvement across all categories of questions but especially for first party, third party and data retention categories. We analyze the performance of our strongest BERT variant, to identify classes of errors and directions for future improvement (Table.8). We observe that a majority of answerability mistakes made by the BERT model are questions which are in fact answerable, but are identified as unanswerable by BERT. We observe that BERT makes 124 such mistakes on the test set. We collect expert judgments on relevance, subjectivity , silence and information about how likely the question is to be answered from the privacy policy from our experts. We find that most of these mistakes are relevant questions. However many of them were identified as subjective by the annotators, and at least one annotator marked 19 of these questions as having no answer within the privacy policy. However, only 6 of these questions were unexpected or do not usually have an answer in privacy policies. These findings suggest that a more nuanced understanding of answerability might help improve model performance in his challenging domain. Results and Discussion ::: What makes Questions Unanswerable? We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability: Incomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible. Relevance: Is this question in the scope of what could be answered by reading the privacy policy. Ill-formedness: Is this question ambiguous or vague. An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition. Silence: Other policies answer this type of question but this one does not. Atypicality: The question is of a nature such that it is unlikely for any policy policy to have an answer to the question. Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 
4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehendable questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., 'has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `'will the app consume much space?'). In the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving user's information seeking intent. We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to a criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions. Conclusion We present PrivacyQA, the first significant corpus of privacy policy questions and more than 3500 expert annotations of relevant answers. The goal of this work is to promote question-answering research in the specialized privacy domain, where it can have large real-world impact. Strong neural baselines on PrivacyQA achieve a performance of only 39.8 F1 on this corpus, indicating considerable room for future research. Further, we shed light on several important considerations that affect the answerability of questions. 
We hope this contribution leads to multidisciplinary efforts to precisely understand user intent and reconcile it with information in policy documents, from both the privacy and NLP communities. Acknowledgements This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study.
[ "Individuals with legal training", "Yes" ]
3,846
qasper_e
en
null
33c06a5e9eb93d7a970d8e42e3ab04467182eef7113c5c5f
Which approaches have been applied to solve word segmentation in Vietnamese?
Introduction Lexical analysis, syntactic analysis, semantic analysis, disclosure analysis and pragmatic analysis are five main steps in natural language processing BIBREF0 , BIBREF1 . While morphology is a basic task in lexical analysis of English, word segmentation is considered a basic task in lexical analysis of Vietnamese and other East Asian languages processing. This task is to determine borders between words in a sentence. In other words, it is segmenting a list of tokens into a list of words such that words are meaningful. Word segmentation is the primary step in prior to other natural language processing tasks i. e., term extraction and linguistic analysis (as shown in Figure 1). It identifies the basic meaningful units in input texts which will be processed in the next steps of several applications. For named entity recognization BIBREF2 , word segmentation chunks sentences in input documents into sequences of words before they are further classified in to named entity classes. For Vietnamese language, words and candidate terms can be extracted from Vietnamese copora (such as books, novels, news, and so on) by using a word segmentation tool. Conformed features and context of these words and terms are used to identify named entity tags, topic of documents, or function words. For linguistic analysis, several linguistic features from dictionaries can be used either to annotating POS tags or to identifying the answer sentences. Moreover, language models can be trained by using machine learning approaches and be used in tagging systems, like the named entity recognization system of Tran et al. BIBREF2 . Many studies forcus on word segmentation for Asian languages, such as: Chinese, Japanese, Burmese (Myanmar) and Thai BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . Approaches for word segmentation task are variety, from lexicon-based to machine learning-based methods. Recently, machine learning-based methods are used widely to solve this issue, such as: Support Vector Machine or Conditional Random Fields BIBREF7 , BIBREF8 . In general, Chinese is a language which has the most studies on the word segmentation issue. However, there is a lack of survey of word segmentation studies on Asian languages and Vietnamese as well. This paper aims reviewing state-of-the-art word segmentation approaches and systems applying for Vietnamese. This study will be a foundation for studies on Vietnamese word segmentation and other following Vietnamese tasks as well, such as part-of-speech tagger, chunker, or parser systems. There are several studies about the Vietnamese word segmentation task over the last decade. Dinh et al. started this task with Weighted Finite State Transducer (WFST) approach and Neural Network approach BIBREF9 . In addition, machine learning approaches are studied and widely applied to natural language processing and word segmentation as well. In fact, several studies used support vector machines (SVM) and conditional random fields (CRF) for the word segmentation task BIBREF7 , BIBREF8 . Based on annotated corpora and token-based features, studies used machine learning approaches to build word segmentation systems with accuracy about 94%-97%. According to our observation, we found that is lacks of complete review approaches, datasets and toolkits which we recently used in Vietnamese word segmentation. A all sided review of word segmentation will help next studies on Vietnamese natural language processing tasks have an up-to-date guideline and choose the most suitable solution for the task. 
The remaining part of the paper is organized as follows. Section II discusses building corpus in Vietnamese, containing linguistic issues and the building progress. Section III briefly mentions methods to model sentences and text in machine learning systems. Next, learning models and approaches for labeling and segmenting sequence data will be presented in Section IV. Section V mainly addresses two existing toolkits, vnTokenizer and JVnSegmenter, for Vietnamese word segmentation. Several experiments based on mentioned approaches and toolkits are described in Section VI. Finally, conclusions and future works are given in Section VII. Language Definition Vietnamese, like many languages in continental East Asia, is an isolating language and one branch of Mon-Khmer language group. The most basic linguistic unit in Vietnamese is morpheme, similar with syllable or token in English and “hình vị” (phoneme) or “tiếng” (syllable) in Vietnamese. According to the structured rule of its, Vietnamese can have about 20,000 different syllables (tokens). However, there are about 8,000 syllables used the Vietnamese dictionaries. There are three methods to identify morphemes in Vietnamese text BIBREF10 . Morpheme is the smallest meaningful unit of Vietnamese. Morpheme is the basic unit of Vietnamese. Morpheme is the smallest meaningful unit and is not used independently in the syntax factor. In computational linguistics, morpheme is the basic unit of languages as Leonard Bloomfield mentioned for English BIBREF11 . In our research for Vietnamese, we consider the morpheme as syllable, called “tiếng” in Vietnamese (as Nguyen’s definition BIBREF12 ). The next concept in linguistics is word which has fully grammar and meaning function in sentences. For Vietnamese, word is a single morpheme or a group of morphemes, which are fixed and have full meaning BIBREF12 . According to Nguyen, Vietnamese words are able classified into two types, (1) 1- syllable words with fully meaning and (2) n-syllables words whereas these group of tokens are fixed. Vietnamese syllable is not fully meaningful. However, it is also explained in the meaning and structure characteristics. For example, the token “kỳ” in “quốc kỳ” whereas “quốc” means national, “kỳ” means flag. Therefore, “quốc kỳ” means national flag. Consider dictionary used for evaluating the corpus, extracting features for models, and evaluating the systems, there are many Vietnamese dictionaries, however we recommend the Vietnamese dictionary of Hoang Phe, so called Hoang Phe Dictionary. This dictionary has been built by a group of linguistical scientists at the Linguistic Institute, Vietnam. It was firstly published in 1988, reprinted and extended in 2000, 2005 and 2010. The dictionary currently has 45,757 word items with 15,901 Sino-Vietnamese word items (accounting for 34.75%) BIBREF13 . Name Entity Issue In Vietnamese, not all of meaningful proper names are in the dictionary. Identifying proper names in input text are also important issue in word segmentation. This issue is sometimes included into unknown word issue to be solved. In addition, named entity recognition has to classify it into several types such as person, location, organization, time, money, number, and so on. Proper name identification can be solved by characteristics. For example, systems use beginning characters of proper names which are uppercase characters. Moreover, a list of proper names is also used to identify names in the text. 
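Returning to the B-I-O encoding illustrated by the Megabit example earlier in this section, the sketch below converts a word-segmented sentence into syllable-level tags. It assumes that multi-syllable words are joined with underscores and that punctuation-only tokens receive the O tag, exactly as in that example; the function name is ours.

```python
import string

def encode_bio(segmented_sentence):
    """Turn a word-segmented sentence (multi-syllable words joined by '_')
    into (syllable, tag) pairs using the B-I-O scheme described above."""
    pairs = []
    for word in segmented_sentence.split():
        if all(ch in string.punctuation for ch in word):
            pairs.append((word, "O"))
            continue
        syllables = word.split("_")
        pairs.append((syllables[0], "B"))
        pairs.extend((syl, "I") for syl in syllables[1:])
    return pairs

sentence = "Megabit trên giây là đơn_vị đo tốc_độ truyền_dẫn dữ_liệu ."
print(encode_bio(sentence))
# [('Megabit', 'B'), ('trên', 'B'), ('giây', 'B'), ('là', 'B'),
#  ('đơn', 'B'), ('vị', 'I'), ('đo', 'B'), ('tốc', 'B'), ('độ', 'I'),
#  ('truyền', 'B'), ('dẫn', 'I'), ('dữ', 'B'), ('liệu', 'I'), ('.', 'O')]
```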
In particular, a list of 2000 personal names extracted from VietnamGiaPha, and a list of 707 names of locations in Vietnam extracted from vi.wikipedia.org are used in the study of Nguyen et al. for Vietnamese word segmentation BIBREF7 . Building Corpus In general, building corpus is carried out through four stages: (1) choose target of corpus and source of raw data; (2) building a guideline based on linguistics knowledge for annotation; (3) annotating or tagging corpus based on rule set in the guideline; and (4) reviewing corpus to check the consistency issue. Encoding word segmentation corpus using B-I-O tagset can be applied, where B, I, and O denoted begin of word, inside of word, and others, respectively. For example, the sentence “Megabit trên giây là đơn vị đo tốc đọ truyền dẫn dữ liệu ." (”Megabit per second is a unit to measure the network traffic.” in English) with the word boundary result “Megabit trên giây là đơn_vị đo tốc_độ truyền_dẫn dữ_liệu ." is encoded as “Megabit/B trên/B giây/B là/B đơn/B vị/I đo/B tốc/B độ/I truyền/B dẫn/I dữ/B liệu/I ./O" . Annotation guidelines can be applied to ensure that annotated corpus has less errors because the manual annotation is applied. Even though there are guidelines for annotating, the available output corpora are still inconsistent. For example, for the Vietnamese Treebank corpus of the VLSP project, Nguyen et al. listed out several Vietnamese word segmentation inconsistencies in the corpus based on POS information and n-gram sequences BIBREF14 . Currently, there are at least three available word segmentation corpus used in Vietnamese word segmentation studies and systems. Firstly, Dinh et al. built the CADASA corpus from CADASA’s books BIBREF15 . Secondly, Nguyen et al. built vnQTAG corpus from general news articles BIBREF7 . More recently, Ngo et al. introduced the EVBCorpus corpus, which is collected from four sources, news articles, books, law documents, and novels. As a part of EVBCorpus, EVBNews, was annotated common tags in NLP, such as word segmentation, chunker, and named entity BIBREF16 . All of these corpora are collected from news articles or book stories, and they are manually annotated the word boundary tags (as shown in Table I). TEXT MODELLING AND FEATURES To understand natural language and analyze documents and text, computers need to represent natural languages as linguistics models. These models can be generated by using machine learning methods (as show in Figure 2). There are two common modeling methods for basic NLP tasks, including n-gram model and bag-of-words model. The n-gram model is widely used in natural language processing while the bag-of-words model is a simplified representation used in natural language processing and information retrieval BIBREF17 , BIBREF18 . According to the bag-of-words model, the representative vector of sentences in the document does not preserve the order of the words in the original sentences. It represents the word using term frequency collected from the document rather than the order of words or the structure of sentences in the document. The bag-of-words model is commonly used in methods of document classification, where the frequency of occurrence of each word is used as an attribute feature for training a classifier. In contrast, an n-gram is a contiguous sequence of n items from a given sequence of text. An n-gram model is a type of probabilistic language model for predicting the next item in a given sequence in form of a Markov model. 
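To ground the contrast between the bag-of-words and n-gram representations, the sketch below builds the simplest possible n-gram (Markov) model: a count-based bigram estimate of P(w_i | w_{i-1}). It illustrates the general idea only and is not taken from any specific system discussed later.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bigram_model(corpus):
    """Maximum-likelihood bigram probabilities P(w_i | w_{i-1}) estimated
    from a list of tokenised sentences."""
    unigram_counts = Counter()
    bigram_counts = Counter()
    for sentence in corpus:
        unigram_counts.update(sentence)
        bigram_counts.update(ngrams(sentence, 2))
    return {
        (prev, cur): count / unigram_counts[prev]
        for (prev, cur), count in bigram_counts.items()
    }

corpus = [["học", "sinh", "học", "sinh", "học"]]
probs = bigram_model(corpus)
print(probs[("học", "sinh")])   # 2/3
```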
To address word segmentation issue, the n-gram model is usually used for approaches because it considers the order of tokens in the original sentences. The sequence is also kept the original order as input and output sentences. BUILDING MODEL METHODS There are several studies for Vietnamese Word Segmentation during last decade. For instance, Dinh et al. started the word segmentation task for Vietnamese with Neural Network and Weighted Finite State Transducer (WFST) BIBREF9 . Nguyen et al. continued with machine learning approaches, Conditional Random Fields and Support Vector Machine BIBREF7 . Most of statistical approaches are based on the architecture as shown in Figure 2. According to the architecture, recent studies and systems focus on either improving or modifying difference learning models to get the highest accuracy. Features used in word segmentation systems are syllable, dictionary, and entity name. The detail of all widely used techniques applied are collected and described in following subsections. Maximum Matching Maximum matching (MM) is one of the most popular fundamental and structural segmentation algorithms for word segmentation BIBREF19 . This method is also considered as the Longest Matching (LM) in several research BIBREF9 , BIBREF3 . It is used for identifying word boundary in languages like Chinese, Vietnamese and Thai. This method is a greedy algorithm, which simply chooses longest words based on the dictionary. Segmentation may start from either end of the line without any difference in segmentation results. If the dictionary is sufficient BIBREF19 , the expected segmentation accuracy is over 90%, so it is a major advantage of maximum matching . However, it does not solve the problem of ambiguous words and unknown words that do not exist in the dictionary. There are two types of the maximum matching approach: forward MM (FMM) and backward MM (BMM). FMM starts from the beginning token of the sentence while BMM starts from the end. If the sentence has word boundary ambiguities, the output of FMM and BMM will be different. When applying FMM and BMM, there are two types of common errors due to two ambiguities: overlapping ambiguities and combination ambiguity. Overlapping ambiguities occur when the text AB has both word A, B and AB, which are in the dictionary while the text ABC has word AB and BC, which are in the dictionary. For example, "cụ già đi nhanh quá" (there two meanings: ”the old man goes very fast” or ”the old man died suddenly”) is a case of the overlapping ambiguity while "tốc độ truyền thông tin" is a case of the combination ambiguity. As shown in Figure 2, the method simplification ambiguities, maximum matching is the first step to get features for the modelling stage in machine learning systems, like Conditional Random Fields or Support Vector Machines. Hidden Markov Model (HMM) In Markov chain model is represented as a chain of tokens which are observations, and word taggers are represented as predicted labels. Many researchers applied Hidden Markov model to solve Vietnamese word segmentation such as in BIBREF8 , BIBREF20 and so on. N-gram language modeling applied to estimate probabilities for each word segmentation solution BIBREF21 . The result of this method depends on copora and is based maximal matching strategy. So, they do not solve missing word issue. 
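Before the statistical models are formalised below, here is a compact sketch of forward maximum matching over syllables with a toy dictionary. Run on the combination-ambiguity example above it greedily picks "truyền thông", which is exactly the kind of error the text describes; the maximum word length and the single-syllable fallback are our own choices.

```python
def forward_maximum_matching(syllables, dictionary, max_len=4):
    """Greedy longest-match segmentation (FMM) over a syllable sequence.

    At each position the longest dictionary entry starting there is taken;
    unmatched syllables fall back to single-syllable words. BMM is the same
    procedure run from the end of the sentence towards the beginning.
    """
    words, i = [], 0
    while i < len(syllables):
        match = None
        for length in range(min(max_len, len(syllables) - i), 0, -1):
            candidate = " ".join(syllables[i:i + length])
            if length == 1 or candidate in dictionary:
                match = syllables[i:i + length]
                break
        words.append("_".join(match))
        i += len(match)
    return words

dictionary = {"tốc độ", "truyền thông", "thông tin"}
print(forward_maximum_matching("tốc độ truyền thông tin".split(), dictionary))
# ['tốc_độ', 'truyền_thông', 'tin']  -- the combination ambiguity above
```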
Let INLINEFORM0 is a product of probabilities of words created from sentence s (1) with length INLINEFORM1 : DISPLAYFORM0 Each conditional probability of word is based on the last n-1 words (n-gram) in the sentence s. It is estimated by Markov chain model for word w from position i-n+1 to i-1 with probability (2) DISPLAYFORM0 We have equation (3) DISPLAYFORM0 Maximum Entropy (ME) Maximum Entropy theory is applied to solve Vietnamese word segmentation BIBREF15 , BIBREF22 , BIBREF23 . Some researchers do not want the limit in Markov chain model. So, they use the context around of the word needed to be segmented. Let h is a context, w is a list of words and t is a list of taggers, Le BIBREF15 , BIBREF22 used DISPLAYFORM0 P(s) is also a product of probabilities of words created from sentence INLINEFORM0 (1). Each conditional probability of word is based on context h of the last n word in the sentence s. Conditional Random Fields To tokenize a Vietnamese word, in HMM or ME, authors only rely on features around a word segment position. Some other features are also affected by adding more special attributes, such as, in case ’?’ question mark at end of sentence, Part of Speech (POS), and so on. Conditional Random Fields is one of methods that uses additional features to improve the selection strategy BIBREF7 . There are several CRF libraries, such as CRF++, CRFsuite. These machine learning toolkits can be used to solve the task by providing an annotated corpus with extracted features. The toolkit will be used to train a model based on the corpus and extract a tagging model. The tagging model will then be used to tag on input text without annotated corpus. In the training and tagging stages, extracting features from the corpus and the input text is necessary for both stages. Support Vector Machines Support Vector Machines (SVM) is a supervised machine learning method which considers dataset as a set of vectors and tries to classify them into specific classes. Basically, SVM is a binary classifier. however, most classification tasks are multi-class classifiers. When applying SVMs, the method has been extended to classify three or more classes. Particular NLP tasks, like word segmentation and Part-of-speech task, each token/word in documents will be used as a feature vector. For the word segmentation task, each token and its features are considered as a vector for the whole document, and the SVM model will classify this vector into one of the three tags (B-IO). This technique is applied for Vietnamese word segmentation in several studies BIBREF7 , BIBREF24 . Nguyen et al. applied on a segmented corpus of 8,000 sentences and got the result at 94.05% while Ngo et al. used it with 45,531 segmented sentences and get the result at 97.2%. It is worth to mention that general SVM libraries (such as LIBSVM, LIBLINEAR, SVMlight, Node-SVM, and TreeSVM ), YamCha is an opened source SVM library that serves several NLP tasks: POS tagging, Named Entity Recognition, base NP chunking, Text Chunking, Text Classification and event Word Segmentation. TOOLKITS vnTokenizer and JVnSegmenter are two famous segmentation toolkits for Vietnamese word segmentation. Both two word segmentation toolkits are implemented the word segmentation data process in Figure 2. This section gives more details of these Vietnamese word toolkits. Programming Languages In general, Java and C++ are the most common language in developing toolkits and systems for natural language processing tasks. 
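Before turning to the toolkits, the sketch below illustrates the kind of token-level features the CRF and SVM approaches above consume: surrounding syllables, capitalisation (a crude proxy for proper names) and dictionary matches. The exact feature templates of the cited systems are not reproduced here; the names and the toy dictionary are illustrative assumptions.

```python
def token_features(syllables, i, dictionary):
    """Context features for the syllable at position i, one dictionary of
    features per token, to be paired with its B-I-O label for training."""
    syl = syllables[i]
    feats = {
        "syl": syl.lower(),
        "is_capitalized": syl[:1].isupper(),
        "prev": syllables[i - 1].lower() if i > 0 else "<s>",
        "next": syllables[i + 1].lower() if i + 1 < len(syllables) else "</s>",
    }
    if i > 0:
        feats["prev_bigram_in_dict"] = f"{syllables[i - 1]} {syl}".lower() in dictionary
    if i + 1 < len(syllables):
        feats["next_bigram_in_dict"] = f"{syl} {syllables[i + 1]}".lower() in dictionary
    return feats

# One feature dict per syllable; these would be fed to a CRF or SVM toolkit
# together with the B-I-O labels from the annotated corpus.
sent = "tốc độ truyền thông tin".split()
X = [token_features(sent, i, {"tốc độ", "thông tin"}) for i in range(len(sent))]
print(X[1])
```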
For example, GATE, OpenNLP, Stanford CoreNLP and LingPipe platforms are developed by JAVA while foundation tasks and machine learning toolkits are developed by C++. CRF++, SVMLight and YAMCHA . Recently, Python becomes popular among the NLP community. In fact, many toolkits and platforms have been developed by this language, such as NLTK, PyNLPl library for Natural Language Processing. JVnSegmenter JVnSegmenter is a Java-based Vietnamese Word Segmentation Tool developed by Nguyen and Phan. The segmentation model in this tool was trained on about 8,000 tagged Vietnamese text sentences based on CRF model and the model extracted over 151,000 words from the training corpus. In addition, this is used in building the EnglishVietnamese Translation System BIBREF25 , BIBREF26 , BIBREF27 . Vietnamese text classification BIBREF28 and building Vietnamese corpus BIBREF29 , BIBREF30 . vnTokenizer vnTokenizer is implemented in Java and bundled as Eclipse plug-in, and it has already been integrated into vnToolkit, an Eclipse Rich Client application, which is intended to be a general framework integrating tools for processing of Vietnamese text. vnTokenizer plug-in, vnToolkit and related resources, including the lexicon and test corpus are freely available for download. According to our observation, many research cited vnTokenizer to use word segmentation results for applications as building a large Vietnamese corpus BIBREF31 , building an English-Vietnamese Bilingual Corpus for Machine Translation BIBREF32 , Vietnamese text classification BIBREF33 , BIBREF34 , etc. EVALUATION AND RESULTS This research gathers the results of Vietnamese word segmentation of several methods into one table as show in Table II. It is noted that they are not evaluated on a same corpus. The purpose of the result illustration is to provide an overview of the results of current Vietnamese word segmentation systems based on their individual features. All studies mentioned in the table have accuracy around 94-97% based on their provided corpus. This study also evaluates the Vietnamese word segmentation based on existing toolkits using the same annotated Vietnamese word segmentation corpus. There are two available toolkits to evaluate and to segment. To be neutral to both toolkits, we use the EVBNews Vietnamese corpus, a part of EVBCorpus, to evaluate Vietnamese word segmentation. The EVBNews corpus contains over 45,000 segmented Vietnamese sentences extracted from 1,000 general news articles (as shown in Table III) BIBREF16 . We used the same training set which has 1000 files and 45,531 sentences. vnTokenizer outputs 831,455 Vietnamese words and 1,206,475 tokens. JVnSegmenter outputs 840,387 words and 1,201,683. We correct tags (BIO), and compare to previous outputs, we have rate from vnTokenizer is 95.6% and from JVnsegmenter is 93.4%. The result of both vnTokenizer and JVnSegmenter testing on the EVBNews Vietnamese Corpus are provided in Table IV. CONCLUSIONS AND FUTURE WORKS This study reviewed state-of-the-art approaches and systems of Vietnamese word segmentation. The review pointed out common features and methods used in Vietnamese word segmentation studies. This study also had an evaluation of the existing Vietnamese word segmentation toolkits based on a same corpus to show advantages and disadvantages as to shed some lights on system enhancement. There are several challenges on supervised learning approaches in future work. 
The first challenge is to acquire very large Vietnamese corpus and to use them in building a classifier, which could further improve accuracy. In addition, applying linguistics knowledge on word context to extract useful features also enhances prediction performance. The second challenge is design and development of big data warehouse and analytic framework for Vietnamese documents, which corresponds to the rapid and continuous growth of gigantic volume of articles and/or documents from Web 2.0 applications, such as, Facebook, Twitter, and so on. It should be addressed that there are many kinds of Vietnamese documents, for example, Han - Nom documents and old and modern Vietnamese documents that are essential and still needs further analysis. According to our study, there is no a powerful Vietnamese language processing used for processing Vietnamese big data as well as understanding such language. The final challenge relates to building a system, which is able to incrementally learn new corpora and interactively process feedback. In particular, it is feasible to build an advance NLP system for Vietnamese based on Hadoop platform to improve system performance and to address existing limitations.
[ "Maximum Entropy, Weighted Finite State Transducer (WFST), support vector machines (SVM), conditional random fields (CRF)", "Maximum matching, Hidden Markov model , Maximum Entropy, Conditional Random Fields , Support Vector Machines" ]
3,471
qasper_e
en
null
69e126e9192d0ccc3669d9214ad94883c63d5cc1609e85b4
What NER models were evaluated?
Introduction Named entity recognition is an important task of natural language processing, featuring in many popular text processing toolkits. This area of natural language processing has been actively studied in the latest decades and the advent of deep learning reinvigorated the research on more effective and accurate models. However, most of existing approaches require large annotated corpora. To the best of our knowledge, no such work has been done for the Armenian language, and in this work we address several problems, including the creation of a corpus for training machine learning models, the development of gold-standard test corpus and evaluation of the effectiveness of established approaches for the Armenian language. Considering the cost of creating manually annotated named entity corpus, we focused on alternative approaches. Lack of named entity corpora is a common problem for many languages, thus bringing the attention of many researchers around the globe. Projection based transfer schemes have been shown to be very effective (e.g. BIBREF0 , BIBREF1 , BIBREF2 ), using resource-rich language's corpora to generate annotated data for the low-resource language. In this approach, the annotations of high-resource language are projected over the corresponding tokens of the parallel low-resource language's texts. This strategy can be applied for language pairs that have parallel corpora. However, that approach would not work for Armenian as we did not have access to sufficiently large parallel corpus with a resource-rich language. Another popular approach is using Wikipedia. Klesti Hoxha and Artur Baxhaku employ gazetteers extracted from Wikipedia to generate an annotated corpus for Albanian BIBREF3 , and Weber and Pötzl propose a rule-based system for German that leverages the information from Wikipedia BIBREF4 . However, the latter relies on external tools such as part-of-speech taggers, making it nonviable for the Armenian language. Nothman et al. generated a silver-standard corpus for 9 languages by extracting Wikipedia article texts with outgoing links and turning those links into named entity annotations based on the target article's type BIBREF5 . Sysoev and Andrianov used a similar approach for the Russian language BIBREF6 BIBREF7 . Based on its success for a wide range of languages, our choice fell on this model to tackle automated data generation and annotation for the Armenian language. Aside from the lack of training data, we also address the absence of a benchmark dataset of Armenian texts for named entity recognition. We propose a gold-standard corpus with manual annotation of CoNLL named entity categories: person, location, and organization BIBREF8 BIBREF9 , hoping it will be used to evaluate future named entity recognition models. Furthermore, popular entity recognition models were applied to the mentioned data in order to obtain baseline results for future research in the area. Along with the datasets, we developed GloVe BIBREF10 word embeddings to train and evaluate the deep learning models in our experiments. The contributions of this work are (i) the silver-standard training corpus, (ii) the gold-standard test corpus, (iii) GloVe word embeddings, (iv) baseline results for 3 different models on the proposed benchmark data set. All aforementioned resources are available on GitHub. Automated training corpus generation We used Sysoev and Andrianov's modification of the Nothman et al. approach to automatically generate data for training a named entity recognizer. 
This approach uses links between Wikipedia articles to generate sequences of named-entity annotated tokens. Dataset extraction The main steps of the dataset extraction system are described in Figure FIGREF3 . First, each Wikipedia article is assigned a named entity class (e.g. the article Քիմ Քաշքաշյան (Kim Kashkashian) is classified as PER (person), Ազգերի լիգա(League of Nations) as ORG (organization), Սիրիա(Syria) as LOC etc). One of the core differences between our approach and Nothman's system is that we do not rely on manual classification of articles and do not use inter-language links to project article classifications across languages. Instead, our classification algorithm uses only an article's Wikidata entry's first instance of label's parent subclass of labels, which are, incidentally, language independent and thus can be used for any language. Then, outgoing links in articles are assigned the article's type they are leading to. Sentences are included in the training corpus only if they contain at least one named entity and all contained capitalized words have an outgoing link to an article of known type. Since in Wikipedia articles only the first mention of each entity is linked, this approach becomes very restrictive and in order to include more sentences, additional links are inferred. This is accomplished by compiling a list of common aliases for articles corresponding to named entities, and then finding text fragments matching those aliases to assign a named entity label. An article's aliases include its title, titles of disambiguation pages with the article, and texts of links leading to the article (e.g. Լենինգրադ (Leningrad), Պետրոգրադ (Petrograd), Պետերբուրգ (Peterburg) are aliases for Սանկտ Պետերբուրգ (Saint Petersburg)). The list of aliases is compiled for all PER, ORG, LOC articles. After that, link boundaries are adjusted by removing the labels for expressions in parentheses, the text after a comma, and in some cases breaking into separate named entities if the linked text contains a comma. For example, [LOC Աբովյան (քաղաք)] (Abovyan (town)) is reworked into [LOC Աբովյան] (քաղաք). Using Wikidata to classify Wikipedia Instead of manually classifying Wikipedia articles as it was done in Nothman et al., we developed a rule-based classifier that used an article's Wikidata instance of and subclass of attributes to find the corresponding named entity type. The classification could be done using solely instance of labels, but these labels are unnecessarily specific for the task and building a mapping on it would require a more time-consuming and meticulous work. Therefore, we classified articles based on their first instance of attribute's subclass of values. Table TABREF4 displays the mapping between these values and named entity types. Using higher-level subclass of values was not an option as their values often were too general, making it impossible to derive the correct named entity category. Generated data Using the algorithm described above, we generated 7455 annotated sentences with 163247 tokens based on 20 February 2018 dump of Armenian Wikipedia. The generated data is still significantly smaller than the manually annotated corpora from CoNLL 2002 and 2003. For comparison, the train set of English CoNLL 2003 corpus contains 203621 tokens and the German one 206931, while the Spanish and Dutch corpora from CoNLL 2002 respectively 273037 and 218737 lines. 
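As a rough illustration of the rule-based classification step described above, the sketch below assigns a named entity type to an article from a simplified view of its Wikidata entry. The mapping entries stand in for Table TABREF4, which is not reproduced in the text, so both the mapping values and the assumed data layout are illustrative guesses rather than the actual resource.

```python
# Stand-in for Table TABREF4; the entries below are illustrative only.
SUBCLASS_TO_NE = {
    "human": "PER",
    "organization": "ORG",
    "geographic entity": "LOC",
    "human settlement": "LOC",
}

def classify_article(wikidata_item, mapping=SUBCLASS_TO_NE):
    """Assign PER/ORG/LOC to a Wikipedia article from its Wikidata entry.

    Following the description above, only the first `instance of` value is
    used, and the decision is made on that value's `subclass of` labels,
    which are language-independent.
    """
    instances = wikidata_item.get("instance_of", [])
    if not instances:
        return None
    for parent in instances[0].get("subclass_of", []):
        if parent in mapping:
            return mapping[parent]
    return None

kim = {"instance_of": [{"label": "human", "subclass_of": ["person", "human"]}]}
print(classify_article(kim))   # PER
```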
The smaller size of our generated data can be attributed to the strict selection of candidate sentences as well as simply to the relatively small size of Armenian Wikipedia. The accuracy of annotation in the generated corpus heavily relies on the quality of links in Wikipedia articles. During generation, we assumed that first mentions of all named entities have an outgoing link to their article, however this was not always the case in actual source data and as a result the train set contained sentences where not all named entities are labeled. Annotation inaccuracies also stemmed from wrongly assigned link boundaries (for example, in Wikipedia article Արթուր Ուելսլի Վելինգթոն (Arthur Wellesley) there is a link to the Napoleon article with the text "է Նապոլեոնը" ("Napoleon is"), when it should be "Նապոլեոնը" ("Napoleon")). Another kind of common annotation errors occurred when a named entity appeared inside a link not targeting a LOC, ORG, or PER article (e.g. "ԱՄՆ նախագահական ընտրություններում" ("USA presidential elections") is linked to the article ԱՄՆ նախագահական ընտրություններ 2016 (United States presidential election, 2016) and as a result [LOC ԱՄՆ] (USA) is lost). Test dataset In order to evaluate the models trained on generated data, we manually annotated a named entities dataset comprising 53453 tokens and 2566 sentences selected from over 250 news texts from ilur.am. This dataset is comparable in size with the test sets of other languages (Table TABREF10 ). Included sentences are from political, sports, local and world news (Figures FIGREF8 , FIGREF9 ), covering the period between August 2012 and July 2018. The dataset provides annotations for 3 popular named entity classes: people (PER), organizations (ORG), and locations (LOC), and is released in CoNLL03 format with IOB tagging scheme. Tokens and sentences were segmented according to the UD standards for the Armenian language BIBREF11 . During annotation, we generally relied on categories and guidelines assembled by BBN Technologies for TREC 2002 question answering track. Only named entities corresponding to BBN's person name category were tagged as PER. Those include proper names of people, including fictional people, first and last names, family names, unique nicknames. Similarly, organization name categories, including company names, government agencies, educational and academic institutions, sports clubs, musical ensembles and other groups, hospitals, museums, newspaper names, were marked as ORG. However, unlike BBN, we did not mark adjectival forms of organization names as named entities. BBN's gpe name, facility name, location name categories were combined and annotated as LOC. We ignored entities of other categories (e.g. works of art, law, or events), including those cases where an ORG, LOC or PER entity was inside an entity of extraneous type (e.g. ՀՀ (RA) in ՀՀ Քրեական Օրենսգիրք (RA Criminal Code) was not annotated as LOC). Quotation marks around a named entity were not annotated unless those quotations were a part of that entity's full official name (e.g. «Նաիրիտ գործարան» ՓԲԸ ("Nairit Plant" CJSC)). Depending on context, metonyms such as Կրեմլ (Kremlin), Բաղրամյան 26 (Baghramyan 26) were annotated as ORG when referring to respective government agencies. Likewise, country or city names were also tagged as ORG when referring to sports teams representing them. 
Word embeddings Apart from the datasets, we also developed word embeddings for the Armenian language, which we used in our experiments to train and evaluate named entity recognition algorithms. Considering their ability to capture semantic regularities, we used GloVe to train word embeddings. We assembled a dataset of Armenian texts containing 79 million tokens from the articles of Armenian Wikipedia, The Armenian Soviet Encyclopedia, a subcorpus of Eastern Armenian National Corpus BIBREF12 , over a dozen Armenian news websites and blogs. Included texts covered topics such as economics, politics, weather forecast, IT, law, society and politics, coming from non-fiction as well as fiction genres. Similar to the original embeddings published for the English language, we release 50-, 100-, 200- and 300-dimensional word vectors for Armenian with a vocabulary size of 400000. Before training, all the words in the dataset were lowercased. For the final models we used the following training hyperparameters: 15 window size and 20 training epochs. Experiments In this section we describe a number of experiments targeted to compare the performance of popular named entity recognition algorithms on our data. We trained and evaluated Stanford NER, spaCy 2.0, and a recurrent model similar to BIBREF13 , BIBREF14 that uses bidirectional LSTM cells for character-based feature extraction and CRF, described in Guillaume Genthial's Sequence Tagging with Tensorflow blog post BIBREF15 . Models Stanford NER is conditional random fields (CRF) classifier based on lexical and contextual features such as the current word, character-level n-grams of up to length 6 at its beginning and the end, previous and next words, word shape and sequence features BIBREF16 . spaCy 2.0 uses a CNN-based transition system for named entity recognition. For each token, a Bloom embedding is calculated based on its lowercase form, prefix, suffix and shape, then using residual CNNs, a contextual representation of that token is extracted that potentially draws information from up to 4 tokens from each side BIBREF17 . Each update of the transition system's configuration is a classification task that uses the contextual representation of the top token on the stack, preceding and succeeding tokens, first two tokens of the buffer, and their leftmost, second leftmost, rightmost, second rightmost children. The valid transition with the highest score is applied to the system. This approach reportedly performs within 1% of the current state-of-the-art for English . In our experiments, we tried out 50-, 100-, 200- and 300-dimensional pre-trained GloVe embeddings. Due to time constraints, we did not tune the rest of hyperparameters and used their default values. The main model that we focused on was the recurrent model with a CRF top layer, and the above-mentioned methods served mostly as baselines. The distinctive feature of this approach is the way contextual word embeddings are formed. For each token separately, to capture its word shape features, character-based representation is extracted using a bidirectional LSTM BIBREF18 . This representation gets concatenated with a distributional word vector such as GloVe, forming an intermediate word embedding. Using another bidirectional LSTM cell on these intermediate word embeddings, the contextual representation of tokens is obtained (Figure FIGREF17 ). Finally, a CRF layer labels the sequence of these contextual representations. 
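To make the Char-biLSTM+biLSTM architecture just described concrete, here is a minimal PyTorch sketch of the encoder. The hidden sizes are placeholders, the GloVe initialisation is only indicated in a comment, and the CRF layer that would sit on top is replaced by a plain linear projection to per-tag emission scores, so this is a sketch of the architecture under stated assumptions rather than the implementation used in the experiments.

```python
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    """Character biLSTM + word biLSTM encoder producing per-token tag scores;
    a CRF layer (e.g. from an external package) would decode these scores."""

    def __init__(self, n_chars, n_words, n_tags,
                 char_dim=25, char_hidden=50, word_dim=50, word_hidden=150):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 bidirectional=True, batch_first=True)
        self.word_emb = nn.Embedding(n_words, word_dim)  # would be initialised from GloVe
        self.word_lstm = nn.LSTM(word_dim + 2 * char_hidden, word_hidden,
                                 bidirectional=True, batch_first=True)
        self.to_tags = nn.Linear(2 * word_hidden, n_tags)

    def forward(self, word_ids, char_ids):
        # char_ids: (n_tokens, max_chars) for one sentence
        _, (h_n, _) = self.char_lstm(self.char_emb(char_ids))
        char_repr = torch.cat([h_n[0], h_n[1]], dim=-1)    # (n_tokens, 2*char_hidden)
        words = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        context, _ = self.word_lstm(words.unsqueeze(0))    # (1, n_tokens, 2*word_hidden)
        return self.to_tags(context.squeeze(0))            # emission scores per tag

scores = CharWordEncoder(n_chars=80, n_words=400000, n_tags=7)(
    torch.tensor([3, 17, 42]), torch.randint(0, 80, (3, 12)))
print(scores.shape)   # torch.Size([3, 7])
```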
In our experiments, we used Guillaume Genthial's implementation of the algorithm. We set the size of character-based biLSTM to 100 and the size of second biLSTM network to 300. Evaluation Experiments were carried out using IOB tagging scheme, with a total of 7 class tags: O, B-PER, I-PER, B-LOC, I-LOC, B-ORG, I-ORG. We randomly selected 80% of generated annotated sentences for training and used the other 20% as a development set. The models with the best F1 score on the development set were tested on the manually annotated gold dataset. Discussion Table TABREF19 shows the average scores of evaluated models. The highest F1 score was achieved by the recurrent model using a batch size of 8 and Adam optimizer with an initial learning rate of 0.001. Updating word embeddings during training also noticeably improved the performance. GloVe word vector models of four different sizes (50, 100, 200, and 300) were tested, with vectors of size 50 producing the best results (Table TABREF20 ). For spaCy 2.0 named entity recognizer, the same word embedding models were tested. However, in this case the performance of 200-dimensional embeddings was highest (Table TABREF21 ). Unsurprisingly, both deep learning models outperformed the feature-based Stanford recognizer in recall, the latter however demonstrated noticeably higher precision. It is clear that the development set of automatically generated examples was not an ideal indicator of models' performance on gold-standard test set. Higher development set scores often led to lower test scores as seen in the evaluation results for spaCy 2.0 and Char-biLSTM+biLSTM+CRF (Tables TABREF21 and TABREF20 ). Analysis of errors on the development set revealed that many were caused by the incompleteness of annotations, when named entity recognizers correctly predicted entities that were absent from annotations (e.g. [ԽՍՀՄ-ի LOC] (USSR's), [Դինամոն ORG] (the_Dinamo), [Պիրենեյան թերակղզու LOC] (Iberian Peninsula's) etc). Similarly, the recognizers often correctly ignored non-entities that are incorrectly labeled in data (e.g. [օսմանների PER], [կոնսերվատորիան ORG] etc). Generally, tested models demonstrated relatively high precision of recognizing tokens that started named entities, but failed to do so with descriptor words for organizations and, to a certain degree, locations. The confusion matrix for one of the trained recurrent models illustrates that difference (Table TABREF22 ). This can be partly attributed to the quality of generated data: descriptor words are sometimes superfluously labeled (e.g. [Հավայան կղզիների տեղաբնիկները LOC] (the indigenous people of Hawaii)), which is likely caused by the inconsistent style of linking in Armenian Wikipedia (in the article ԱՄՆ մշակույթ (Culture of the United States), its linked text fragment "Հավայան կղզիների տեղաբնիկները" ("the indigenous people of Hawaii") leads to the article Հավայան կղզիներ(Hawaii)). Conclusion We release two named-entity annotated datasets for the Armenian language: a silver-standard corpus for training NER models, and a gold-standard corpus for testing. It is worth to underline the importance of the latter corpus, as we aim it to serve as a benchmark for future named entity recognition systems designed for the Armenian language. Along with the corpora, we publish GloVe word vector models trained on a collection of Armenian texts. 
Additionally, to establish the applicability of Wikipedia-based approaches for the Armenian language, we provide evaluation results for 3 different named entity recognition systems trained and tested on our datasets. The results reinforce the ability of deep learning approaches in achieving relatively high recall values for this specific task, as well as the power of using character-extracted embeddings alongside conventional word embeddings. There are several avenues of future work. Since Nothman et al. 2013, more efficient methods of exploiting Wikipedia have been proposed, namely WiNER BIBREF19 , which could help increase both the quantity and quality of the training corpus. Another potential area of work is the further enrichment of the benchmark test set with additional annotation of other classes such as MISC or more fine-grained types (e.g. CITY, COUNTRY, REGION etc instead of LOC).
[ "Stanford NER, spaCy 2.0 , recurrent model with a CRF top layer", "Stanford NER, spaCy 2.0, recurrent model with a CRF top layer" ]
2,759
qasper_e
en
null
a62a087ff34f25cc4f76205a1431eb4c2254d4ad01c630d4
What datasets are used to evaluate this paper?
Introduction Knowledge graphs have been proved to benefit many artificial intelligence applications, such as relation extraction, question answering and so on. A knowledge graph consists of multi-relational data, having entities as nodes and relations as edges. An instance of fact is represented as a triplet (Head Entity, Relation, Tail Entity), where the Relation indicates a relationship between these two entities. In the past decades, great progress has been made in building large scale knowledge graphs, such as WordNet BIBREF0 , Freebase BIBREF1 . However, most of them have been built either collaboratively or semi-automatically and as a result, they often suffer from incompleteness and sparseness. The knowledge graph completion is to predict relations between entities based on existing triplets in a knowledge graph. Recently, a new powerful paradigm has been proposed to encode every element (entity or relation) of a knowledge graph into a low-dimensional vector space BIBREF2 , BIBREF3 . The representations of entities and relations are obtained by minimizing a global loss function involving all entities and relations. Therefore, we can do reasoning over knowledge graphs through algebraic computations. Although existing methods have good capability to learn knowledge graph embeddings, it remains challenging for entities with few or no facts BIBREF4 . To solve the issue of KB sparsity, many methods have been proposed to learn knowledge graph embeddings by utilizing related text information BIBREF5 , BIBREF6 , BIBREF7 . These methods learn joint embedding of entities, relations, and words (or phrases, sentences) into the same vector space. However, there are still three problems to be solved. (1) The combination methods of the structural and textual representations are not well studied in these methods, in which two kinds of representations are merely aligned on word level or separate loss function. (2) The text description may represent an entity from various aspects, and various relations only focus on fractional aspects of the description. A good encoder should select the information from text in accordance with certain contexts of relations. Figure 1 illustrates the fact that not all information provided in its description are useful to predict the linked entities given a specific relation. (3) Intuitively, entities with many facts depend more on well-trained structured representation while those with few or no facts might be largely determined by text descriptions. A good representation should learn the most valuable information by balancing both sides. In this paper, we propose a new deep architecture to learn the knowledge representation by utilizing the existing text descriptions of entities. Specifically, we learn a joint representation of each entity from two information sources: one is structure information, and another is its text description. The joint representation is the combination of the structure and text representations with a gating mechanism. The gate decides how much information from the structure or text representation will carry over to the final joint representation. In addition, we also introduce an attention mechanism to select the most related information from text description under different contexts. Experimental results on link prediction and triplet classification show that our joint models can handle the sparsity problem well and outperform the baseline method on all metrics with a large margin. Our contributions in this paper are summarized as follows. 
Knowledge Graph Embedding In this section, we briefly introduce the background knowledge about the knowledge graph embedding. Knowledge graph embedding aims to model multi-relational data (entities and relations) into a continuous low-dimensional vector space. Given a pair of entities $(h,t)$ and their relation $r$ , we can represent them with a triple $(h,r,t)$ . A score function $f(h,r, t)$ is defined to model the correctness of the triple $(h,r,t)$ , thus to distinguish whether two entities $h$ and $t$ are in a certain relationship $r$ . $f(h,r, t)$ should be larger for a golden triplet $(h, r, t)$ that corresponds to a true fact in real world, otherwise $r$0 should be lower for an negative triplet. The difference among the existing methods varies between linear BIBREF2 , BIBREF8 and nonlinear BIBREF3 score functions in the low-dimensional vector space. Among these methods, TransE BIBREF2 is a simple and effective approach, which learns the vector embeddings for both entities and relationships. Its basic idea is that the relationship between two entities is supposed to correspond to a translation between the embeddings of entities, that is, $\textbf {h}+ \mathbf {r}\approx \mathbf {t}$ when $(h,r,t)$ holds. TransE's score function is defined as: $$f(h,r,t)) &= -\Vert \textbf {h}+\mathbf {r}-\mathbf {t}\Vert _{2}^2$$ (Eq. 5) where $\textbf {h},\mathbf {t},\mathbf {r}\in \mathbb {R}^d$ are embeddings of $h,t,r$ respectively, and satisfy $\Vert \textbf {h}\Vert ^2_2=\Vert \mathbf {t}\Vert ^2_2=1$ . The $\textbf {h}, \mathbf {r}, \mathbf {t}$ are indexed by a lookup table respectively. Neural Text Encoding Given an entity in most of the existing knowledge bases, there is always an available corresponding text description with valuable semantic information for this entity, which can provide beneficial supplement for entity representation. To encode the representation of a entity from its text description, we need to encode the variable-length sentence to a fixed-length vector. There are several kinds of neural models used in sentence modeling. These models generally consist of a projection layer that maps words, sub-word units or n-grams to vector representations (often trained beforehand with unsupervised methods), and then combine them with the different architectures of neural networks, such as neural bag-of-words (NBOW), recurrent neural network (RNN) BIBREF9 , BIBREF10 , BIBREF11 and convolutional neural network (CNN) BIBREF12 , BIBREF13 . In this paper, we use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions. Bag-of-Words Encoder A simple and intuitive method is the neural bag-of-words (NBOW) model, in which the representation of text can be generated by summing up its constituent word representations. We denote the text description as word sequence $x_{1:n} = x_1,\cdots ,x_n$ , where $x_i$ is the word at position $i$ . The NBOW encoder is $$\mathrm {enc_1}(x_{1:n}) = \sum _{i=1}^{n} \mathbf {x}_i,$$ (Eq. 7) where $\mathbf {x}_i \in \mathbb {R}^d$ is the word embedding of $x_i$ . LSTM Encoder To address some of the modelling issues with NBOW, we consider using a bidirectional long short-term memory network (LSTM) BIBREF14 , BIBREF15 to model the text description. LSTM was proposed by BIBREF16 to specifically address this issue of learning long-term dependencies BIBREF17 , BIBREF18 , BIBREF16 in RNN. The LSTM maintains a separate memory cell inside it that updates and exposes its content only when deemed necessary. 
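Before the recurrent encoders are developed further below, a short numpy sketch of the TransE score from Eq. (5) and the NBOW encoder from Eq. (7) may help fix ideas. The embeddings here are random stand-ins for the learned lookup tables, and the unit-norm constraint on entity embeddings is omitted for brevity.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE score of Eq. (5): higher is better, so the squared L2
    distance between h + r and t is negated."""
    return -np.sum((h + r - t) ** 2)

def nbow_encode(word_ids, word_embeddings):
    """Neural bag-of-words encoder of Eq. (7): the description vector is
    simply the sum of its word embeddings; word order is ignored."""
    return word_embeddings[word_ids].sum(axis=0)

d = 50
rng = np.random.default_rng(0)
word_embeddings = rng.normal(size=(10000, d))     # stand-in lookup table
e_d = nbow_encode([12, 7, 331], word_embeddings)  # entity from its description
r = rng.normal(size=d)                            # relation embedding
t = rng.normal(size=d)                            # tail entity embedding
print(transe_score(e_d, r, t))
```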
Bidirectional LSTM (BLSTM) can be regarded as two separate LSTMs with different directions. One LSTM models the text description from left to right, and the other models it from right to left. We denote the outputs of the two LSTMs at position $i$ by $\overrightarrow{\mathbf {z}}_i$ and $\overleftarrow{\mathbf {z}}_i$ respectively. The combined output of the BLSTM at position $i$ is ${\mathbf {z}_i} = \overrightarrow{\mathbf {z}}_i \oplus \overleftarrow{\mathbf {z}}_i$, where $\oplus $ denotes the concatenation operation. The LSTM encoder combines all the outputs $\mathbf {z}_i \in \mathbb {R}^d$ of the BLSTM at the different positions: $$\mathrm {enc_2}(x_{1:n}) = \sum _{i=1}^{n} {\mathbf {z}_i}.$$ (Eq. 9) Attentive LSTM Encoder While the LSTM encoder has richer capacity than NBOW, it produces the same representation for the entire text description regardless of context. However, the text description may present an entity from various aspects, and different relations focus on only some of those aspects. This phenomenon also occurs in structure embeddings for an entity BIBREF8, BIBREF19. Given a relation for an entity, not all of the words/phrases in its text description are useful for modeling a specific fact. Some of them may be important for the given relation but useless for other relations. Therefore, we introduce an attention mechanism BIBREF20 and use an attention-based encoder that constructs contextual text encodings according to different relations. For each position $i$ of the text description, the attention for a given relation $r$ is defined as $\alpha _i(r)$: $$e_i(r) = \mathbf {v}_a^T \tanh (\mathbf {W}_a {\mathbf {z}}_i + \mathbf {U}_a \mathbf {r}), \quad \alpha _i(r)=\operatorname{softmax}(e_i(r))=\frac{\exp (e_i(r))}{\sum ^{n}_{j=1} \exp (e_j(r))},$$ (Eq. 12) where $\mathbf {r}\in \mathbb {R}^d$ is the relation embedding; ${\mathbf {z}}_i \in \mathbb {R}^d$ is the output of the BLSTM at position $i$; $\mathbf {W}_a,\mathbf {U}_a \in \mathbb {R}^{d\times d}$ are parameter matrices; and $\mathbf {v}_a \in \mathbb {R}^{d}$ is a parameter vector. The attention $\alpha _i(r)$ is interpreted as the degree to which the network attends to the partial representation $\mathbf {z}_{i}$ for a given relation $r$. The contextual encoding of the text description is formed as a weighted sum of the encodings $\mathbf {z}_{i}$ with attention: $$\mathrm {enc_3}(x_{1:n};r) = \sum _{i=1}^{n} \alpha _i(r) * \mathbf {z}_i.$$ (Eq. 13) Joint Structure and Text Encoder Since both the structure and the text description provide valuable information for an entity, we wish to integrate this information into a joint representation. We propose a unified model to learn a joint representation from both structure and text information; the whole model can be trained end-to-end. For an entity $e$, we denote by $\mathbf {e}_s$ its structure-based embedding and by $\mathbf {e}_d$ the encoding of its text description. The main concern is how to combine $\mathbf {e}_s$ and $\mathbf {e}_d$. To integrate the two kinds of entity representations, we use a gating mechanism to decide how much the joint representation depends on structure or text. The joint representation $\mathbf {e}$ is a linear interpolation between $\mathbf {e}_s$ and $\mathbf {e}_d$: $$\mathbf {e}= \textbf {g}_e \odot \mathbf {e}_s + (1-\textbf {g}_e)\odot \mathbf {e}_d,$$ (Eq. 14)
where $\textbf {g}_e$ is a gate that balances the two sources of information and whose elements lie in $[0,1]$, and $\odot $ is element-wise multiplication. Intuitively, when the gate is close to 0, the joint representation ignores the structure information and reduces to the text representation only. Training We use the contrastive max-margin criterion BIBREF2, BIBREF3 to train our model. Intuitively, the max-margin criterion provides an alternative to probabilistic, likelihood-based estimation methods by concentrating directly on the robustness of the decision boundary of a model BIBREF23. The main idea is that each triplet $(h,r,t)$ coming from the training corpus should receive a higher score than a triplet in which one of the elements is replaced with a random element. We assume that there are $n_t$ triplets in the training set and denote the $i$th triplet by $(h_i, r_i, t_i),(i = 1, 2, \cdots ,n_t)$. Each triplet has a label $y_i$ that indicates whether it is positive ($y_i = 1$) or negative ($y_i = 0$). The golden and negative triplets are then denoted by $\mathcal {D} = \lbrace (h_j, r_j, t_j) | y_j = 1\rbrace $ and $\mathcal {\hat{D}} = \lbrace (h_j, r_j, t_j) | y_j = 0\rbrace $, respectively. The positive examples are the triplets from the training dataset, and the negative examples are generated as follows: $ \mathcal {\hat{D}} = \lbrace (h_l, r_k, t_k) | h_l \ne h_k \wedge y_k = 1\rbrace \cup \lbrace (h_k, r_k, t_l) | t_l \ne t_k \wedge y_k = 1\rbrace \cup \lbrace (h_k, r_l, t_k) | r_l \ne r_k \wedge y_k = 1\rbrace $. The sampling strategy is the Bernoulli distribution described in BIBREF8. Letting the set of all parameters be $\Theta $, we minimize the following objective: $$J(\Theta )=\sum _{(h,r,t) \in \mathcal {D}}\sum _{(\hat{h},\hat{r},\hat{t}) \in \mathcal {\hat{D}}} \max \left(0,\gamma - f(h,r,t)+f(\hat{h},\hat{r},\hat{t})\right)+ \eta \Vert \Theta \Vert _2^2,$$ (Eq. 22) where $\gamma > 0$ is a margin between golden and negative triplets, and $f(h, r, t)$ is the score function. We use standard $L_2$ regularization of all the parameters, weighted by the hyperparameter $\eta $. Experiment In this section, we study the empirical performance of our proposed models on two benchmark tasks: triplet classification and link prediction. Datasets We use two popular knowledge bases in this paper: WordNet BIBREF0 and Freebase BIBREF1. Specifically, we use WN18 (a subset of WordNet) BIBREF24 and FB15K (a subset of Freebase) BIBREF2, since their text descriptions are easily publicly available. Table 1 lists the statistics of the two datasets. Link Prediction Link prediction is a subtask of knowledge graph completion that completes a triplet $(h, r, t)$ with $h$ or $t$ missing, i.e., predicts $t$ given $(h, r)$ or predicts $h$ given $(r, t)$. Rather than requiring one best answer, this task emphasizes ranking a set of candidate entities from the knowledge graph. Similar to BIBREF2, we use two evaluation metrics: (1) Mean Rank, the averaged rank of the correct entities or relations; and (2) Hits@p, the proportion of valid entities or relations ranked in the top $p$ predictions. Here, we set $p=10$ for entities and $p=1$ for relations. A good embedding model should achieve a lower Mean Rank and a higher Hits@p. We call this evaluation setting “Raw”. Since a corrupted triplet used for ranking may itself exist in the knowledge graph, it should be regarded as a valid triplet.
Hence, we should remove such corrupted-but-valid triplets that appear in the training, validation and test sets before ranking (except the test triplet of interest). We call this evaluation setting “Filter”. The evaluation results are reported under these two settings. We select the margin $\gamma $ among $\lbrace 1, 2\rbrace $, the embedding dimension $d$ among $\lbrace 20, 50, 100\rbrace $, the regularization $\eta $ among $\lbrace 0, 1E{-5}, 1E{-6}\rbrace $, and the two learning rates $\lambda _s$ and $\lambda _t$ among $\lbrace 0.001, 0.01, 0.05\rbrace $ to learn the parameters of the structure and text encodings. The dissimilarity measure is set to either the $L_1$ or the $L_2$ distance. In order to speed up convergence and avoid overfitting, we initialize the structure embeddings of entities and relations with the results of TransE. The embedding of a word is initialized by averaging the embeddings of the linked entities whose descriptions include this word. The remaining parameters are initialized by sampling from a uniform distribution over $[-0.1, 0.1]$. The final optimal configurations are: $\gamma = 2$, $d=20$, $\eta =1E{-5}$, $\lambda _s = 0.01$, $\lambda _t = 0.1$, and the $L_1$ distance on WN18; $\gamma =2$, $d=100$, $\eta =1E{-5}$, $\lambda _s = 0.01$, $d=20$0 , and $d=20$1 distance on FB15K. Experimental results on both WN18 and FB15K are shown in Table 2, where we use “Jointly(CBOW)”, “Jointly(LSTM)” and “Jointly(A-LSTM)” to denote our joint encoding models with the CBOW, LSTM and attentive LSTM text encoders. Our baseline is TransE, since the score function of our models is based on TransE. From the results, we observe that the proposed models surpass the baseline TransE on all metrics, which indicates that knowledge representation can benefit greatly from text descriptions. On WN18, the reason why “Jointly(A-LSTM)” is slightly worse than “Jointly(LSTM)” is probably that the number of relations is limited, so the attention mechanism has no obvious advantage. On FB15K, “Jointly(A-LSTM)” achieves the best performance and is significantly better than the baseline methods on Mean Rank. Although the Hits@10 of our models is worse than that of the best state-of-the-art method, TransD, it is worth noting that the score function of our models is based on TransE, not TransD. Our models are compatible with other state-of-the-art knowledge embedding models, and we believe they can be further improved by adopting the score functions of other state-of-the-art methods, such as TransD. Besides, textual information largely alleviates the issue of sparsity, and our model achieves a substantial improvement on Mean Rank compared with TransD. However, textual information may slightly degrade the representations of frequent entities, which are already well trained; this may be another reason why our Hits@10 is worse than that of TransD, which only utilizes structural information. For the comparison of Hits@10 across different kinds of relations, we categorized the relations into four classes according to the cardinalities of their head and tail arguments: 1-to-1, 1-to-many, many-to-1, and many-to-many. The mapping properties of relations follow the same rules as in BIBREF2. Table 3 shows the detailed results by mapping property of relations on FB15K. We can see that our models outperform the baseline TransE on all types of relations (1-to-1, 1-to-N, N-to-1 and N-to-N), especially when (1) predicting “1-to-1” relations and (2) predicting the 1 side of “1-to-N” and “N-to-1” relations.
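To make the relation-specific attention (Eqs. 12-13), the gated combination (Eq. 14), and the max-margin objective (Eq. 22) concrete, the following is a minimal PyTorch sketch, not the authors' implementation. The gate here is a single shared vector for brevity (the paper's gate g_e is indexed per entity), and all dimensions and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class GatedJointEntity(nn.Module):
    """Sketch of the attentive text encoding (Eqs. 12-13) and the gated
    combination of structure and text representations (Eq. 14)."""
    def __init__(self, dim=100):
        super().__init__()
        self.W_a = nn.Linear(dim, dim, bias=False)
        self.U_a = nn.Linear(dim, dim, bias=False)
        self.v_a = nn.Linear(dim, 1, bias=False)
        # Single shared gate vector; the paper's gate g_e is per entity.
        self.gate_logits = nn.Parameter(torch.zeros(dim))

    def forward(self, z, r, e_s):
        # z: (n, dim) BLSTM outputs; r: (dim,) relation embedding; e_s: (dim,) structure embedding
        e = self.v_a(torch.tanh(self.W_a(z) + self.U_a(r))).squeeze(-1)   # scores e_i(r)
        alpha = torch.softmax(e, dim=0)                                   # attention weights
        e_d = (alpha.unsqueeze(-1) * z).sum(dim=0)                        # Eq. 13: text encoding
        g = torch.sigmoid(self.gate_logits)                               # elements in [0, 1]
        return g * e_s + (1 - g) * e_d                                    # Eq. 14

def margin_loss(pos_scores, neg_scores, gamma=2.0):
    """Max-margin term of Eq. 22 for paired golden/corrupted triplets
    (the L2 regularization term is left to the optimizer)."""
    return torch.clamp(gamma - pos_scores + neg_scores, min=0).sum()
```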
To get more insight into how the joint representation is influenced by the structure and text information, we observe the activations of the gates, which control the balance between the two sources of information. We sort the entities by their frequencies, divide them into 50 equal-sized groups of different frequencies, and average the values of all gates in each group. Figure 3 gives the average gate values in ten groups from high to low frequency. We observe that the text information plays a more important role for the low-frequency entities. Triplet Classification Triplet classification is a binary classification task that aims to judge whether a given triplet $(h, r, t)$ is a correct fact or not. Since the test sets we use (WN18 and FB15K) contain only correct triplets, we construct negative triplets following the same setting used in BIBREF3. For triplet classification, we set a threshold $\delta _r$ for each relation $r$. $\delta _r$ is obtained by maximizing the classification accuracy on the validation set. For a given triplet $(h, r, t)$, if its score is larger than $\delta _r$, it is classified as positive, otherwise as negative. Table 4 shows the evaluation results of triplet classification. The results reveal that our joint encoding models are effective and outperform the baseline method. On WN18, “Jointly(A-LSTM)” achieves the best performance, and “Jointly(LSTM)” is only slightly worse; the reason is that the number of relations is relatively small, so the attention mechanism does not show an obvious advantage. On FB15K, the classification accuracy of “Jointly(A-LSTM)” reaches 91.5%, which is the best result and significantly higher than that of the state-of-the-art methods. Related Work Jointly learning the embeddings of a knowledge graph and text information has recently attracted much interest. Several methods use textual information to help KG representation learning. BIBREF3 represent an entity as the average of the word embeddings in its name, allowing the sharing of textual information located in similar entity names. BIBREF5 jointly embed knowledge and text into the same space by aligning entity names and their Wikipedia anchors, which brings promising improvements to the accuracy of predicting facts. BIBREF6 extend the joint model and align knowledge and words in the entity descriptions. However, these two works align the two kinds of embeddings at the word level, which can lose some semantic information at the phrase or sentence level. BIBREF25 also represent entities with entity names or the average of word embeddings in descriptions; however, their use of descriptions neglects word order, and their use of entity names struggles with ambiguity. BIBREF7 jointly learn knowledge graph embeddings with entity descriptions, using continuous bag-of-words and a convolutional neural network to encode the semantics of entity descriptions. However, they separate the objective into two energy functions for the structure-based and description-based representations. BIBREF26 learns both entity and relation embeddings by taking the KG and text into consideration using a CNN; to utilize both representations, they further need to estimate optimal weight coefficients to combine them in the specific tasks. Besides entity representation, there is also a considerable body of work BIBREF27, BIBREF28, BIBREF29 that maps textual relations and knowledge base relations into the same vector space and obtains substantial improvements.
While releasing the current paper we discovered work by BIBREF30 proposing a similar model with an attention mechanism, also evaluated on link prediction and triplet classification. In contrast, our work encodes the text description as a whole rather than explicitly segmenting it into sentences, a segmentation that breaks the order and coherence among sentences. Conclusion We propose a unified representation for knowledge graphs that utilizes both the structure and the text descriptions of entities. Experiments show that our proposed joint representation learning with a gating mechanism is effective and benefits modeling the meaning of an entity. In the future, we will consider the following research directions to improve our model:
[ "WordNet BIBREF0, Freebase BIBREF1, WN18 (a subset of WordNet) BIBREF24 , FB15K (a subset of Freebase) BIBREF2" ]
3,367
qasper_e
en
null
741e32f2f93410bcffc12553b94d72b680db834ee5a45669
What baseline model is used?
Introduction In the era of social media and networking platforms, Twitter has been plagued by abuse and harassment toward users, specifically women. In fact, online harassment has become very common on Twitter, and there has been a lot of criticism that Twitter has become a platform where many racists, misogynists and hate groups can express themselves openly. Online harassment usually takes verbal or graphical form and is considered harassment because it is neither invited nor consented to by the recipient. Monitoring content that includes sexism and sexual harassment is easier in traditional media than on online social media platforms like Twitter, mainly because of the large amount of user-generated content on these platforms. Research on the automated detection of content containing sexual harassment is therefore an important issue and could be the basis for removing that content or flagging it for human evaluation. The basic goal of this automatic classification is to significantly improve the process of detecting these types of hate speech on social media by reducing the time and effort required from human moderators. Previous studies have focused on collecting data about sexism and racism in very broad terms or have proposed two categories of sexism, benevolent and hostile BIBREF0, which overlooks other types of online harassment. However, there are few studies that focus on the different types of online harassment using natural language processing techniques. In this paper we present our work, which is part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment, and it is divided into two tasks. The first is the classification of tweets into harassment and non-harassment categories, while the second is the classification into specific harassment categories such as indirect harassment, physical harassment and sexual harassment. We use the dataset of the competition, which includes tweets labeled with the aforementioned categories. Our approach is based on recurrent neural networks, and in particular we use a deep, classification-specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach, such as multi-attention and single-attention models. The next section includes a short description of the related work, while the third section describes the dataset. After that, we describe our methodology. Finally, we describe the experiments and present the results and our conclusion. Related Work Waseem et al. BIBREF1 were the first to collect hateful tweets and categorize them as sexist, racist or neither. However, they did not provide specific definitions for each category. Jha and Mamidi BIBREF0 focused on just sexist tweets and proposed two categories of hostile and benevolent sexism. However, these categories were general, as they ignored other types of sexism happening in social media. Sharifirad S. and Matwin S. BIBREF2 proposed complementary categories of sexist language inspired by social science work. They categorized sexist tweets into the categories of indirect harassment, information threat, sexual harassment and physical harassment. In the following year the same authors proposed BIBREF3 a more comprehensive categorization of online harassment in social media, e.g.
Twitter, into the following categories: indirect harassment, information threat, sexual harassment, physical harassment and not sexist. For the detection of hate speech in social media like Twitter, many approaches have been proposed. Jha and Mamidi BIBREF0 tested a support vector machine, a bi-directional RNN encoder-decoder and FastText on hostile and benevolent sexist tweets. They also used SentiWordNet and a subjectivity lexicon on the extracted phrases to show the polarity of the tweets. Sharifirad et al. BIBREF4 trained, tested and evaluated different classification methods on the SemEval2018 dataset and chose the classifier with the highest accuracy for testing on each category of sexist tweets, in order to infer the mental state and the affectual state of the users who tweet in each category. To overcome the limitations of small datasets for sexist speech detection, Sharifirad S. et al. BIBREF5 applied text augmentation and text generation with some success. They generated new tweets by replacing words in order to increase the size of the training set. Moreover, in the presented text augmentation approach, the number of tweets in each class remains the same, but their words are augmented with words extracted from their ConceptNet relations and their descriptions extracted from Wikidata. Zhang et al. BIBREF6 combined convolutional and gated recurrent networks to detect hate speech in tweets. Others have proposed methods that are not based on deep learning. Burnap and Williams BIBREF7 used Support Vector Machines, Random Forests and a meta-classifier to distinguish between hateful and non-hateful messages. A survey of recent research in the field is presented in BIBREF8. For the problem of hate speech detection, a few approaches based on the attention mechanism have been proposed. Pavlopoulos et al. BIBREF9 proposed a novel, classification-specific attention mechanism that further improves the performance of the RNN for the detection of abusive content on the web. For emotion intensity prediction, a problem similar to ours, Xie et al. BIBREF10 proposed a novel attention mechanism for a CNN model that assigns attention-based weights to every convolution window. Park and Fung BIBREF11 transformed the classification into a two-step problem, where abusive text is first distinguished from non-abusive text, and then the class of abuse (sexism or racism) is determined. However, while the first part of the two-step classification performs quite well, it falls short in detecting the particular class the abusive text belongs to. Pitsilis et al. BIBREF12 proposed a detection scheme that is an ensemble of RNN classifiers, which incorporates various features associated with user-related information, such as the users' tendency towards racism or sexism. Dataset description The dataset from Twitter that we use in our work consists of a train set, a validation set and a test set. It was published for the "First workshop on categorizing different types of online harassment languages in social media". The whole dataset is divided into two categories, harassment and non-harassment tweets. Moreover, considering the type of harassment, the tweets are divided into three sub-categories: indirect harassment, sexual harassment and physical harassment. We can see in Table TABREF1 the class distribution of our dataset.
One important issue here is that the categories of indirect and physical harassment seem to be much rarer in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we perform data augmentation. However, the dataset is imbalanced and this has a significant impact on our results. Proposed methodology ::: Data augmentation As described before, one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. In particular, there are only a few instances of the indirect and physical harassment categories in the train set, while there are many more in the validation and test sets for these categories. To tackle this issue we apply a back-translation method BIBREF13, where we translate the indirect and physical harassment tweets of the train set from English to German, French and Greek. After that, we translate them back to English in order to achieve data augmentation. These "noisy" back-translated data increase the number of indirect and physical harassment tweets and significantly boost the performance of our models. Another way to enrich our models is the use of pre-trained word embeddings from 2B tweets (27B tokens) BIBREF14 for the initialization of the embedding layer. Proposed methodology ::: Text processing Before training our models we process the given tweets using a tweet pre-processor. The goal here is the cleaning and tokenization of the dataset. Proposed methodology ::: RNN Model and Attention Mechanism We present an attention-based approach for the problem of harassment detection in tweets. In this section, we describe the basic approach of our work. We use RNN models because of their ability to deal with sequence information. The RNN model is a chain of GRU cells BIBREF15 that transforms the tokens $w_{1}, w_{2},..., w_{k}$ of each tweet to the hidden states $h_{1}, h_{2},..., h_{k}$, followed by an LR layer that uses $h_{k}$ to classify the tweet as harassment or non-harassment (similarly for the other categories). Given the vocabulary V and a matrix E $\in $ $R^{d \times \vert V \vert }$ containing d-dimensional word embeddings, an initial $h_{0}$ and a tweet $w = <w_{1},.., w_{k}>$, the RNN computes $h_{1}, h_{2},..., h_{k}$, with $h_{t} \in R^{m}$, as follows: $h_{t} = (1 - z_{t}) \odot h_{t-1} + z_{t} \odot h^{\prime }_{t}$, $h^{\prime }_{t} = \tanh (W_{h}x_{t} + U_{h}(r_{t} \odot h_{t-1}) + b_{h})$, $z_{t} = \sigma (W_{z}x_{t} + U_{z}h_{t-1} + b_{z})$, $r_{t} = \sigma (W_{r}x_{t} + U_{r}h_{t-1} + b_{r})$, where $h^{\prime }_{t} \in R^{m}$ is the proposed hidden state at position t, obtained using the word embedding $x_{t}$ of token $w_{t}$ and the previous hidden state $h_{t-1}$, $\odot $ represents element-wise multiplication, $r_{t} \in R^{m}$ is the reset gate, $z_{t} \in R^{m}$ is the update gate, and $\sigma $ is the sigmoid function. Also $W_{h}, W_{z}, W_{r} \in R^{m \times d}$ and $U_{h}, U_{z}, U_{r} \in R^{m \times m}$, $b_{h}, b_{z}, b_{r} \in R^{m}$. After the computation of state $h_{k}$, the LR layer estimates the probability that tweet w should be considered as harassment, with $W_{p} \in R^{1 \times m}, b_{p} \in R$: $P_{RNN} = \sigma (W_{p}h_{k} + b_{p})$. We would like to add an attention mechanism similar to the one presented in BIBREF9, so that the LR layer considers the weighted sum $h_{sum}$ of all the hidden states instead of $h_{k}$: $h_{sum} = \sum _{t=1}^{k} \alpha _{t}h_{t}$, $P_{attentionRNN} = \sigma (W_{p}h_{sum} + b_{p})$. Alternatively, we could pass $h_{sum}$ through an MLP with k layers and then the LR layer would estimate the corresponding probability.
More formally, $P_{attentionRNN} = \sigma (W_{p}h_{*} + b_{p})$, where $h_{*}$ is the state that comes out of the MLP. The weights $\alpha _{t}$ are produced by an attention mechanism presented in BIBREF9 (see Fig. FIGREF7), which is an MLP with l layers. This attention mechanism differs from most previous ones BIBREF16, BIBREF17, because it is used in a classification setting, where there is no previously generated output sub-sequence to drive the attention. It assigns larger weights $\alpha _{t}$ to hidden states $h_{t}$ corresponding to positions where there is more evidence that the tweet should be classified as harassment (or any other specific type of harassment) or not. In our work we use four attention mechanisms instead of the one presented in BIBREF9; in particular, we use one attention mechanism per category. Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we use a projection layer for the word embeddings (see Fig. FIGREF2). In the next subsection we describe the model architecture of our approach. Proposed methodology ::: Model Architecture The embedding layer is initialized using the pre-trained word embeddings of dimension 200 from Twitter data that were described in a previous sub-section. After the embedding layer, we apply a spatial dropout layer, which drops a certain percentage of dimensions from each word vector in the training sample. The role of dropout is to improve generalization performance by preventing activations from becoming strongly correlated BIBREF18. Spatial dropout, which was proposed in BIBREF19, is an alternative way to use dropout with convolutional neural networks, as it is able to drop entire feature maps from the convolutional layer, which are then not used during pooling. After that, the word embeddings pass through a one-layer MLP with tanh as activation function and 128 hidden units, in order to project them into the vector space of our problem, considering that they were pre-trained on text with a different subject. In the next step the embeddings are fed into a unidirectional GRU with one stacked layer and size 128. We prefer the GRU to the LSTM because it is computationally more efficient. Also, the basic advantage of the LSTM, its ability to keep large text documents in memory, does not hold here, because tweets are not large text documents. The output states of the GRU pass through four self-attention mechanisms like the one described above BIBREF9, because we use one attention per category (see Fig. FIGREF7). Finally, a one-layer MLP with 128 nodes and ReLU as activation function computes the final score for each category. At this final stage we avoid using a softmax function to decide the harassment type given that the tweet is harassment; otherwise we would have to train our models taking into account only the harassment tweets, and this might have been a problem as the dataset is not large enough. Experiments ::: Training Models In this subsection we give the details of the training process of our models. Moreover, we describe the different models that we compare in our experiments. The batch size, which is the number of training samples considered at a time when updating the network weights, is set to 32, because our dataset is not large and small batches might help the model generalize better. We also set other hyperparameters as follows: epochs = 20, patience = 10.
As the early stopping criterion we choose the average AUC, because our dataset is imbalanced. The training process is based on the optimization of the loss function mentioned below and is carried out with the Adam optimizer BIBREF20, which is known for yielding quick convergence. We set the learning rate equal to 0.001: $L = \frac{1}{2}BCE(harassment) + \frac{1}{2}(\frac{1}{5}BCE(sexualH) + \frac{2}{5}BCE(indirectH)+\frac{2}{5}BCE(physicalH))$ where BCE is the binary cross-entropy loss function, $BCE = -\frac{1}{n}\sum _{i=1}^{n}[y_{i}\log (y^{\prime }_{i}) + (1 - y_{i})\log (1 - y^{\prime }_{i})]$, $i$ denotes the $i$th training sample, $y$ is the binary representation of the true harassment label, and $y^{\prime }$ is the predicted probability. In the loss function we apply equal weight to both tasks. However, in the second task (type of harassment classification) we apply higher weights to the categories that are harder to predict, due to the problem of class imbalance between the training, validation and test sets. Experiments ::: Evaluation and Results Each model produces four scores, and each score is the probability that a tweet includes harassment language, indirect harassment language, physical harassment language or sexual harassment language respectively. For any tweet, we first check the score of the harassment language; if it is less than a specified threshold, then the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to that threshold, then the harassment label is one and the type of harassment is the one among the three that has the greatest score (highest probability). We set this threshold equal to 0.33. We compare eight different models in our experiments. Four of them have a projected layer (see Fig. FIGREF2), while the others do not, and this is the only difference between these two groups of models. So we actually consider four base models in our experiments (each with or without a projected layer). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all the states that come out of the cells. The AttentionRNN model is the one presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for harassment language detection, which, instead of one attention, includes four attentions, one for each category. We evaluate our models using the F1 score, which is the harmonic mean of precision and recall. We ran the experiment for each model ten times and considered the average F1 score. The results are reported in Table TABREF11. Considering macro F1, the models that include the multi-attention mechanism outperform the others, and in particular the one with the projected layer has the highest performance. In three out of four pairs of models, the ones with the projected layer achieved better performance, so in most cases the addition of the projected layer yielded a significant enhancement. Conclusion - Future work We present an attention-based approach for the detection of harassment language in tweets as well as the detection of different types of harassment. Our approach is based on recurrent neural networks, and in particular we use a deep, classification-specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach and a few baseline methods.
According to the results of our experiments, and considering the F1 score, the multi-attention method with a projected layer achieved the highest performance. We also tackled the problem of the imbalance between the training, validation and test sets by performing the back-translation technique. In the future, we would like to perform more experiments with this dataset, applying different models that use BERT BIBREF21. Also, we would like to apply the models presented in this work to other datasets about hate speech in social media.
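To make the multi-attention architecture and the weighted loss described above concrete, here is a minimal PyTorch sketch rather than the authors' Keras implementation. The attention scorer is simplified to a single linear layer, spatial dropout is omitted, and the vocabulary size and the category column order in the loss are placeholder assumptions.

```python
import torch
import torch.nn as nn

class MultiAttentionGRU(nn.Module):
    """Sketch: embeddings -> tanh projection -> GRU -> one attention per category -> per-category probability."""
    def __init__(self, vocab_size, emb_dim=200, hidden=128, n_categories=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.project = nn.Sequential(nn.Linear(emb_dim, hidden), nn.Tanh())
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        # One attention scorer and one output MLP per category.
        self.att = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_categories)])
        self.out = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(n_categories)])

    def forward(self, tokens):                                   # tokens: (batch, seq_len)
        h, _ = self.gru(self.project(self.emb(tokens)))          # (batch, seq_len, hidden)
        probs = []
        for att, out in zip(self.att, self.out):
            alpha = torch.softmax(att(h), dim=1)                 # attention weights over positions
            h_sum = (alpha * h).sum(dim=1)                       # weighted sum of GRU states
            probs.append(torch.sigmoid(out(h_sum)))              # per-category probability
        return torch.cat(probs, dim=-1)                          # (batch, n_categories)

def weighted_loss(p, y):
    """The weighted BCE of the paper; columns assumed to be ordered
    [harassment, sexual, indirect, physical]."""
    bce = nn.functional.binary_cross_entropy
    return 0.5 * bce(p[:, 0], y[:, 0]) + 0.5 * (
        0.2 * bce(p[:, 1], y[:, 1]) + 0.4 * bce(p[:, 2], y[:, 2]) + 0.4 * bce(p[:, 3], y[:, 3]))
```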
[ " LastStateRNN, AvgRNN, AttentionRNN", "LastStateRNN, AvgRNN, AttentionRNN " ]
2,823
qasper_e
en
null
7dbc40699fa6742050f798cfeea408507ca4b90811be532d
What cyberbullying topics did they address?
Introduction Cyberbullying has been defined by the National Crime Prevention Council as the use of the Internet, cell phones or other devices to send or post text or images intended to hurt or embarrass another person. Various studies have estimated that between 10% and 40% of internet users are victims of cyberbullying BIBREF0 . Effects of cyberbullying can range from temporary anxiety to suicide BIBREF1 . Many high profile incidents have emphasized the prevalence of cyberbullying on social media. Most recently, in October 2017, the Swedish model Arvida Byström was cyberbullied to the extent of receiving rape threats after she appeared in an advertisement with hairy legs. Detection of cyberbullying in social media is a challenging task. The definition of what constitutes cyberbullying is quite subjective. For example, frequent use of swear words might be considered bullying by the general population. However, for teen-oriented social media platforms such as Formspring, this does not necessarily indicate bullying (Table TABREF9 ). Across multiple SMPs, cyberbullies attack victims on different topics such as race, religion, and gender. Depending on the topic of cyberbullying, the vocabulary and the perceived meaning of words vary significantly across SMPs. For example, in our experiments we found that for the word `fat', the most similar words in the Twitter dataset are `female' and `woman' (Table TABREF23 ). However, the other two datasets do not show such a particular bias against women. This platform-specific semantic similarity between words is a key aspect of cyberbullying detection across SMPs. The style of communication also varies significantly across SMPs. For example, Twitter posts are short and lack anonymity, whereas posts on Q&A-oriented SMPs are long and have the option of anonymity (Table TABREF7 ). Fast-evolving words and hashtags in social media make it difficult to detect cyberbullying using simple filtering approaches based on swear word lists. The option of anonymity in certain social networks also makes it harder to identify cyberbullying, as the profile and history of the bully might not be available. Past works on cyberbullying detection have at least one of the following three bottlenecks. First (Bottleneck B1), they target only one particular social media platform; how these methods perform across other SMPs is unknown. Second (Bottleneck B2), they address only one topic of cyberbullying, such as racism or sexism. Depending on the topic, the vocabulary and nature of cyberbullying change, and these models are not flexible in accommodating changes in the definition of cyberbullying. Third (Bottleneck B3), they rely on carefully handcrafted features such as swear word lists and POS tagging. However, these handcrafted features are not robust against variations in writing style. In contrast to existing work with these bottlenecks, this work targets three different types of social networks (Formspring: a Q&A forum, Twitter: microblogging, and Wikipedia: a collaborative knowledge repository) for three topics of cyberbullying (personal attack, racism, and sexism) without any explicit feature engineering, by developing deep learning based models along with transfer learning. We experimented with diverse traditional machine learning models (logistic regression, support vector machine, random forest, naive Bayes) and deep neural network models (CNN, LSTM, BLSTM, BLSTM with attention) using a variety of word representation methods (bag of character n-grams, bag of word unigrams, GloVe embeddings, SSWE embeddings).
A summary of our findings and research contributions is as follows. Datasets Please refer to Table TABREF7 for a summary of the datasets used. We performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. We cover three different types of social networks: a teen-oriented Q&A forum (Formspring), a large microblogging platform (Twitter), and a collaborative knowledge repository (Wikipedia talk pages). Each dataset addresses a different topic of cyberbullying. The Twitter dataset contains examples of racism and sexism. The Wikipedia dataset contains examples of personal attacks. The Formspring dataset, however, is not specifically about any single topic. All three datasets have the problem of class imbalance, where posts labeled as cyberbullying are in the minority compared to neutral posts. Variation in the number of posts across datasets also affects the vocabulary size, that is, the number of distinct words encountered in the dataset. We measure the size of a post in terms of the number of words in the post. For each dataset, there are only a few posts of large size. We truncate such large posts to the size of the post ranked at the 95th percentile in that dataset. For example, in the Wikipedia dataset, the largest post has 2846 words, but the size of the post ranked at the 95th percentile is only 231. Any post larger than 231 words in the Wikipedia dataset is truncated by keeping only the first 231 words. This truncation affects only a small minority of posts in each dataset; however, it is required for efficiently training the various models in our experiments. Details of each dataset are as follows. Formspring BIBREF2 : This was a question and answer based website where users could openly invite others to ask and answer questions. The dataset includes 12K annotated question and answer pairs. Each post is manually labeled by three workers. Among these pairs, 825 were labeled as containing cyberbullying content by at least two Amazon Mechanical Turk workers. Twitter BIBREF3 : This dataset includes 16K annotated tweets. The authors bootstrapped the corpus collection by performing an initial manual search for common slurs and terms pertaining to religious, sexual, gender, and ethnic minorities. Of the 16K tweets, 3117 are labeled as sexist, 1937 as racist, and the remaining are marked as neither sexist nor racist. Wikipedia BIBREF4 : For each page in Wikipedia, a corresponding talk page maintains the history of discussion among the users who participated in its editing. This dataset includes over 100k labeled discussion comments from English Wikipedia's talk pages. Each comment was labeled by 10 annotators via Crowdflower on whether it contains a personal attack. In total, 13590 comments are labeled as personal attacks. Use of Swear Words and Anonymity Please refer to Table TABREF9 . We use the following short forms in this section: B=Bullying, S=Swearing, A=Anonymous. Some of the values for the Twitter dataset are undefined, as Twitter does not allow anonymous postings. Use of swear words has been repeatedly linked to cyberbullying. However, a preliminary analysis of the datasets reveals that relying on swear word usage can lead to neither high precision nor high recall for cyberbullying detection. Swear word list based methods will have low precision, as P(B|S) is not close to 1. In fact, for the teen-oriented social network Formspring, 78% of the swearing posts are non-bullying.
Swear words based filtering will be irritating to the users in such SMPs where swear words are used casually. Swear word list based methods will also have a low recall as P(S INLINEFORM1 B) is not close to 1. For Twitter dataset, 82% of bullying posts do not use any swear words. Such passive-aggressive cyberbullying will go undetected with swear word list based methods. Anonymity is another clue that is used for detecting cyberbullying as bully might prefer to hide its identity. Anonymity definitely leads to increased use of swear words (P(S INLINEFORM2 A) INLINEFORM3 P(S)) and cyberbullying (P(B INLINEFORM4 A) INLINEFORM5 P(B), and P(B INLINEFORM6 A&S)) INLINEFORM7 P(B)). However, significant fraction of anonymous posts are non-bullying (P(B INLINEFORM8 A) not close to 1) and many of bullying posts are not anonymous (P(A INLINEFORM9 B) not close to 1). Further, anonymity might not be allowed by many SMPs such as Twitter. Related Work Cyberbullying is recognized as a phenomenon at least since 2003 BIBREF5 . Use of social media exploded with launching of multiple platforms such as Wikipedia (2001), MySpace (2003), Orkut (2004), Facebook (2004), and Twitter (2005). By 2006, researchers had pointed that cyberbullying was as serious phenomenon as offline bullying BIBREF6 . However, automatic detection of cyberbullying was addressed only since 2009 BIBREF7 . As a research topic, cyberbullying detection is a text classification problem. Most of the existing works fit in the following template: get training dataset from single SMP, engineer variety of features with certain style of cyberbullying as the target, apply a few traditional machine learning methods, and evaluate success in terms of measures such as F1 score and accuracy. These works heavily rely on handcrafted features such as use of swear words. These methods tend to have low precision for cyberbullying detection as handcrafted features are not robust against variations in bullying style across SMPs and bullying topics. Only recently, deep learning has been applied for cyberbullying detection BIBREF8 . Table TABREF27 summarizes important related work. Deep Neural Network (DNN) Based Models We experimented with four DNN based models for cyberbullying detection: CNN, LSTM, BLSTM, and BLSTM with attention. These models are listed in the increasing complexity of their neural architecture and amount of information used by these models. Please refer to Figure 1 for general architecture that we have used across four models. Various models differ only in the Neural Architecture layer while having identical rest of the layers. CNNs are providing state-of-the-results on extracting contextual feature for classification tasks in images, videos, audios, and text. Recently, CNNs were used for sentiment classification BIBREF9 . Long Short Term Memory networks are a special kind of RNN, capable of learning long-term dependencies. Their ability to use their internal memory to process arbitrary sequences of inputs has been found to be effective for text classification BIBREF10 . Bidirectional LSTMs BIBREF11 further increase the amount of input information available to the network by encoding information in both forward and backward direction. By using two directions, input information from both the past and future of the current time frame can be used. Attention mechanisms allow for a more direct dependence between the state of the model at different points in time. 
Importantly, the attention mechanism lets the model learn what to attend to based on the input sentence and what it has produced so far. The embedding layer processes a fixed-size sequence of words. Each word is represented as a real-valued vector, also known as a word embedding. We have experimented with three methods for initializing word embeddings: random, GloVe BIBREF12 , and SSWE BIBREF13 . During training, the model improves upon the initial word embeddings to learn task-specific word embeddings. We have observed that these task-specific word embeddings capture the SMP-specific and topic-specific style of cyberbullying. Using GloVe vectors instead of random vector initialization has been reported to improve performance for some NLP tasks. Most word embedding methods, such as GloVe, consider only the syntactic context of a word while ignoring the sentiment conveyed by the text. The SSWE method overcomes this problem by incorporating the text sentiment as one of the parameters for word embedding generation. We experimented with various dimension sizes for word embeddings; the experimental results reported here use a dimension size of 50. There was no significant variation in results for dimension sizes ranging from 30 to 200. To avoid overfitting, we used two dropout layers, one before the neural architecture layer and one after, with dropout rates of 0.25 and 0.5 respectively. The fully connected layer is a dense output layer with the number of neurons equal to the number of classes, followed by a softmax layer that provides softmax activation. All our models are trained using backpropagation. The optimizer used for training is Adam and the loss function is categorical cross-entropy. Besides learning the network weights, these methods also learn task-specific word embeddings tuned towards the bullying labels (see Section SECREF21 ). Our code is available at: https://github.com/sweta20/Detecting-Cyberbullying-Across-SMPs. Experiments Existing works have heavily relied on traditional machine learning models for cyberbullying detection. However, they do not study the performance of these models across multiple SMPs. We experimented with four such models: logistic regression (LR), support vector machine (SVM), random forest (RF), and naive Bayes (NB), as these are used in previous works (Table TABREF27 ). We used two data representation methods: character n-grams and word unigrams. Past work in the domain of detecting abusive language has shown that simple n-gram features are more powerful than linguistic and syntactic features, hand-engineered lexicons, and word and paragraph embeddings BIBREF14 . Compared to the DNN models, the performance of all four traditional machine learning models was significantly lower. Please refer to Table TABREF11 . All DNN models reported here were implemented using Keras. We pre-process the data, subjecting it to the standard operations of stop word removal, punctuation removal and lowercasing, before annotating it by assigning the respective labels to each comment. For each trained model, we report its performance after five-fold cross-validation. We use the following short forms. Effect of Oversampling Bullying Instances The training datasets had a major problem of class imbalance, with posts marked as bullying in the minority. As a result, all models were biased towards labeling posts as non-bullying. To remove this bias, we oversampled the data from the bullying class thrice. That is, we replicated bullying posts thrice in the training data.
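A minimal sketch of this replication-based oversampling, assuming the training data is held in a pandas DataFrame with a hypothetical label column:

```python
import pandas as pd

def oversample_bullying(train_df, label_col="label", bully_value=1, copies=3):
    """Replication-based oversampling: add extra copies of the bullying rows.
    The column name and label encoding are hypothetical."""
    bully_rows = train_df[train_df[label_col] == bully_value]
    augmented = pd.concat([train_df] + [bully_rows] * copies, ignore_index=True)
    # Shuffle so the replicated posts are not grouped together in mini-batches.
    return augmented.sample(frac=1.0, random_state=0).reset_index(drop=True)
```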
This significantly improved the performance of all DNN models, with a major leap in all three evaluation measures. Table TABREF17 shows the effect of oversampling for a variety of word embedding methods with BLSTM with attention as the detection model. Results for the other models are similar BIBREF15 . We can notice that the oversampled datasets (F+, T+, W+) yield far better performance than their counterparts (F, T, W respectively). Oversampling particularly helps the smallest dataset, Formspring, where the number of training instances for the bullying class is quite small (825) compared to the other two datasets (about 5K and 13K). We also experimented with varying the replication rate for bullying posts BIBREF15 . However, we observed that for bullying posts, a replication rate of three is good enough. Choice of Initial Word Embeddings and Model Initial word embeddings determine the data representation for DNN models. However, during training, DNN models modify these initial word embeddings to learn task-specific word embeddings. We have experimented with three methods to initialize word embeddings. Please refer to Table TABREF19 , which shows the effect of varying the initial word embeddings for multiple DNN models across datasets. We can notice that the initial word embeddings do not have a significant effect on cyberbullying detection when oversampling of bullying posts is done (rows corresponding to F+, T+, W+). In the absence of oversampling (rows corresponding to F, T, W), there is a gap in performance between the simplest (CNN) and the most complex (BLSTM with attention) models. However, this gap keeps shrinking as the size of the dataset increases. Table TABREF20 compares the performance of the four DNN models on three evaluation measures while using SSWE as the initial word embeddings. We noticed that most of the time LSTM performs worse than the other three models; however, the performance gap among the other three models is not significant. Task Specific Word Embeddings DNN models learn word embeddings over the training data. These learned embeddings across multiple datasets show the difference in the nature and style of bullying across cyberbullying topics and SMPs. Here we report results for the BLSTM with attention model; results for the other models are similar. We first verify that important words for each topic of cyberbullying form clusters in the learned embeddings. To enable the visualization of this grouping, we reduced dimensionality with t-SNE BIBREF16 , a well-known technique for dimensionality reduction particularly well suited for the visualization of high dimensional datasets. Please refer to Table TABREF22 , which shows important clusters observed in the t-SNE projection of the learned word embeddings. Each cluster shows that the words most relevant to a particular topic of bullying group together. We also observed changes in the meanings of words across topics of cyberbullying. Table TABREF23 shows the most similar words for a given query word in two datasets. The Twitter dataset, which is heavy on sexism and racism, considers the word 'slave' similar to targets of racism and sexism, whereas the Wikipedia dataset, which is about personal attacks, does not show such a bias. Transfer Learning We used transfer learning to check if the knowledge gained by DNN models on one dataset can be used to improve cyberbullying detection performance on other datasets. We report results where BLSTM with attention is used as the DNN model; results for the other models are similar BIBREF15 . We experimented with the following three flavors of transfer learning.
Complete Transfer Learning (TL1): In this flavor, a model trained on one dataset was directly used to detect cyberbullying in the other datasets without any extra training. TL1 resulted in significantly lower recall, indicating that the three datasets have different natures of cyberbullying with low overlap (Table TABREF25 ). However, precision was relatively high for TL1, indicating that the DNN models are cautious in labeling a post as bullying (Table TABREF25 ). TL1 also helps to measure the similarity in the nature of cyberbullying across the three datasets. We can observe that the nature of bullying in the Formspring and Wikipedia datasets is more similar to each other than to the Twitter dataset. This can be inferred from the fact that with TL1, cyberbullying detection performance for the Formspring dataset is higher when the base model is Wikipedia (precision=0.51 and recall=0.66) as compared to Twitter as the base model (precision=0.38 and recall=0.04). Similarly, for the Wikipedia dataset, Formspring acts as a better base model than Twitter when using the TL1 flavor of transfer learning. The nature of the SMP might be a factor behind this similarity in the nature of cyberbullying. Both Formspring and Wikipedia are task-oriented social networks (a Q&A forum and a collaborative knowledge repository, respectively) that allow anonymity and larger posts, whereas communication on Twitter is short, lacks anonymity, and is not oriented towards a particular task. Feature Level Transfer Learning (TL2): In this flavor, a model was trained on one dataset and only the learned word embeddings were transferred to another dataset for training a new model. As compared to TL1, the recall score improved dramatically with TL2 (Table TABREF25 ). The improvement in precision was also significant (Table TABREF25 ). These improvements indicate that learned word embeddings are an essential part of knowledge transfer across datasets for cyberbullying detection. Model Level Transfer Learning (TL3): In this flavor, a model was trained on one dataset and the learned word embeddings, as well as the network weights, were transferred to another dataset for training a new model. TL3 does not result in any significant improvement over TL2. This lack of improvement indicates that the transfer of network weights is not essential for cyberbullying detection and that the learned word embeddings are the key knowledge gained by the DNN models. DNN based models coupled with transfer learning beat the best-known results for all three datasets. The previous best F1 scores for the Wikipedia BIBREF4 and Twitter BIBREF8 datasets were 0.68 and 0.93 respectively. We achieve F1 scores of 0.94 for both these datasets using BLSTM with attention and feature level transfer learning (Table TABREF25 ). For the Formspring dataset, the authors did not report an F1 score; their method has an accuracy of 78.5% BIBREF2 . We achieve an F1 score of 0.95 with an accuracy of 98% on the same dataset. Conclusion and Future Work We have shown that DNN models can be used for cyberbullying detection on various topics across multiple SMPs, using three datasets and four DNN models. These models coupled with transfer learning beat state-of-the-art results for all three datasets. These models can be further improved with extra data, such as information about the profile and social graph of users. Most of the current datasets do not provide any information about the severity of bullying. If such fine-grained information is made available, then cyberbullying detection models can be further improved to take a variety of actions depending on the perceived seriousness of the posts.
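As an illustration of feature-level transfer (TL2), copying only the learned embedding matrix from a source-dataset model into a fresh target-dataset model, here is a minimal PyTorch sketch. It assumes both models expose an `embedding` layer over a shared vocabulary; the attribute names are placeholder assumptions.

```python
import torch

def transfer_embeddings(source_model, target_model):
    """Feature-level transfer (TL2): copy only the learned embedding weights
    from a model trained on the source dataset into a fresh model for the
    target dataset; the new model is then trained as usual."""
    with torch.no_grad():
        target_model.embedding.weight.copy_(source_model.embedding.weight)
    # Model-level transfer (TL3) would copy the remaining weights too, e.g.
    # target_model.load_state_dict(source_model.state_dict()).
    return target_model
```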
[ "personal attack, racism, and sexism", "racism, sexism, personal attack, not specifically about any single topic" ]
3,244
qasper_e
en
null
b1fdb80cd96f455762f4f3be78a4932e7ae5edb2802d9e5c
Do they report results only on English data?
Introduction Since its rise in 2013, the Islamic State of Iraq and Syria (ISIS) has utilized the Internet to spread its ideology, radicalize individuals, and recruit them to its cause. In comparison to other Islamic extremist groups, ISIS' use of technology was more sophisticated, voluminous, and targeted. For example, during ISIS' advance toward Mosul, ISIS-related accounts posted some 40,000 tweets in one day BIBREF0. However, this heavy engagement forced social media platforms to institute policies to prevent the unchecked dissemination of terrorist propaganda to their users, forcing ISIS to adopt other means to reach its target audience. One such approach was the publication of online magazines in different languages, including English. Although now discontinued, these online resources provided a window into ISIS ideology and recruitment, and into how the group wanted the world to perceive it. For example, after predominantly recruiting men, ISIS began to also include articles in its magazines that specifically addressed women. ISIS encouraged women to join the group either by traveling to the caliphate or by carrying out domestic attacks on behalf of ISIS in their respective countries. This tactical change concerned both practitioners and researchers in the counterterrorism community. New advancements in data science can shed light on exactly how the targeting of women in extremist propaganda works and whether it differs significantly from mainstream religious rhetoric. We utilize natural language processing methods to answer three questions: What are the main topics in women-related articles in ISIS' online magazines? What similarities and/or differences do these topics have with non-violent, non-Islamic religious material addressed specifically to women? What kinds of emotions do these articles evoke in their readers, and are there similarities in the emotions evoked by ISIS and non-violent religious materials? As these questions suggest, to understand what, if anything, makes extremist appeals distinctive, we need a point of comparison in terms of the outreach efforts to women from a mainstream, non-violent religious group. For this purpose, we rely on an online Catholic women's forum. Comparison between Catholic material and the content of ISIS' online magazines allows for novel insight into the distinctiveness of extremist rhetoric when targeted towards the female population. To accomplish this task, we employ topic modeling and an unsupervised emotion detection method. The rest of the paper is organized as follows: in Section SECREF2, we review related work on ISIS propaganda and applications of natural language methods. Section SECREF3 describes data collection and pre-processing. Section SECREF4 describes the approach in detail. Section SECREF5 reports the results, and finally, Section SECREF6 presents the conclusion. Related Work Soon after ISIS emerged and declared its caliphate, counterterrorism practitioners and political science researchers started to turn their attention towards understanding how the group operated. Researchers investigated the origins of ISIS, its leadership and funding, and how it rose to become a globally dominant non-state actor BIBREF1. This interest in the organization's distinctiveness immediately led to inquiries into ISIS' rhetoric, particularly its use of social media and online resources in recruitment and ideological dissemination.
For example, Al-Tamimi examines how ISIS differentiated itself from other jihadist movements by using social media with unprecedented efficiency to improve its image with locals BIBREF2. One of ISIS' most impressive applications of its online prowess was in the recruitment process. The organization has used a variety of materials, especially videos, to recruit both foreign and local fighters. Research shows that ISIS propaganda is designed to portray the organization as a provider of justice, governance, and development in a fashion that resonates with young westerners BIBREF3. This propaganda machine has become a significant area of research, with scholars such as Winter identifying key themes in it such as brutality, mercy, victimhood, war, belonging and utopianism. BIBREF4. However, there has been insufficient attention focused on how these approaches have particularly targeted and impacted women. This is significant given that scholars have identified the distinctiveness of this population when it comes to nearly all facets of terrorism. ISIS used different types of media to propagate its messages, such as videos, images, texts, and even music. Twitter was particularly effective and the Arabic Twitter app allowed ISIS to tweet extensively without triggering spam-detection mechanisms the platform uses BIBREF0. Scholars followed the resulting trove of data and this became the preeminent way in which they assess ISIS messages. For example, in BIBREF5 they use both lexical analysis of tweets as well as social network analysis to examine ISIS support or opposition on Twitter. Other researchers used data mining techniques to detect pro-ISIS user divergence behavior at various points in time BIBREF6. By looking at these works, the impact of using text mining and lexical analysis to address important questions becomes obvious. Proper usage of these tools allows the research community to analyze big chunks of unstructured data. This approach, however, became less productive as the social media networks began cracking down and ISIS recruiters moved off of them. With their ability to operate freely on social media now curtailed, ISIS recruiters and propagandists increased their attentiveness to another longstanding tool–English language online magazines targeting western audiences. Al Hayat, the media wing of ISIS, published multiple online magazines in different languages including English. The English online magazine of ISIS was named Dabiq and first appeared on the dark web on July 2014 and continued publishing for 15 issues. This publication was followed by Rumiyah which produced 13 English language issues through September 2017. The content of these magazines provides a valuable but underutilized resource for understanding ISIS strategies and how they appeal to recruits, specifically English-speaking audiences. They also provide a way to compare ISIS' approach with other radical groups. Ingram compared Dabiq contents with Inspire (Al Qaeda publication) and suggested that Al Qaeda heavily emphasized identity-choice, while ISIS' messages were more balanced between identity-choice and rational-choice BIBREF7. In another research paper, Wignell et al. BIBREF8 compared Dabiq and Rumiah by examining their style and what both magazine messages emphasized. Despite the volume of research on these magazines, only a few researchers used lexical analysis and mostly relied on experts' opinions. 
BIBREF9 is one exception to this approach where they used word frequency on 11 issues of Dabiq publications and compared attributes such as anger, anxiety, power, motive, etc. This paper seeks to establish how ISIS specifically tailored propaganda targeting western women, who became a particular target for the organization as the “caliphate” expanded. Although the number of recruits is unknown, in 2015 it was estimated that around 10 percent of all western recruits were female BIBREF10. Some researchers have attempted to understand how ISIS propaganda targets women. Kneip, for example, analyzed women's desire to join as a form of emancipation BIBREF11. We extend that line of inquiry by leveraging technology to answer key outstanding questions about the targeting of women in ISIS propaganda. To further assess how ISIS propaganda might affect women, we used emotion detection methods on these texts. Emotion detection techniques are mostly divided into lexicon-based or machine learning-based methods. Lexicon-based methods rely on several lexicons, while machine learning (ML) methods use algorithms that learn the relation between texts as inputs and emotions as the target, usually trained on a large corpus. Unsupervised methods usually use Non-negative matrix factorization (NMF) and Latent Semantic Analysis (LSA) BIBREF12 approaches. An important distinction that should be made when using text for emotion detection is that the emotion detected in the text and the emotion evoked in the reader of that text might differ. In the case of propaganda, it is more desirable to detect possible emotions that will be evoked in a hypothetical reader. In the next section, we describe methods to analyze content and a technique to find evoked emotions in a potential reader using available natural language processing tools. Data Collection & Pre-Processing ::: Data collection Finding useful collections of texts where ISIS targets women is a challenging task. Most of the available material does not reflect ISIS' official point of view or does not talk specifically about women. However, ISIS' online magazines are valuable resources for understanding how the organization attempts to appeal to western audiences, particularly women. Looking through both Dabiq and Rumiyah, many issues of the magazines contain articles specifically addressing women, usually with “to our sisters” incorporated into the title. Seven out of fifteen Dabiq issues and all thirteen issues of Rumiyah contain articles targeting women, clearly suggesting an increase in attention to women over time. We converted all the ISIS magazines to texts using pdf readers, and all articles that addressed women in both magazines (20 articles) were selected for our analysis. To facilitate comparison with a mainstream, non-violent religious group, we collected articles from catholicwomensforum.org, an online resource catering to Catholic women. We scraped 132 articles from this domain. While this number is large, the articles themselves are much shorter than those published by ISIS. These texts were pre-processed by tokenizing the sentences and eliminating non-word tokens and punctuation marks. All words were converted to lower case, and numbers and English stop words such as “our, is, did, can, etc.” were removed from the produced tokens. For the emotion analysis part, we used the spaCy library for part-of-speech tagging to identify the exact role of words in the sentence. 
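As a rough illustration, the pre-processing just described (tokenisation, lower-casing, removal of non-word tokens and stop words, and POS tagging for the later emotion step) could be sketched as follows. This is a minimal sketch, not the authors' code: the sample article is a hypothetical placeholder, spaCy's built-in stop-word list stands in for whichever stop-word list was used, and it assumes the small English spaCy model is installed.

```python
import spacy

# assumes: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def preprocess(article: str):
    """Lower-case, drop punctuation/numbers/stop words, keep (token, POS) pairs."""
    doc = nlp(article)
    tokens = [tok.text.lower() for tok in doc
              if tok.is_alpha and not tok.is_stop]            # cleaned tokens
    pos_tags = [(tok.text.lower(), tok.pos_) for tok in doc
                if tok.is_alpha and not tok.is_stop]           # word/role pairs for the emotion lookup
    return tokens, pos_tags

articles = ["To our sisters: an example sentence about hijrah and motherhood."]
cleaned = [preprocess(a) for a in articles]
print(cleaned[0][0])
print(cleaned[0][1])
```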
A word and its role have been used to look for emotional values of that word in the same role in the sentence. Data Collection & Pre-Processing ::: Pre-Processing ::: Text Cleaning and Pre-processing Most text and document datasets contain many unnecessary words such as stop words, misspellings, slang, etc. In many algorithms, especially statistical and probabilistic learning algorithms, noise and unnecessary features can have adverse effects on system performance. In this section, we briefly explain some techniques and methods for text cleaning and pre-processing text datasets BIBREF13. Data Collection & Pre-Processing ::: Pre-Processing ::: Tokenization Tokenization is a pre-processing method which breaks a stream of text into words, phrases, symbols, or other meaningful elements called tokens BIBREF14. The main goal of this step is to investigate the words in a sentence BIBREF14. Both text classification and text mining require a parser which processes the tokenization of the documents; for example, the sentence BIBREF15: After sleeping for four hours, he decided to sleep for another four. In this case, the tokens are as follows: {“After” “sleeping” “for” “four” “hours” “he” “decided” “to” “sleep” “for” “another” “four”}. Data Collection & Pre-Processing ::: Pre-Processing ::: Stop words Text and document classification includes many words which do not hold important significance for classification algorithms, such as {“a”, “about”, “above”, “across”, “after”, “afterwards”, “again”, $\hdots $}. The most common technique to deal with these words is to remove them from the texts and documents BIBREF16. Data Collection & Pre-Processing ::: Pre-Processing ::: Term Frequency-Inverse Document Frequency K. Sparck Jones BIBREF17 proposed inverse document frequency (IDF) as a method to be used in conjunction with term frequency in order to lessen the effect of implicitly common words in the corpus. IDF assigns a lower weight to terms that appear in many documents and a higher weight to rarer terms. The combination of TF and IDF is well known as term frequency-inverse document frequency (tf-idf), whose weight for a term t in a document d can be written as $W(t,d) = TF(t,d) \times \log (N/df(t))$, where $N$ is the number of documents and $df(t)$ is the number of documents containing the term t in the corpus. The first term improves the recall while the second term improves the precision of the word embedding BIBREF18. Although tf-idf tries to overcome the problem of common terms in the document, it still suffers from some other descriptive limitations. Namely, tf-idf cannot account for the similarity between the words in the document since each word is independently presented as an index. However, with the development of more complex models in recent years, new methods, such as word embedding, have been presented that can incorporate concepts such as similarity of words and part of speech tagging. Method In this section, we describe our methods used for comparing topics and evoked emotions in both ISIS and non-violent religious materials. Method ::: Content Analysis The key task in comparing ISIS material with that of a non-violent group involves analyzing the content of these two corpora to identify the topics. For our analysis, we considered a simple uni-gram model where each word is treated as a single unit. Understanding what words appear most frequently provides a simple metric for comparison. 
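As a small worked illustration of the tf-idf weighting described above, the following sketch computes $W(t,d) = TF(t,d) \times \log (N/df(t))$ on a tiny, hypothetical toy corpus; library implementations such as scikit-learn's TfidfVectorizer apply a smoothed variant of the same idea.

```python
import math
from collections import Counter

# hypothetical toy corpus standing in for the pre-processed articles
docs = [
    "sister marriage faith faith",
    "marriage motherhood family",
    "faith prayer family",
]
tokenised = [d.split() for d in docs]
N = len(tokenised)
df = Counter(t for doc in tokenised for t in set(doc))   # document frequency per term

def tfidf(doc):
    """Return W(t, d) = TF(t, d) * log(N / df(t)) for every term in one document."""
    tf = Counter(doc)
    return {t: tf[t] * math.log(N / df[t]) for t in tf}

for i, doc in enumerate(tokenised):
    print(i, tfidf(doc))
```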
To do so we normalized the count of words with the total number of words in each corpus to account for the size of each corpus. It should be noted, however, that a drawback of word frequencies is that there might be some dominant words that will overcome all the other contents without conveying much information. Topic modeling methods are a more powerful technique for understanding the contents of a corpus. These methods try to discover abstract topics in a corpus and reveal hidden semantic structures in a collection of documents. The most popular topic modeling methods use probabilistic approaches such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). LDA is a generalization of pLSA where documents are considered as a mixture of topics and the distribution of topics is governed by a Dirichlet prior ($\alpha $). Figure FIGREF12 shows the plate notation of the general LDA structure, where $\beta $ represents the prior of the word distribution per topic and $\theta $ refers to the topic distribution for documents BIBREF19. Since LDA is among the most widely utilized algorithms for topic modeling, we applied it to our data. However, the coherence of the topics produced by LDA is poorer than expected. To address this lack of coherence, we applied non-negative matrix factorization (NMF). This method decomposes the term-document matrix into two non-negative matrices as shown in Figure FIGREF13. The resulting non-negative matrices are such that their product closely approximates the original data. Mathematically speaking, given an input matrix of document-terms $V$, NMF finds two matrices by solving $\min _{W \ge 0,\, H \ge 0} \Vert V - WH \Vert _F$ BIBREF20, where W is the topic-word matrix and H represents the topic-document matrix. NMF appears to provide more coherent topics on specific corpora. O'Callaghan et al. compared LDA with NMF and concluded that NMF performs better on corpora from specific and non-mainstream areas BIBREF21. Our findings align with this assessment and thus our comparison of topics is based on NMF. Method ::: Emotion detection Propaganda effectiveness hinges on the emotions that it elicits. But detecting emotion in text requires that two essential challenges are overcome. First, emotions are generally complex and emotional representation models are correspondingly contested. Despite this, some models proposed by psychologists have gained wide-spread usage that extends to text-emotion analysis. Robert Plutchik presented a model that arranged emotions from basic to complex in a circumplex as shown in Figure FIGREF15. The model categorizes emotions into 8 main subsets and, with the addition of intensity and interactions, it classifies emotions into 24 classes BIBREF23. Other models have been developed to capture all emotions by defining a 3-dimensional model of pleasure, arousal, and dominance. The second challenge lies in using text for detecting the emotion evoked in a potential reader. Common approaches use either lexicon-based methods (such as keyword-based or ontology-based models) or machine learning-based models (usually using a large corpus with labeled emotions) BIBREF12. These methods are suited to addressing the emotions that exist in the text, but in the case of propaganda we are more interested in emotions that are elicited in the reader of such materials. The closest analogy to this problem can be found in research that seeks to model the feelings of people after reading a news article. One solution for this type of problem is to use an approach called Depechemood. 
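A compact sketch of the NMF topic-modelling step described above is given below, using scikit-learn: the document-term matrix V is factorised into non-negative W and H, and the highest-weighted words of each topic are read off. The toy documents are hypothetical placeholders, and the number of topics is reduced from the 10 used on the full corpora.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "marriage and motherhood in the family",
    "hijrah and faith of the sisters",
    "prayer, faith and spousal relations",
    "divorce, marriage and the role of women",
]

vectorizer = TfidfVectorizer(stop_words="english")
V = vectorizer.fit_transform(documents)               # document-term matrix

n_topics = 2                                          # the paper selects 10 on the full corpora
nmf = NMF(n_components=n_topics, init="nndsvd", random_state=0)
W = nmf.fit_transform(V)                              # document-topic weights
H = nmf.components_                                   # topic-word weights

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(H):
    top = [terms[i] for i in topic.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```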
Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with an average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive an emotion-word matrix for these words. Due to limitations of their experiment setup, the emotion categories that they present do not exactly match the emotions from the Plutchik wheel categories. However, they still provide a good sense of the general feeling of an individual after reading an article. The emotion categories of Depechemood are: AFRAID, AMUSED, ANGRY, ANNOYED, DON'T CARE, HAPPY, INSPIRED, SAD. Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories. We present our findings using this approach in the results section. Results In this section, we present the results of our analysis based on the contents of ISIS propaganda materials as compared to articles from the Catholic women forum. We then present the results of the emotion analysis conducted on both corpora. Results ::: Content Analysis After pre-processing the text, both corpora were analyzed for word frequencies. These word frequencies have been normalized by the number of words in each corpus. Figure FIGREF17 shows the most common words in each of these corpora. A comparison of common words suggests that those related to marital relationships (husband, wife, etc.) appear in both corpora, but the religious theme of ISIS material appears to be stronger. A stronger comparison can be made using topic modeling techniques to discover the main topics of these documents. Although we also applied LDA, the NMF results outperform the LDA topics, due to the nature of these corpora. Also, the smaller number of ISIS documents might contribute to LDA's comparatively worse performance. Therefore, we present only NMF results. Based on their coherence, we selected 10 topics for analysis within both corpora. Table TABREF18 and Table TABREF19 show the most important words in each topic with a general label that we assigned to the topic manually. Based on the NMF output, ISIS articles that address women include topics mainly about Islam, women's role in early Islam, hijrah (moving to another land), spousal relations, marriage, and motherhood. The topics generated from the Catholic women forum are clearly quite different. Some, however, exist in both contexts. More specifically, marriage/divorce, motherhood, and to some extent spousal relations appeared in both generated topics. This suggests that, when addressing women in a religious context, these topics may be very broadly effective and appeal to the female audience. More importantly, suitable topic modeling methods will be able to identify these similarities no matter the size of the corpus we are working with. Although finding the similarities/differences between topics in these two groups of articles might provide some new insights, we turn to emotion analysis to also compare the emotions evoked in the audience. Results ::: Emotion Analysis We rely on Depechemood dictionaries to analyze emotions in both corpora. These dictionaries are freely available and come in multiple arrangements. We used a version that includes words with their part of speech (POS) tags. Only words that exist in the Depechemood dictionary with the same POS tag are considered for our analysis. 
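A minimal sketch of this lexicon-based scoring, assuming a DepecheMood-style (word, POS) dictionary with scores in [0, 1] for the eight categories, is shown below; the tiny lexicon and document are hypothetical stand-ins for the real resource.

```python
from collections import defaultdict

EMOTIONS = ["AFRAID", "AMUSED", "ANGRY", "ANNOYED",
            "DONT_CARE", "HAPPY", "INSPIRED", "SAD"]

# (word, POS) -> emotion scores; entries here are illustrative, not real DepecheMood values
lexicon = {
    ("struggle", "NOUN"): {"AFRAID": 0.4, "INSPIRED": 0.3, "SAD": 0.3},
    ("reward",   "NOUN"): {"HAPPY": 0.5, "INSPIRED": 0.5},
    ("fight",    "VERB"): {"ANGRY": 0.6, "AFRAID": 0.4},
}

def emotion_profile(tagged_tokens):
    """Aggregate per-word emotion scores for one article and normalise them."""
    totals = defaultdict(float)
    for word, pos in tagged_tokens:
        for emotion, score in lexicon.get((word, pos), {}).items():
            totals[emotion] += score
    norm = sum(totals.values()) or 1.0
    return {e: totals.get(e, 0.0) / norm for e in EMOTIONS}

doc = [("struggle", "NOUN"), ("reward", "NOUN"), ("fight", "VERB")]
print(emotion_profile(doc))
```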
We aggregated the score for each word and normalized each article by emotions. To better compare the results, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource, which is available in the NLTK python library. Figure FIGREF22 shows the aggregated score for different feelings in our corpora. Both Catholic and ISIS-related materials score highest in the “inspired” category. Furthermore, in both cases, being afraid has the lowest score. However, this is not the case for random news material such as the Reuters corpus, which is not that inspiring and, according to this method, seems to cause more fear in its audience. We investigate these results further by looking at the most inspiring words detected in these two corpora. Table TABREF24 presents 10 words that are among the most inspiring in both corpora. The comparison of the two lists indicates that the method picks very different words in each corpus to reach the same conclusion. Also, we looked at separate articles in each of the issues of ISIS material addressing women. Figure FIGREF23 shows emotion scores in each of the 20 issues of ISIS propaganda. As demonstrated, in every separate article, this method gives the highest score to evoking inspiration in the reader. Also, in most of these issues the method gave “being afraid” the lowest score. Conclusion and Future Work In this paper, we have applied natural language processing methods to ISIS propaganda materials in an attempt to understand these materials using available technologies. We also compared these texts with those of a non-violent religious group (both focusing on women-related articles) to examine possible similarities or differences in their approaches. To compare the contents, we used word frequency and topic modeling with NMF. Also, our results showed that NMF outperforms LDA due to the niche domain and relatively small number of documents. The results suggest that certain topics play particularly important roles in ISIS propaganda targeting women. These relate to the role of women in early Islam, Islamic ideology, marriage/divorce, motherhood, spousal relationships, and hijrah (moving to a new land). Comparing these topics with those that appeared on a Catholic women forum, it seems that both ISIS and non-violent groups use topics about motherhood, spousal relationships, and marriage/divorce when they address women. Moreover, we used Depechemood methods to analyze the emotions that these materials are likely to elicit in readers. The result of our emotion analysis suggests that both corpora use words that aim to inspire readers while avoiding fear. However, the actual words that lead to these effects are very different in the two contexts. Overall, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight into extremist propaganda that can assist the counterterrorism community.
[ "Yes", "Yes" ]
3,634
qasper_e
en
null
9559dbefad997c0a896d9cae62aeeb1703add67a670930fb
What dataset is used for this study?
Introduction In recent years, there has been a movement to leverage social medial data to detect, estimate, and track the change in prevalence of disease. For example, eating disorders in Spanish language Twitter tweets BIBREF0 and influenza surveillance BIBREF1 . More recently, social media has been leveraged to monitor social risks such as prescription drug and smoking behaviors BIBREF2 , BIBREF3 , BIBREF4 as well as a variety of mental health disorders including suicidal ideation BIBREF5 , attention deficient hyperactivity disorder BIBREF6 and major depressive disorder BIBREF7 . In the case of major depressive disorder, recent efforts range from characterizing linguistic phenomena associated with depression BIBREF8 and its subtypes e.g., postpartum depression BIBREF5 , to identifying specific depressive symptoms BIBREF9 , BIBREF10 e.g., depressed mood. However, more research is needed to better understand the predictive power of supervised machine learning classifiers and the influence of feature groups and feature sets for efficiently classifying depression-related tweets to support mental health monitoring at the population-level BIBREF11 . This paper builds upon related works toward classifying Twitter tweets representing symptoms of major depressive disorder by assessing the contribution of lexical features (e.g., unigrams) and emotion (e.g., strongly negative) to classification performance, and by applying methods to eliminate low-value features. METHODS Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression") or evidence of depression (e.g., “depressed over disappointment"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps"), disturbed sleep (e.g., “another restless night"), or fatigue or loss of energy (e.g., “the fatigue is unbearable") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0. Features Furthermore, this dataset was encoded with 7 feature groups with associated feature values binarized (i.e., present=1 or absent=0) to represent potentially informative features for classifying depression-related classes. 
We describe the feature groups by type, subtype, and provide one or more examples of words representing the feature subtype from a tweet: lexical features, unigrams, e.g., “depressed”; syntactic features, parts of speech, e.g., “cried” encoded as V for verb; emotion features, emoticons, e.g., :( encoded as SAD; demographic features, age and gender e.g., “this semester” encoded as an indicator of 19-22 years of age and “my girlfriend” encoded as an indicator of male gender, respectively; sentiment features, polarity and subjectivity terms with strengths, e.g., “terrible” encoded as strongly negative and strongly subjective; personality traits, neuroticism e.g., “pissed off” implies neuroticism; LIWC Features, indicators of an individual's thoughts, feelings, personality, and motivations, e.g., “feeling” suggestions perception, feeling, insight, and cognitive mechanisms experienced by the Twitter user. A more detailed description of leveraged features and their values, including LIWC categories, can be found in BIBREF10 . Based on our prior initial experiments using these feature groups BIBREF10 , we learned that support vector machines perform with the highest F1-score compared to other supervised approaches. For this study, we aim to build upon this work by conducting two experiments: 1) to assess the contribution of each feature group and 2) to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. Feature Contribution Feature ablation studies are conducted to assess the informativeness of a feature group by quantifying the change in predictive power when comparing the performance of a classifier trained with the all feature groups versus the performance without a particular feature group. We conducted a feature ablation study by holding out (sans) each feature group and training and testing the support vector model using a linear kernel and 5-fold, stratified cross-validation. We report the average F1-score from our baseline approach (all feature groups) and report the point difference (+ or -) in F1-score performance observed by ablating each feature set. By ablating each feature group from the full dataset, we observed the following count of features - sans lexical: 185, sans syntactic: 16,935, sans emotion: 16,954, sans demographics: 16,946, sans sentiment: 16,950, sans personality: 16,946, and sans LIWC: 16,832. In Figure 1, compared to the baseline performance, significant drops in F1-scores resulted from sans lexical for depressed mood (-35 points), disturbed sleep (-43 points), and depressive symptoms (-45 points). Less extensive drops also occurred for evidence of depression (-14 points) and fatigue or loss of energy (-3 points). In contrast, a 3 point gain in F1-score was observed for no evidence of depression. We also observed notable drops in F1-scores for disturbed sleep by ablating demographics (-7 points), emotion (-5 points), and sentiment (-5 points) features. These F1-score drops were accompanied by drops in both recall and precision. We found equal or higher F1-scores by removing non-lexical feature groups for no evidence of depression (0-1 points), evidence of depression (0-1 points), and depressive symptoms (2 points). Unsurprisingly, lexical features (unigrams) were the largest contributor to feature counts in the dataset. We observed that lexical features are also critical for identifying depressive symptoms, specifically for depressed mood and for disturbed sleep. 
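A minimal sketch of this ablation loop, assuming an already-encoded binary feature matrix and hypothetical column indices for each feature group, might look as follows; scikit-learn's LinearSVC stands in here for the linear-kernel support vector machine, and the placeholder data is random rather than the annotated tweets.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 30))          # binarised features (placeholder)
y = rng.integers(0, 2, size=200)                # e.g. depressed mood vs. not (placeholder)

feature_groups = {                              # column indices per group (placeholder)
    "lexical":   list(range(0, 10)),
    "emotion":   list(range(10, 15)),
    "sentiment": list(range(15, 20)),
    "LIWC":      list(range(20, 30)),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
baseline = cross_val_score(LinearSVC(), X, y, cv=cv, scoring="f1").mean()
print(f"all features: F1={baseline:.2f}")

for name, cols in feature_groups.items():
    keep = [c for c in range(X.shape[1]) if c not in set(cols)]   # ablate one group
    score = cross_val_score(LinearSVC(), X[:, keep], y, cv=cv, scoring="f1").mean()
    print(f"sans {name}: F1={score:.2f} ({score - baseline:+.2f})")
```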
For the classes higher in the hierarchy - no evidence of depression, evidence of depression, and depressive symptoms - the classifier produced consistent F1-scores, even slightly above the baseline for depressive symptoms and minor fluctuations of change in recall and precision when removing other feature groups suggesting that the contribution of non-lexical features to classification performance was limited. However, notable changes in F1-score were observed for the classes lower in the hierarchy including disturbed sleep and fatigue or loss of energy. For instance, changes in F1-scores driven by both recall and precision were observed for disturbed sleep by ablating demographics, emotion, and sentiment features, suggesting that age or gender (“mid-semester exams have me restless”), polarity and subjective terms (“lack of sleep is killing me”), and emoticons (“wide awake :(”) could be important for both identifying and correctly classifying a subset of these tweets. Feature Elimination Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach: Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset. Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation. Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class. All experiments were programmed using scikit-learn 0.18. The initial matrices of almost 17,000 features were reduced by eliminating features that only occurred once in the full dataset, resulting in 5,761 features. We applied Chi-Square feature selection and plotted the top-ranked subset of features for each percentile (at 5 percent intervals cumulatively added) and evaluated their predictive contribution using the support vector machine with linear kernel and stratified, 5-fold cross validation. In Figure 2, we observed optimal F1-score performance using the following top feature counts: no evidence of depression: F1: 87 (15th percentile, 864 features), evidence of depression: F1: 59 (30th percentile, 1,728 features), depressive symptoms: F1: 55 (15th percentile, 864 features), depressed mood: F1: 39 (55th percentile, 3,168 features), disturbed sleep: F1: 46 (10th percentile, 576 features), and fatigue or loss of energy: F1: 72 (5th percentile, 288 features) (Figure 1). We note F1-score improvements for depressed mood from F1: 13 at the 1st percentile to F1: 33 at the 20th percentile. We observed peak F1-score performances at low percentiles for fatigue or loss of energy (5th percentile), disturbed sleep (10th percentile) as well as depressive symptoms and no evidence of depression (both 15th percentile) suggesting fewer features are needed to reach optimal performance. 
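The three-tiered elimination could be sketched roughly as below, again on hypothetical placeholder data: the reduction step drops features occurring fewer than twice, the selection step sweeps Chi-Square SelectPercentile in 5-point increments under stratified 5-fold cross-validation, and the best-scoring percentile is reported.

```python
import numpy as np
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 500))                 # binarised features (placeholder)
y = rng.integers(0, 2, size=300)                        # one class of the hierarchy (placeholder)

X = X[:, X.sum(axis=0) >= 2]                            # step 1: reduction (occur at least twice)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = {}
for pct in range(5, 101, 5):                            # step 2: selection sweep
    model = make_pipeline(SelectPercentile(chi2, percentile=pct), LinearSVC())
    scores[pct] = cross_val_score(model, X, y, cv=cv, scoring="f1").mean()

best_pct = max(scores, key=scores.get)                  # step 3: first best percentile
print(f"best percentile: {best_pct} (F1={scores[best_pct]:.2f})")
```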
In contrast, peak F1-score performances occurred at moderate percentiles for evidence of depression (30th percentile) and depressed mood (55th percentile) suggesting that more features are needed to reach optimal performance. However, one notable difference between these two classes is the dramatic F1-score improvements for depressed mood i.e., 20 point increase from the 1st percentile to the 20th percentile compared to the more gradual F1-score improvements for evidence of depression i.e., 11 point increase from the 1st percentile to the 20th percentile. This finding suggests that for identifying depressed mood a variety of features are needed before incremental gains are observed. RESULTS From our annotated dataset of Twitter tweets (n=9,300 tweets), we conducted two feature studies to better understand the predictive power of several feature groups for classifying whether or not a tweet contains no evidence of depression (n=6,829 tweets) or evidence of depression (n=2,644 tweets). If there was evidence of depression, we determined whether the tweet contained one or more depressive symptoms (n=1,656 tweets) and further classified the symptom subtype of depressed mood (n=1,010 tweets), disturbed sleep (n=98 tweets), or fatigue or loss of energy (n=427 tweets) using support vector machines. From our prior work BIBREF10 and in Figure 1, we report the performance for prediction models built by training a support vector machine using 5-fold, stratified cross-validation with all feature groups as a baseline for each class. We observed high performance for no evidence of depression and fatigue or loss of energy and moderate performance for all remaining classes. Discussion We conducted two feature study experiments: 1) a feature ablation study to assess the contribution of feature groups and 2) a feature elimination study to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. Future Work Our next step is to address the classification of rarer depressive symptoms suggestive of major depressive disorder from our dataset and hierarchy including inappropriate guilt, difficulty concentrating, psychomotor agitation or retardation, weight loss or gain, and anhedonia BIBREF15 , BIBREF16 . We are developing a population-level monitoring framework designed to estimate the prevalence of depression (and depression-related symptoms and psycho-social stressors) over millions of United States-geocoded tweets. Identifying the most discriminating feature sets and natural language processing classifiers for each depression symptom is vital for this goal. Conclusions In summary, we conducted two feature study experiments to assess the contribution of feature groups and to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. From these experiments, we conclude that simple lexical features and reduced feature sets can produce comparable results to the much larger feature dataset. Acknowledgments Research reported in this publication was supported by the National Library of Medicine of the [United States] National Institutes of Health under award numbers K99LM011393 and R00LM011393. This study was granted an exemption from review by the University of Utah Institutional Review Board (IRB 00076188). Note that in order to protect tweeter anonymity, we have not reproduced tweets verbatim. Example tweets shown were generated by the researchers as exemplars only. 
Finally, we would like to thank the anonymous reviewers of this paper for their valuable comments.
[ "BIBREF12 , BIBREF13", "an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13" ]
1,939
qasper_e
en
null
3ead7355dc4b6356c66ae3a8cffe00652a7d35a899dcb316
Which languages are similar to each other?
Introduction Accurate language identification (LID) is the first step in many natural language processing and machine comprehension pipelines. If the language of a piece of text is known then the appropriate downstream models like parts of speech taggers and language models can be applied as required. LID is further also an important step in harvesting scarce language resources. Harvested data can be used to bootstrap more accurate LID models and in doing so continually improve the quality of the harvested data. Availability of data is still one of the big roadblocks for applying data driven approaches like supervised machine learning in developing countries. Having 11 official languages of South Africa has lead to initiatives (discussed in the next section) that have had positive effect on the availability of language resources for research. However, many of the South African languages are still under resourced from the point of view of building data driven models for machine comprehension and process automation. Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages. This paper presents a hierarchical naive Bayesian and lexicon based classifier for LID of short pieces of text of 15-20 characters long. The algorithm is evaluated against recent approaches using existing test sets from previous works on South African languages as well as the Discriminating between Similar Languages (DSL) 2015 and 2017 shared tasks. Section SECREF2 reviews existing works on the topic and summarises the remaining research problems. Section SECREF3 of the paper discusses the proposed algorithm and Section SECREF4 presents comparative results. Related Works The focus of this section is on recently published datasets and LID research applicable to the South African context. An in depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0. The datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and also available on Kaggle . The DSL datasets, like other LID datasets, consists of text sentences labelled by language. The 2017 dataset, for example, contains 14 languages over 6 language groups with 18000 training samples and 1000 testing samples per language. The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average. In South Africa, a multilingual corpus of academic texts produced by university students with different mother tongues is being developed BIBREF3. The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1000 paragraphs of 235 languages. A possibly useful link can also be made BIBREF5 between Native Language Identification (NLI) (determining the native language of the author of a text) and Language Variety Identification (LVI) (classification of different varieties of a single language) which opens up more datasets. The Leipzig Corpora Collection BIBREF6, the Universal Declaration of Human Rights and Tatoeba are also often used sources of data. 
The NCHLT text corpora BIBREF7 is likely a good starting point for a shared LID task dataset for the South African languages BIBREF8. The NCHLT text corpora contains enough data to have 3500 training samples and 600 testing samples of 300+ character sentences per language. Researchers have recently started applying existing algorithms for tasks like neural machine translation in earnest to such South African language datasets BIBREF9. Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID . Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames between Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful to reduce the requirement for data. Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task. In these models text features can include character and word n-grams as well as informative character and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available. In summary, LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. Increased confusion can in general be expected between shorter pieces of text and languages that are more closely related. Shallow methods still seem to work well compared to deeper models for LID. Other remaining research opportunities seem to be data harvesting, building standardised datasets and creating shared tasks for South Africa and Africa. Support for language codes that include more languages seems to be growing and discoverability of research is improving with more survey papers coming out. Paywalls also seem to no longer be a problem; the references used in this paper was either openly published or available as preprint papers. Methodology The proposed LID algorithm builds on the work in BIBREF8 and BIBREF26. We apply a naive Bayesian classifier with character (2, 4 & 6)-grams, word unigram and word bigram features with a hierarchical lexicon based classifier. The naive Bayesian classifier is trained to predict the specific language label of a piece of text, but used to first classify text as belonging to either the Nguni family, the Sotho family, English, Afrikaans, Xitsonga or Tshivenda. 
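A minimal sketch of this first stage, assuming scikit-learn with hashed character (2, 4 & 6)-gram and word (1, 2)-gram features feeding a multinomial naive Bayes classifier (the smoothing value and hashed features follow the implementation details given below), might look like this; the toy training texts and the language-to-family mapping are illustrative placeholders.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline, make_union

LANG_TO_GROUP = {"zul": "nguni", "xho": "nguni", "nso": "sotho", "sot": "sotho",
                 "eng": "eng", "afr": "afr", "ven": "ven", "tso": "tso"}

# alternate_sign=False keeps hashed counts non-negative, as multinomial NB requires
features = make_union(
    HashingVectorizer(analyzer="char", ngram_range=(2, 2), alternate_sign=False),
    HashingVectorizer(analyzer="char", ngram_range=(4, 4), alternate_sign=False),
    HashingVectorizer(analyzer="char", ngram_range=(6, 6), alternate_sign=False),
    HashingVectorizer(analyzer="word", ngram_range=(1, 2), alternate_sign=False),
)
nb = make_pipeline(features, MultinomialNB(alpha=0.01))

texts = ["ngiyabonga kakhulu", "ke a leboga", "thank you very much", "baie dankie"]
labels = ["zul", "nso", "eng", "afr"]
nb.fit(texts, labels)                               # trained on specific language labels

pred_lang = nb.predict(["ngiyabonga"])[0]
pred_group = LANG_TO_GROUP[pred_lang]               # but first used at language-group level
print(pred_lang, pred_group)
```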
The scikit-learn multinomial naive Bayes classifier is used for the implementation with an alpha smoothing value of 0.01 and hashed text features. The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets. The lexicon based classifier is designed to trade higher precision for lower recall. The proposed implementation is considered confident if the number of words from the winning language is at least one more than the number of words considered to be from the language scored in second place. The stacked classifier is tested against three public LID implementations BIBREF17, BIBREF23, BIBREF8. The LID implementation described in BIBREF17 is available on GitHub and is trained and tested according to a post on the fasttext blog. Character (5-6)-gram features with 16 dimensional vectors worked the best. The implementation discussed in BIBREF23 is available from https://github.com/tomkocmi/LanideNN. Following the instructions for an OSX pip install of an old r0.8 release of TensorFlow, the LanideNN code could be executed in Python 3.7.4. Settings were left at their defaults and a learning rate of 0.001 was used followed by a refinement with learning rate of 0.0001. Only one code modification was applied to return the results from a method that previously just printed to screen. The LID algorithm described in BIBREF8 is also available on GitHub. The stacked classifier is also tested against the results reported for four other algorithms BIBREF16, BIBREF26, BIBREF24, BIBREF15. All the comparisons are done using the NCHLT BIBREF7, DSL 2015 BIBREF19 and DSL 2017 BIBREF1 datasets discussed in Section SECREF2. Results and Analysis The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8. Different variations of the proposed classifier were evaluated. A single NB classifier (NB), a stack of two NB classifiers (NB+NB), a stack of a NB classifier and lexicon (NB+Lex) and a lexicon (Lex) by itself. A lexicon with a 50% training token dropout is also listed to show the impact of the lexicon support on the accuracy. From the results it seems that the DSL 2017 task might be harder than the DSL 2015 and NCHLT tasks. Also, the results for the implementation discussed in BIBREF23 might seem low, but the results reported in that paper is generated on longer pieces of text so lower scores on the shorter pieces of text derived from the NCHLT corpora is expected. The accuracy of the proposed algorithm seems to be dependent on the support of the lexicon. Without a good lexicon a non-stacked naive Bayesian classifier might even perform better. The execution performance of some of the LID implementations are shown in Table TABREF10. Results were generated on an early 2015 13-inch Retina MacBook Pro with a 2.9 GHz CPU (Turbo Boosted to 3.4 GHz) and 8GB RAM. The C++ implementation in BIBREF17 is the fastest. 
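The lexicon stage and the stacking rule just described can be sketched as follows; the tiny per-language lexicons and the naive Bayes fallback label are hypothetical placeholders.

```python
LEXICONS = {                     # per-language vocabularies; built over all data in the paper
    "zul": {"ngiyabonga", "kakhulu", "umuntu"},
    "xho": {"enkosi", "kakhulu", "umntu"},
}

def lexicon_vote(text, lexicons):
    """Count in-lexicon tokens per language; confident if the winner leads by at least one word."""
    tokens = text.lower().split()
    counts = {lang: sum(t in vocab for t in tokens) for lang, vocab in lexicons.items()}
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    (best, best_n), (_, second_n) = ranked[0], ranked[1]
    confident = best_n >= second_n + 1               # trades higher precision for lower recall
    return best, confident

nb_specific_prediction = "xho"                       # placeholder for the NB fallback label
label, confident = lexicon_vote("ngiyabonga kakhulu umuntu", LEXICONS)
final = label if confident else nb_specific_prediction
print(final)
```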
The implementation in BIBREF8 makes use of un-hashed feature representations which causes it to be slower than the proposed sklearn implementation. The execution performance of BIBREF23 might improve by a factor of five to ten when executed on a GPU. Conclusion LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. The proposed algorithm was evaluated on three existing datasets and compared to the implementations of three public LID implementations as well as to reported results of four other algorithms. It performed well relative to the other methods beating their results. However, the performance is dependent on the support of the lexicon. We would like to investigate the value of a lexicon in a production system and how to possibly maintain it using self-supervised learning. We are investigating the application of deeper language models some of which have been used in more recent DSL shared tasks. We would also like to investigate data augmentation strategies to reduce the amount of training data that is required. Further research opportunities include data harvesting, building standardised datasets and shared tasks for South Africa as well as the rest of Africa. In general, the support for language codes that include more languages seems to be growing, discoverability of research is improving and paywalls seem to no longer be a big problem in getting access to published research.
[ "Nguni languages (zul, xho, nbl, ssw), Sotho languages (nso, sot, tsn)", "The Nguni languages are similar to each other, The same is true of the Sotho languages" ]
1,877
qasper_e
en
null
5c576f06ce64e385da68b3465a0e077c81651dfbc19c72a4
What sentiment analysis dataset is used?
Introduction There have been many implementations of the word2vec model in either of the two architectures it provides: continuous skipgram and CBoW (BIBREF0). Similar distributed models of word or subword embeddings (or vector representations) find usage in sota, deep neural networks like BERT and its successors (BIBREF1, BIBREF2, BIBREF3). These deep networks generate contextual representations of words after been trained for extended periods on large corpora, unsupervised, using the attention mechanisms (BIBREF4). It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets. The objective of this work is to determine the optimal combinations of word2vec hyper-parameters for intrinsic evaluation (semantic and syntactic analogies) and extrinsic evaluation tasks (BIBREF13, BIBREF14), like SA and NER. It is not our objective in this work to record sota results. Some of the main contributions of this research are the empirical establishment of optimal combinations of word2vec hyper-parameters for NLP tasks, discovering the behaviour of quality of vectors viz-a-viz increasing dimensions and the confirmation of embeddings being task-specific for the downstream. The rest of this paper is organised as follows: the literature review that briefly surveys distributed representation of words, particularly word2vec; the methodology employed in this research work; the results obtained and the conclusion. Literature Review Breaking away from the non-distributed (high-dimensional, sparse) representations of words, typical of traditional bag-of-words or one-hot-encoding (BIBREF15), BIBREF0 created word2vec. Word2Vec consists of two shallow neural network architectures: continuous skipgram and CBoW. It uses distributed (low-dimensional, dense) representations of words that group similar words. This new model traded the complexity of deep neural network architectures, by other researchers, for more efficient training over large corpora. Its architectures have two training algorithms: negative sampling and hierarchical softmax (BIBREF16). The released model was trained on Google news dataset of 100 billion words. 
Implementations of the model have been undertaken by researchers in the programming languages Python and C++, though the original was done in C (BIBREF17). Continuous skipgram predicts (by maximizing classification of) words before and after the center word, for a given range. Since distant words are less connected to a center word in a sentence, less weight is assigned to such distant words in training. CBoW, on the other hand, uses words from the history and future in a sequence, with the objective of correctly classifying the target word in the middle. It works by projecting all history or future words within a chosen window into the same position, averaging their vectors. Hence, the order of words in the history or future does not influence the averaged vector. This is similar to the traditional bag-of-words, which is oblivious of the order of words in its sequence. A log-linear classifier is used in both architectures (BIBREF0). In further work, they extended the model to be able to do phrase representations and subsample frequent words (BIBREF16). Being a NNLM, word2vec assigns probabilities to words in a sequence, like other NNLMs such as feedforward networks or recurrent neural networks (BIBREF15). Earlier models like latent dirichlet allocation (LDA) and latent semantic analysis (LSA) exist and effectively achieve low dimensional vectors by matrix factorization (BIBREF18, BIBREF19). It's been shown that word vectors are beneficial for NLP tasks (BIBREF15), such as sentiment analysis and named entity recognition. Besides, BIBREF0 showed with vector space algebra that relationships among words can be evaluated, expressing the quality of vectors produced from the model. The famous, semantic example: vector("King") - vector("Man") + vector("Woman") $\approx $ vector("Queen") can be verified using cosine distance. Another type of semantic meaning is the relationship between a capital city and its corresponding country. Syntactic relationship examples include plural verbs and past tense, among others. Combination of both syntactic and semantic analyses is possible and provided (totaling over 19,000 questions) as Google analogy test set by BIBREF0. WordSimilarity-353 test set is another analysis tool for word vectors (BIBREF20). Unlike Google analogy score, which is based on vector space algebra, WordSimilarity is based on human expert-assigned semantic similarity on two sets of English word pairs. Both tools rank from 0 (totally dissimilar) to 1 (very much similar or exact, in Google analogy case). A typical artificial neural network (ANN) has very many hyper-parameters which may be tuned. Hyper-parameters are values which may be manually adjusted and include vector dimension size, type of algorithm and learning rate (BIBREF19). BIBREF0 tried various hyper-parameters with both architectures of their model, ranging from 50 to 1,000 dimensions, 30,000 to 3,000,000 vocabulary sizes, 1 to 3 epochs, among others. In our work, we extended research to 3,000 dimensions. Different observations were noted from the many trials. They observed diminishing returns after a certain point, despite additional dimensions or larger, unstructured training data. However, quality increased when both dimensions and data size were increased together. Although BIBREF16 pointed out that choice of optimal hyper-parameter configurations depends on the NLP problem at hand, they identified the most important factors are architecture, dimension size, subsampling rate, and the window size. 
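As a brief illustration of the vector-space algebra and the analogy evaluation mentioned above, the following sketch uses gensim's KeyedVectors; the model file and analogy-question file names are hypothetical placeholders for any pre-trained word2vec vectors and the Google analogy test set.

```python
from gensim.models import KeyedVectors

# placeholder path to any pre-trained word2vec binary
vectors = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

# most_similar ranks candidates by cosine similarity to (king - man + woman)
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)          # a well-trained model is expected to rank "queen" first

# the Google analogy test set can be scored directly; the file name is a placeholder
overall_accuracy, sections = vectors.evaluate_word_analogies("questions-words.txt")
print(overall_accuracy)
```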
In addition, it has been observed that variables like size of datasets improve the quality of word vectors and, potentially, performance on downstream tasks (BIBREF21, BIBREF0). Methodology The models were generated on a shared cluster running Ubuntu 16 with 32 Intel Xeon 4110 CPUs at 2.1GHz. The Gensim (BIBREF17) python library implementation of word2vec was used with parallelization to utilize all 32 CPUs. The downstream experiments were run on a Tesla GPU on a shared DGX cluster running Ubuntu 18. The Pytorch deep learning framework was used. Gensim was chosen because of its relative stability, popular support and to minimize the time required in writing and testing a new implementation in python from scratch. To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words were removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing was carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB, while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus, with additional runs for other dimensions for the window 8 + skipgram + hierarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased. Google (semantic and syntactic) analogy tests and WordSimilarity-353 (with Spearman correlation) by BIBREF20 were chosen for intrinsic evaluations. They measure the quality of word vectors. The analogy scores are averages of both semantic and syntactic tests. NER and SA were chosen for extrinsic evaluations. The GMB dataset for NER was trained in an LSTM network, which had an embedding layer for input. The network diagram is shown in fig. FIGREF4. The IMDb dataset for SA was trained in a BiLSTM network, which also used an embedding layer for input. Its network diagram is given in fig. FIGREF4. It includes an additional hidden linear layer. Hyper-parameter details of the two networks for the downstream tasks are given in table TABREF3. The metrics for extrinsic evaluation include F1, precision, recall and accuracy scores. In both tasks, the default pytorch embedding was tested before being replaced by the pre-trained embeddings released by BIBREF0 and ours. In each case, the dataset was shuffled before training and split in the ratio 70:15:15 for training, validation (dev) and test sets. A batch size of 64 was used. For each task, experiments for each embedding were conducted four times and an average value calculated and reported in the next section. Results and Discussion Table TABREF5 summarizes key results from the intrinsic evaluations for 300 dimensions. Table TABREF6 reveals the training time (in hours) and average embedding loading time (in seconds) representative of the various models used. Tables TABREF11 and TABREF12 summarize key results for the extrinsic evaluations. Figures FIGREF7, FIGREF9, FIGREF10, FIGREF13 and FIGREF14 present line graphs of the eight combinations for different dimension sizes for Simple Wiki, the trend of Simple Wiki and Billion Word corpora over several dimension sizes, analogy score comparison for models across datasets, NER mean F1 scores on the GMB dataset and SA mean F1 scores on the IMDb dataset, respectively. 
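For reference, one run from the hyper-parameter grid described in the methodology above could be expressed with gensim roughly as follows (gensim 4.x parameter names); the tokenised toy corpus, the reduced min_count and the output file name are illustrative placeholders rather than the authors' actual training script.

```python
from gensim.models import Word2Vec

# placeholder tokenised corpus; the real runs used the Wiki Abstract, SW and BW corpora
sentences = [["simple", "wiki", "sentence"], ["another", "training", "sentence"]]

model = Word2Vec(
    sentences=sentences,
    vector_size=300,      # dimension sizes from 100 up to 3000 were explored
    window=8,             # window sizes 4 and 8 were compared
    sg=1,                 # 1 = continuous skipgram, 0 = CBoW
    hs=0,                 # 1 = hierarchical softmax; with hs=0 and negative > 0, negative sampling is used
    negative=5,
    min_count=1,          # the paper drops words occurring fewer than 5 times on the full corpora
    epochs=10,
    workers=32,           # parallelized over the available CPUs
)
model.save("w8s1h0_300d.model")   # illustrative file name for one serialized model
```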
The combination of skipgram using hierarchical softmax and window size of 8 for 300 dimensions outperformed the others in analogy scores for the Wiki Abstract. However, because of the tiny file size, its results are so poor that they are not worth reporting here. Hence, we focus on results from the Simple Wiki and Billion Word corpora. The best combination changes when the corpus size increases, as can be noticed from table TABREF5. In terms of analogy score, for 10 epochs, w8s0h0 performs best, while w8s1h0 performs best in terms of WordSim and the corresponding Spearman correlation. Meanwhile, when the corpus size is increased to BW, w4s1h0 performs best in terms of analogy score while w8s1h0 maintains its position as the best in terms of WordSim and Spearman correlation. Besides the quality metrics, it can be observed from table TABREF6 that the comparative ratio of training cost between the models is not commensurate with the differences in intrinsic or extrinsic results, especially when we consider the amount of time and energy spent, since more training time results in more energy consumption (BIBREF23). Information on the length of training time for the released Mikolov model is not readily available. However, it is interesting to note that their presumed best model, which was the one released, is also s1h0. Its analogy score, which we tested and report, is confirmed in their paper. It beats our best models only in analogy score (even for Simple Wiki), performing worse in the others. This is in spite of using a much bigger corpus with a vocabulary size of 3,000,000 and 100 billion words, while Simple Wiki had a vocabulary size of 367,811 and is 711MB. It is very likely our analogy scores will improve when we use a much larger corpus, as can be observed from table TABREF5, which involves just one billion words. Although the two best combinations in analogy (w8s0h0 & w4s0h0) for SW, as shown in fig. FIGREF7, decreased only slightly compared to others with increasing dimensions, the increased training time and much larger serialized model size render any possible minimal score advantage from higher dimensions undesirable. As can be observed in fig. FIGREF9, scores improve from 100 dimensions but start to drop beyond 300 dimensions for SW and beyond 400 dimensions for BW. More becomes worse! This trend is true for all combinations for all tests. Polynomial interpolation may be used to determine the optimal dimension in both corpora. Our models are available for confirmation and the source code is available on GitHub. With regard to NER, most pretrained embeddings outperformed the default PyTorch embedding, with our BW w4s1h0 model (which is best in BW analogy score) performing best in F1 score, closely followed by the BIBREF0 model. On the other hand, with regard to SA, the PyTorch embedding outperformed the pretrained embeddings but was closely followed by our SW w8s0h0 model (which also had the best SW analogy score). BIBREF0 performed second worst of all, despite originating from a huge corpus. The combinations w8s0h0 & w4s0h0 of SW performed reasonably well in both extrinsic tasks, just as the default PyTorch embedding did. Conclusion This work analyses, empirically, optimal combinations of hyper-parameters for embeddings, specifically for word2vec. It further shows that for downstream tasks, like NER and SA, there's no silver bullet! However, some combinations show strong performance across tasks.
Performance of embeddings is task-specific, and high analogy scores do not necessarily correlate positively with performance on downstream tasks. This point on correlation is somewhat similar to results by BIBREF24 and BIBREF14. It was discovered that increasing the dimension size degrades performance after a point. If strong considerations of saving time, energy and the environment are made, then reasonably smaller corpora may suffice or even be better in some cases. The on-going drive by many researchers to use ever-growing data to train deep neural networks can benefit from the findings of this work. Indeed, hyper-parameter choices are very important in neural network systems (BIBREF19). Future work may investigate the performance of other architectures of word or sub-word embeddings, the performance and comparison of embeddings applied to languages other than English, and how embeddings perform in other downstream tasks. In addition, since the actual reason for the changes in best model as corpus size increases is not clear, this will also be suitable for further research. The work on this project is partially funded by Vinnova under the project number 2019-02996 "Språkmodeller för svenska myndigheter". Acronyms
[ "IMDb dataset of movie reviews", "IMDb" ]
2,327
qasper_e
en
null
5fa58c27137b7ed6888f0d3c5b3450aee78a7d479d98672c
What was their system's performance?
Introduction The apparent rise in political incivility has attracted substantial attention from scholars in recent years. These studies have largely focused on the extent to which politicians and elected officials are increasingly employing rhetoric that appears to violate norms of civility BIBREF0 , BIBREF1 . For the purposes of our work, we use the incidence of offensive rhetoric as a stand in for incivility. The 2016 US presidential election was an especially noteworthy case in this regard, particularly in terms of Donald Trump's campaign which frequently violated norms of civility both in how he spoke about broad groups in the public (such as Muslims, Mexicans, and African Americans) and the attacks he leveled at his opponents BIBREF2 . The consequences of incivility are thought to be crucial to the functioning of democracy since “public civility and interpersonal politeness sustain social harmony and allow people who disagree with one another to maintain ongoing relationships" BIBREF3 . While political incivility appears to be on the rise among elites, it is less clear whether this is true among the mass public as well. Is political discourse particularly lacking in civility compared to discourse more generally? Does the incivility of mass political discourse respond to the dynamics of political campaigns? Addressing these questions has been difficult for political scientists because traditional tools for studying mass behavior, such as public opinion surveys, are ill-equipped to measure how citizens discuss politics with one another. Survey data does reveal that the public tends to perceive politics as becoming increasingly less civil during the course of a political campaign BIBREF4 . Yet, it is not clear whether these perceptions match the reality, particularly in terms of the types of discussions that citizens have with each other. An additional question is how incivility is received by others. On one hand, violations of norms regarding offensive discourse may be policed by members of a community, rendering such speech ineffectual. On the other hand, offensive speech may be effective as a means for drawing attention to a particular argument. Indeed, there is evidence that increasing incivility in political speech results in higher levels of attention from the public BIBREF1 . During the 2016 campaign, the use of swearing in comments posted on Donald Trump's YouTube channel tended to result in additional responses that mimicked such swearing BIBREF5 . Thus, offensive speech in online fora may attract more attention from the community and lead to the spread of even more offensive speech in subsequent posts. To address these questions regarding political incivility, we examine the use of offensive speech in political discussions housed on Reddit. Scholars tend to define uncivil discourse as “communication that violates the norms of politeness" BIBREF1 a definition that clearly includes offensive remarks. Reddit fora represent a “most likely" case for the study of offensive political speech due its strong free speech culture BIBREF6 and the ability of participants to use pseudonymous identities. That is, if political incivility in the public did increase during the 2016 campaign, this should be especially evident on fora such as Reddit. Tracking Reddit discussions throughout all of 2015 and 2016, we find that online political discussions became increasingly more offensive as the general election campaign intensified. 
By comparison, discussions on non-political subreddits did not become increasingly offensive during this period. Additionally, we find that the presence of offensive comments did not subside even three months after the election. Datasets Our study makes use of multiple datasets in order to identify and characterize trends in offensive speech. The CrowdFlower hate speech dataset. The CrowdFlower hate speech dataset BIBREF7 contains 14.5K tweets, each receiving labels from at least three contributors. Contributors were allowed to classify each tweet into one of three classes: Not Offensive (NO), Offensive but not hateful (O), and Offensive and hateful (OH). Of the 14.5K tweets, only 37.6% had a decisive class – i.e., the same class was assigned by all contributors. For indecisive cases, the majority class was selected and a class confidence score (fraction of contributors that selected the majority class) was made available. Using this approach, 50.4%, 33.1%, and 16.5% of the tweets were categorized as NO, O, and OH, respectively. Since our goal is to identify any offensive speech (not just hate speech), we consolidate the assigned classes into Offensive and Not Offensive by relabeling OH tweets as Offensive. We use this modified dataset to train, validate, and test our offensive speech classifier. To the best of our knowledge, this is the only large dataset that provides offensive and not offensive annotations. Offensive word lists. We also use two offensive word lists as auxiliary input to our classifier: (1) The Hatebase hate speech vocabulary BIBREF8 consisting of 1122 hateful words and (2) 422 offensive words banned from Google's What Do You Love project BIBREF9 . Reddit comments dataset. Finally, after building our offensive speech classifier using the above datasets, we use it to classify comments made on Reddit. While the complete Reddit dataset contains 2B comments made between January 2015 and January 2017, we analyze only 168M. We select comments to be analyzed using the following process: (1) we exclude comments shorter than 10 characters in length, (2) we exclude comments made by [deleted] authors, and (3) we randomly sample and include 10% of all remaining comments. We categorize comments made in any of 21 popular political subreddits as political and the remainder as apolitical. Our final dataset contains 129M apolitical and 39M political comments. fig:comment-timeline shows the number of comments in our dataset that were made during each week included in our study. We see an increasing number of political comments per week starting in February 2016 – the start of the 2016 US presidential primaries. Offensive Speech Classification In order to identify offensive speech, we propose a fully automated technique that classifies comments into two classes: Offensive and Not Offensive. Classification approach At a high level, our approach works as follows: Build a word embedding. We construct a 100-dimensional word embedding using all comments from our complete Reddit dataset (2B comments). Construct a hate vector. We construct a list of offensive and hateful words identified from external data and map them into a single vector within the high-dimensional word embedding. Text transformation and classification. Finally, we transform the text to be classified into scalars representing their distance from the constructed hate vector and use these as input to a Random Forest classifier. Building a word embedding.
At a high-level, a word embedding maps words into a high-dimensional continuous vector space in such a way that semantic similarities between words are maintained. This mapping is achieved by exploiting the distributional properties of words and their occurrences in the input corpus. Rather than using an off-the-shelf word embedding (e.g., the GloVe embeddings BIBREF10 trained using public domain data sources such as Wikipedia and news articles), we construct a 100-dimensional embedding using the complete Reddit dataset (2B comments) as the input corpus. The constructed embedding consists of over 400M unique words (words occurring less than 25 times in the entire corpus are excluded) using the Word2Vec BIBREF11 implementation provided by the Gensim library BIBREF12 . Prior to constructing the embedding, we perform stop-word removal and lemmatize each word in the input corpus using the SpaCy NLP framework BIBREF13 . The main reason for building a custom embedding is to ensure that our embeddings capture semantics specific to the data being measured (Reddit) – e.g., while the word “karma” in the non-Reddit context may be associated with spirituality, it is associated with points (comment and submission scores) on Reddit. Constructing a hate vector. We use two lists of words associated with hate BIBREF8 and offensive BIBREF9 speech to construct a hate vector in our word embedding. This is done by mapping each word in the list into the 100-dimensional embedding and computing the mean vector. This vector represents the average of all known offensive words. The main idea behind creating a hate vector is to capture the point (in our embedding) to which the most offensive observed comments are likely to be near. Although clustering our offensive word lists into similar groups and constructing multiple hate vectors – one for each cluster – results in marginally better accuracy for our classifier, we use this approach due to the fact that our classification cost grows linearly with the number of hate vectors – i.e., we need to perform INLINEFORM0 distance computations per hate vector to classify string INLINEFORM1 . Transforming and classifying text. We first remove stop-words and perform lemmatization of each word in the text to be classified. We then obtain the vector representing each word in the text and compute its similarity to the constructed hate vector using the cosine similarity metric. A 0-vector is used to represent words in the text that are not present in the embedding. Finally, the maximum cosine similarity score is used to represent the comment. Equation EQREF7 shows the transformation function on a string INLINEFORM0 = INLINEFORM1 where INLINEFORM2 is the vector representing the INLINEFORM3 lemmatized non-stop-word, INLINEFORM4 is the cosine-similarity function, and INLINEFORM5 is the hate vector. DISPLAYFORM0 In words, the numerical value assigned to a text is the cosine similarity between the hate vector and the vector representing the word (in the text) closest to the hate vector. This approach allows us to transform a string of text into a single numerical value that captures its semantic similarity to the most offensive comment. We use these scalars as input to a random forest classifier to perform classification into Offensive and Not Offensive classes. 
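As a concrete illustration of the single hate vector and the transformation in Equation EQREF7, a minimal sketch might look as follows; the embedding path and the offensive word lists are placeholders, and the variable names are ours.

```python
# Minimal sketch (assumptions: a gensim KeyedVectors embedding trained on the
# Reddit corpus, spaCy for stop-word removal and lemmatization, and placeholder
# offensive word lists standing in for the Hatebase and banned-word lists).
import numpy as np
import spacy
from gensim.models import KeyedVectors

nlp = spacy.load("en_core_web_sm")
wv = KeyedVectors.load("reddit_100d.kv")                    # hypothetical path

offensive_words = ["offensive_word_1", "offensive_word_2"]  # placeholder lists
hate_vector = np.mean([wv[w] for w in offensive_words if w in wv], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def transform(comment):
    """Map a comment to the max cosine similarity between any of its words
    (lemmatized, stop words removed) and the hate vector."""
    scores = []
    for tok in nlp(comment):
        if tok.is_stop or not tok.is_alpha:
            continue
        lemma = tok.lemma_.lower()
        vec = wv[lemma] if lemma in wv else np.zeros(wv.vector_size)  # 0-vector for OOV
        scores.append(cosine(vec, hate_vector))
    return max(scores, default=0.0)   # scalar input for the random forest
```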
fig:reduced-dimension-classes shows the proximity of Offensive and Non Offensive comments to our constructed hate vector after using t-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF14 to reduce our 100-dimension vector space into 2 dimensions. Classifier evaluation We now present results to (1) validate our choice of classifier and (2) demonstrate the impact of training/validation sample count on our classifiers performance. Classifier selection methodology. To identify the most suitable classifier for classifying the scalars associated with each text, we perform evaluations using the stochastic gradient descent, naive bayes, decision tree, and random forest classifiers. For each classifier, we split the CrowdFlower hate speech dataset into a training/validation set (75%), and a holdout set (25%). We perform 10-fold cross-validation on the training/validation set to identify the best classifier model and parameters (using a grid search). Based on the results of this evaluation, we select a 100-estimator entropy-based splitting random forest model as our classifier. tab:classifiers shows the mean accuracy and F1-score for each evaluated classifier during the 10-fold cross-validation. Real-world classifier performance. To evaluate real-world performance of our selected classifier (i.e., performance in the absence of model and parameter bias), we perform classification of the holdout set. On this set, our classifier had an accuracy and F1-score of 89.6% and 89.2%, respectively. These results show that in addition to superior accuracy during training and validation, our chosen classifier is also robust against over-fitting. Impact of dataset quality and size. To understand how the performance of our chosen classifier model and parameters are impacted by: (1) the quality and consistency of manually assigned classes in the CrowdFlower dataset and (2) the size of the dataset, we re-evaluate the classifier while only considering tweets having a minimum confidence score and varying the size of the holdout set. Specifically, our experiments considered confidence thresholds of 0 (all tweets considered), .35 (only tweets where at least 35% of contributors agreed on a class were considered), and .70 (only tweets where at least 70% of the contributors agreed on a class were considered) and varied the holdout set sizes between 5% and 95% of all tweets meeting the confidence threshold set for the experiment. The results illustrated in fig:classifier-performance show the performance of the classifier while evaluating the corresponding holdout set. We make several conclusions from these results: Beyond a (fairly low) threshold, the size of the training and validation set has little impact on classifier performance. We see that the accuracy, precision, and recall have, at best, marginal improvements with holdout set sizes smaller than 60%. This implies that the CrowdFlower dataset is sufficient for building an offensive speech classifier. Quality of manual labeling has a significant impact on the accuracy and precision of the classifier. Using only tweets which had at least 70% of contributors agreeing on a class resulted in between 5-7% higher accuracy and up to 5% higher precision. Our classifier achieves precision of over 95% and recall of over 85% when considering only high confidence samples. This implies that the classifier is more likely to underestimate the presence of offensive speech – i.e., our results likely provide a lower-bound on the quantity of observed offensive speech. 
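A rough sketch of the classifier-selection step (75/25 split and a 10-fold cross-validated grid search, with the transform() scores as the single input feature) is given below; the grid values beyond the 100-estimator entropy-based forest, and the synthetic placeholder data, are ours.

```python
# Sketch of classifier selection; X would hold transform() scores for the
# CrowdFlower tweets and y their Offensive / Not Offensive labels. Synthetic
# placeholders are used here so the snippet runs standalone.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((1000, 1))                         # placeholder for transform() scores
y = (X[:, 0] > 0.5).astype(int)                   # placeholder labels

X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.25, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "criterion": ["gini", "entropy"]},
    cv=10, scoring="f1",
)
grid.fit(X_tr, y_tr)

best = grid.best_estimator_                       # 100-estimator, entropy-based forest
pred = best.predict(X_hold)
print("holdout accuracy:", accuracy_score(y_hold, pred))
print("holdout F1:", f1_score(y_hold, pred))
```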
Measurements In this section we quantify and characterize offensiveness in the political and general contexts using our offensive speech classifier and the Reddit comments dataset which considers a random sample of comments made between January 2015 and January 2017. Offensiveness over time. We find that on average 8.4% of all political comments are offensive compared to 7.8% of all apolitical comments. fig:offensive-speech-timeline illustrates the fraction of offensive political and apolitical comments made during each week in our study. We see that while the fraction of apolitical offensive comments has stayed steady, there has been an increase in the fraction of offensive political comments starting in July 2016. Notably, this increase is observed after the conclusion of the US presidential primaries and during the period of the Democratic and Republican National Conventions and does not reduce even after the conclusion of the US presidential elections held on November 8. Participants in political subreddits were 2.6% more likely to observe offensive comments prior to July 2016 but 14.9% more likely to observe offensive comments from July 2016 onwards. Reactions to offensive comments. We use the comment score, roughly the difference between up-votes and down-votes received, as a proxy for understanding how users reacted to offensive comments. We find that comments that were offensive: (1) on average, had a higher score than non-offensive comments (average scores: 8.9 vs. 6.7) and (2) were better received when they were posted in the general context than in the political context (average scores: 8.6 vs. 9.0). To understand how peoples reactions to offensive comments evolved over time, fig:offensive-scores-timeline shows the average scores received by offensive comments over time. Again, we observe an increasing trend in average scores received by offensive and political comments after July 2016. Characteristics of offensive authors. We now focus on understanding characteristics of authors of offensive comments. Specifically, we are interested in identifying the use of throwaway and troll accounts. For the purposes of this study, we characterize throwaway accounts as those with less than five total comments – i.e., accounts that are used to make a small number of comments. Similarly, we define troll accounts as those with over 15 comments of which over 75% are classified as offensive – i.e., accounts that are used to make a larger number of comments, of which a significant majority are offensive. We find that 93.7% of the accounts which have over 75% of their comments tagged as offensive are throwaways and 1.3% are trolls. Complete results are illustrated in fig:offensive-authors-cdf. Characteristics of offensive communities. We breakdown subreddits by their category (default, political, and other) and identify the most and least offensive communities in each. fig:subreddit-cdf shows the distribution of the fraction of offensive comments in each category and tab:subreddit-breakdown shows the most and least offensive subreddits in the political and default categories (we exclude the “other” category due to the inappropriateness of their names). We find that less than 19% of all subreddits (that account for over 23% of all comments) have over 10% offensive comments. Further, several default and political subreddits fall in this category, including r/the INLINEFORM0 donald – the most offensive political subreddit and the subreddit dedicated to the US President. Flow of offensive authors. 
Finally, we uncover patterns in the movement of offensive authors between communities. In fig:offensive-flow we show the communities in which large number of authors of offensive content on the r/politics subreddit had previously made offensive comments (we refer to these communities as sources). Unsurprisingly, the most popular sources belonged to the default subreddits (e.g., r/worldnews, r/wtf, r/videos, r/askreddit, and r/news). We find that several other political subreddits also serve as large sources of offensive authors. In fact, the subreddits dedicated to the three most popular US presidential candidates – r/the INLINEFORM0 donald, r/sandersforpresident, and r/hillaryclinton rank in the top three. Finally, outside of the default and political subreddits, we find that r/nfl, r/conspiracy, r/dota2, r/reactiongifs, r/blackpeopletwitter, and r/imgoingtohellforthis were the largest sources of offensive political authors. Conclusions and Future Work We develop and validate an offensive speech classifier to quantify the presence of offensive online comments from January 2015 through January 2017. We find that political discussions on Reddit became increasingly less civil – as measured by the incidence of offensive comments – during the 2016 general election campaign. In fact, during the height of the campaign, nearly one of every 10 comments posted on a political subreddit were classified as offensive. Offensive comments also received more positive feedback from the community, even though most of the accounts responsible for such comments appear to be throwaway accounts. While offensive posts were increasingly common on political subreddits as the campaign wore on, there was no such increase in non-political fora. This contrast provides additional evidence that the increasing use of offensive speech was directly related to the ramping up of the general election campaign for president. Even though our study relies on just a single source of online political discussions – Reddit, we believe that our findings generally present an upper-bound on the incidence of offensiveness in online political discussions for the following reasons: First, Reddit allows the use of pseudonymous identities that enables the online disinhibition effect (unlike social-media platforms such as Facebook). Second, Reddit enables users to engage in complex discussions that are unrestricted in length (unlike Twitter). Finally, Reddit is known for enabling a general culture of free speech and delegating content regulation to moderators of individual subreddits. This provides users holding fringe views a variety of subreddits in which their content is welcome. Our findings provide a unique and important mapping of the increasing incivility of online political discourse during the 2016 campaign. Such an investigation is important because scholars have outlined many consequences for incivility in political discourse. Incivility tends to “turn off” political moderates, leading to increasing polarization among those who are actively engaged in politics BIBREF4 . More importantly, a lack of civility in political discussions generally reduces the degree to which people view opponents as holding legitimate viewpoints. This dynamic makes it difficult for people to find common ground with those who disagree with them BIBREF15 and it may ultimately lead citizens to view electoral victories by opponents as lacking legitimacy BIBREF1 . 
Thus, from a normative standpoint, the fact that the 2016 campaign sparked a marked increase in the offensiveness of political comments posted to Reddit is of concern in its own right; that the incidence of offensive political comments has remained high even three months after the election is all the more troubling. In future work, we will extend our analysis of Reddit back to 2007 with the aim of formulating a more complete understanding of the dynamics of political incivility. For example, we seek to understand whether the high incidence of offensive speech we find in 2016 is unique to this particular campaign or if previous presidential campaigns witnessed similar spikes in incivility. We will also examine whether there is a more general long-term trend toward offensive online political speech, which would be consistent with what scholars have found when studying political elites BIBREF16 , BIBREF17 .
[ "accuracy and F1-score of 89.6% and 89.2%, respectively", "accuracy and F1-score of 89.6% and 89.2%, respectively" ]
3,313
qasper_e
en
null
40490daeb7eba9f934f4613aac7ec5d957d1482bce802c8a
What baseline approaches does this approach out-perform?
Introduction With the increasing popularity of the Internet, online texts provided by social media platforms (e.g. Twitter) and news media sites (e.g. Google news) have become important sources of real-world events. Therefore, it is crucial to automatically extract events from online texts. Due to the high variety of events discussed online and the difficulty in obtaining annotated data for training, traditional template-based or supervised learning approaches for event extraction are no longer applicable in dealing with online texts. Nevertheless, newsworthy events are often discussed by many tweets or online news articles. Therefore, the same event could be mentioned by a high volume of redundant tweets or news articles. This property has inspired the research community to devise clustering-based models BIBREF0 , BIBREF1 , BIBREF2 to discover new or previously unidentified events without extracting structured representations. To extract structured representations of events such as who did what, when, where and why, Bayesian approaches have made some progress. Assuming that each document is assigned to a single event, which is modeled as a joint distribution over the named entities, the date and the location of the event, and the event-related keywords, Zhou et al. zhou2014simple proposed an unsupervised Latent Event Model (LEM) for open-domain event extraction. To address the limitation that LEM requires the number of events to be pre-set, Zhou et al. zhou2017event further proposed the Dirichlet Process Event Mixture Model (DPEMM) in which the number of events can be learned automatically from data. However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event which can be represented by a quadruple INLINEFORM0 entity, location, keyword, date INLINEFORM1 . However, long texts such as news articles often describe multiple events, which clearly violates this assumption; (2) during the inference process of both approaches, the Gibbs sampler needs to compute the conditional posterior distribution and assign an event to each document. This is time-consuming and takes a long time to converge. To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principal idea is to use a generator network to learn the projection function between the document-event distribution and four event-related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the documents reconstructed from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator helps the generator to capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events. The main contributions of the paper are summarized below: Related Work Our work is related to two lines of research, event extraction and Generative Adversarial Nets.
Event Extraction Recently there has been much interest in event extraction from online texts, and approaches can be categorized as domain-specific and open-domain event extraction. Domain-specific event extraction often focuses on specific types of events (e.g. sports events or city events). Panem et al. panem2014structured devised a novel algorithm to extract attribute-value pairs and mapped them to manually generated schemes for extracting natural disaster events. Similarly, to extract city-traffic related events, Anantharam et al. anantharam2015extracting viewed the task as a sequential tagging problem and proposed an approach based on conditional random fields. Zhang zhang2018event proposed an event extraction approach based on imitation learning, especially on inverse reinforcement learning. Open-domain event extraction aims to extract events without limiting the specific types of events. To analyze individual messages and induce a canonical value for each event, Benson et al. benson2011event proposed an approach based on a structured graphical model. By representing an event with a binary tuple constituted by a named entity and a date, Ritter et al. ritter2012open employed statistics to measure the strength of associations between a named entity and a date. The proposed system relies on a supervised labeler trained on annotated data. In BIBREF1 , Abdelhaq et al. developed a real-time event extraction system called EvenTweet, where each event is represented as a triple constituted by time, location and keywords. To extract more information, Wang et al. wang2015seeft developed a system employing the links in tweets and combining tweets with linked articles to identify events. Xia et al. xia2015new combined texts with location information to detect events with low spatial and temporal deviations. Zhou et al. zhou2014simple,zhou2017event represented an event as a quadruple and proposed two Bayesian models to extract events from tweets. Generative Adversarial Nets As a neural-based generative model, Generative Adversarial Nets BIBREF3 have been extensively researched in the natural language processing (NLP) community. For text generation, the sequence generative adversarial network (SeqGAN) proposed in BIBREF4 incorporated a policy gradient strategy to optimize the generation process. Based on the policy gradient, Lin et al. lin2017adversarial proposed RankGAN to capture the rich structures of language by ranking and analyzing a collection of human-written and machine-written sentences. To overcome mode collapse when dealing with discrete data, Fedus et al. fedus2018maskgan proposed MaskGAN, which used an actor-critic conditional GAN to fill in missing text conditioned on the surrounding context. Along this line, Wang et al. wang2018sentigan proposed SentiGAN to generate texts of different sentiment labels. Besides, Li et al. li2018learning improved the performance of semi-supervised text classification using adversarial training, and BIBREF5 , BIBREF6 designed GAN-based models for distant supervision relation extraction. Although various GAN based approaches have been explored for many applications, none of these approaches tackles open-domain event extraction from online texts. We propose a novel GAN-based event extraction model called AEM.
Compared with the previous models, AEM has the following differences: (1) Unlike most GAN-based text generation approaches, a generator network is employed in AEM to learn the projection function between an event distribution and the event-related word distributions (entity, location, keyword, date). The learned generator captures event-related patterns rather than generating text sequence; (2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration; (3) The discriminative features learned by the discriminator of AEM provide a straightforward way to visualize the extracted events. Methodology We describe Adversarial-neural Event Model (AEM) in this section. An event is represented as a quadruple < INLINEFORM0 >, where INLINEFORM1 stands for non-location named entities, INLINEFORM2 for a location, INLINEFORM3 for event-related keywords, INLINEFORM4 for a date, and each component in the tuple is represented by component-specific representative words. AEM is constituted by three components: (1) The document representation module, as shown at the top of Figure FIGREF4 , defines a document representation approach which converts an input document from the online text corpus into INLINEFORM0 which captures the key event elements; (2) The generator INLINEFORM1 , as shown in the lower-left part of Figure FIGREF4 , generates a fake document INLINEFORM2 which is constituted by four multinomial distributions using an event distribution INLINEFORM3 drawn from a Dirichlet distribution as input; (3) The discriminator INLINEFORM4 , as shown in the lower-right part of Figure FIGREF4 , distinguishes the real documents from the fake ones and its output is subsequently employed as a learning signal to update the INLINEFORM5 and INLINEFORM6 . The details of each component are presented below. Document Representation Each document INLINEFORM0 in a given corpus INLINEFORM1 is represented as a concatenation of 4 multinomial distributions which are entity distribution ( INLINEFORM2 ), location distribution ( INLINEFORM3 ), keyword distribution ( INLINEFORM4 ) and date distribution ( INLINEFORM5 ) of the document. As four distributions are calculated in a similar way, we only describe the computation of the entity distribution below as an example. The entity distribution INLINEFORM0 is represented by a normalized INLINEFORM1 -dimensional vector weighted by TF-IDF, and it is calculated as: INLINEFORM2 where INLINEFORM0 is the pseudo corpus constructed by removing all non-entity words from INLINEFORM1 , INLINEFORM2 is the total number of distinct entities in a corpus, INLINEFORM3 denotes the number of INLINEFORM4 -th entity appeared in document INLINEFORM5 , INLINEFORM6 represents the number of documents in the corpus, and INLINEFORM7 is the number of documents that contain INLINEFORM8 -th entity, and the obtained INLINEFORM9 denotes the relevance between INLINEFORM10 -th entity and document INLINEFORM11 . Similarly, location distribution INLINEFORM0 , keyword distribution INLINEFORM1 and date distribution INLINEFORM2 of INLINEFORM3 could be calculated in the same way, and the dimensions of these distributions are denoted as INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. 
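Since the exact formula is elided above, the following is a hedged sketch of one standard TF-IDF weighting that matches the description (term counts over the entity-only pseudo corpus, scaled by inverse document frequency and normalized); the function and variable names are ours, and the other three distributions would be computed analogously before concatenation.

```python
# Hedged sketch of the document representation step: a TF-IDF-weighted,
# normalized entity distribution. The IDF variant shown is an assumption, as
# the paper's formula is not reproduced in this text.
import math
from collections import Counter

def entity_distribution(doc_entities, corpus_entities, vocab):
    """doc_entities: entities of one document; corpus_entities: entity lists,
    one per document; vocab: ordered list of all distinct entities."""
    n_docs = len(corpus_entities)
    tf = Counter(doc_entities)
    df = {e: sum(1 for d in corpus_entities if e in d) for e in vocab}
    weights = [tf[e] * math.log(n_docs / df[e]) if df[e] else 0.0 for e in vocab]
    total = sum(weights) or 1.0
    return [w / total for w in weights]            # normalized V_e-dimensional vector

docs = [["obama", "fbi"], ["fbi", "nasa"], ["obama"]]
vocab = sorted({e for d in docs for e in d})
print(entity_distribution(docs[0], docs, vocab))   # e.g. [0.5, 0.0, 0.5]
```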
Finally, each document INLINEFORM7 in the corpus is represented by a INLINEFORM8 -dimensional ( INLINEFORM9 = INLINEFORM10 + INLINEFORM11 + INLINEFORM12 + INLINEFORM13 ) vector INLINEFORM14 by concatenating four computed distributions. Network Architecture The generator network INLINEFORM0 is designed to learn the projection function between the document-event distribution INLINEFORM1 and the four document-level word distributions (entity distribution, location distribution, keyword distribution and date distribution). More concretely, INLINEFORM0 consists of a INLINEFORM1 -dimensional document-event distribution layer, INLINEFORM2 -dimensional hidden layer and INLINEFORM3 -dimensional event-related word distribution layer. Here, INLINEFORM4 denotes the event number, INLINEFORM5 is the number of units in the hidden layer, INLINEFORM6 is the vocabulary size and equals to INLINEFORM7 + INLINEFORM8 + INLINEFORM9 + INLINEFORM10 . As shown in Figure FIGREF4 , INLINEFORM11 firstly employs a random document-event distribution INLINEFORM12 as an input. To model the multinomial property of the document-event distribution, INLINEFORM13 is drawn from a Dirichlet distribution parameterized with INLINEFORM14 which is formulated as: DISPLAYFORM0 where INLINEFORM0 is the hyper-parameter of the dirichlet distribution, INLINEFORM1 is the number of events which should be set in AEM, INLINEFORM2 , INLINEFORM3 represents the proportion of event INLINEFORM4 in the document and INLINEFORM5 . Subsequently, INLINEFORM0 transforms INLINEFORM1 into a INLINEFORM2 -dimensional hidden space using a linear layer followed by layer normalization, and the transformation is defined as: DISPLAYFORM0 where INLINEFORM0 represents the weight matrix of hidden layer, and INLINEFORM1 denotes the bias term, INLINEFORM2 is the parameter of LeakyReLU activation and is set to 0.1, INLINEFORM3 and INLINEFORM4 denote the normalized hidden states and the outputs of the hidden layer, and INLINEFORM5 represents the layer normalization. Then, to project INLINEFORM0 into four document-level event related word distributions ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 shown in Figure FIGREF4 ), four subnets (each contains a linear layer, a batch normalization layer and a softmax layer) are employed in INLINEFORM5 . And the exact transformation is based on the formulas below: DISPLAYFORM0 where INLINEFORM0 means softmax layer, INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 denote the weight matrices of the linear layers in subnets, INLINEFORM5 , INLINEFORM6 , INLINEFORM7 and INLINEFORM8 represent the corresponding bias terms, INLINEFORM9 , INLINEFORM10 , INLINEFORM11 and INLINEFORM12 are state vectors. INLINEFORM13 , INLINEFORM14 , INLINEFORM15 and INLINEFORM16 denote the generated entity distribution, location distribution, keyword distribution and date distribution, respectively, that correspond to the given event distribution INLINEFORM17 . And each dimension represents the relevance between corresponding entity/location/keyword/date term and the input event distribution. Finally, four generated distributions are concatenated to represent the generated document INLINEFORM0 corresponding to the input INLINEFORM1 : DISPLAYFORM0 The discriminator network INLINEFORM0 is designed as a fully-connected network which contains an input layer, a discriminative feature layer (discriminative features are employed for event visualization) and an output layer. 
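Before detailing the discriminator further, a hedged PyTorch sketch of the two networks as just described may help; layer sizes and the discriminator's activation are placeholders, and the spectral normalization wrapped around the discriminator's linear layers is the technique elaborated in the following paragraphs.

```python
# Hedged PyTorch sketch (not the authors' code) of the AEM generator and
# discriminator. dims holds the four component vocabulary sizes (entity,
# location, keyword, date); all sizes here are placeholders.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class Generator(nn.Module):
    def __init__(self, n_events, hidden, dims):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(n_events, hidden), nn.LayerNorm(hidden), nn.LeakyReLU(0.1))
        # Four subnets: linear + batch normalization + softmax, one per component.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, v), nn.BatchNorm1d(v), nn.Softmax(dim=-1))
            for v in dims])

    def forward(self, theta):                      # theta ~ Dirichlet(alpha)
        h = self.hidden(theta)
        return torch.cat([head(h) for head in self.heads], dim=-1)

class Discriminator(nn.Module):
    def __init__(self, vocab, feat_dim=64):        # vocab = V_e + V_l + V_k + V_d
        super().__init__()
        self.features = nn.Sequential(
            spectral_norm(nn.Linear(vocab, feat_dim)), nn.LeakyReLU(0.1))
        self.out = spectral_norm(nn.Linear(feat_dim, 1))

    def forward(self, doc):
        f = self.features(doc)                     # discriminative features (used for t-SNE)
        return self.out(f), f

dims = (500, 100, 800, 50)
gen, disc = Generator(n_events=25, hidden=200, dims=dims), Discriminator(sum(dims))
theta = torch.distributions.Dirichlet(torch.ones(25)).sample((64,))
fake_docs = gen(theta)                             # concatenated generated distributions
signal, feats = disc(fake_docs)
```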
In AEM, INLINEFORM1 uses fake document INLINEFORM2 and real document INLINEFORM3 as input and outputs the signal INLINEFORM4 to indicate the source of the input data (lower value denotes that INLINEFORM5 is prone to predict the input data as a fake document and vice versa). As have previously been discussed in BIBREF7 , BIBREF8 , lipschitz continuity of INLINEFORM0 network is crucial to the training of the GAN-based approaches. To ensure the lipschitz continuity of INLINEFORM1 , we employ the spectral normalization technique BIBREF9 . More concretely, for each linear layer INLINEFORM2 (bias term is omitted for simplicity) in INLINEFORM3 , the weight matrix INLINEFORM4 is normalized by INLINEFORM5 . Here, INLINEFORM6 is the spectral norm of the weight matrix INLINEFORM7 with the definition below: DISPLAYFORM0 which is equivalent to the largest singular value of INLINEFORM0 . The weight matrix INLINEFORM1 is then normalized using: DISPLAYFORM0 Obviously, the normalized weight matrix INLINEFORM0 satisfies that INLINEFORM1 and further ensures the lipschitz continuity of the INLINEFORM2 network BIBREF9 . To reduce the high cost of computing spectral norm INLINEFORM3 using singular value decomposition at each iteration, we follow BIBREF10 and employ the power iteration method to estimate INLINEFORM4 instead. With this substitution, the spectral norm can be estimated with very small additional computational time. Objective and Training Procedure The real document INLINEFORM0 and fake document INLINEFORM1 shown in Figure FIGREF4 could be viewed as random samples from two distributions INLINEFORM2 and INLINEFORM3 , and each of them is a joint distribution constituted by four Dirichlet distributions (corresponding to entity distribution, location distribution, keyword distribution and date distribution). The training objective of AEM is to let the distribution INLINEFORM4 (produced by INLINEFORM5 network) to approximate the real data distribution INLINEFORM6 as much as possible. To compare the different GAN losses, Kurach kurach2018gan takes a sober view of the current state of GAN and suggests that the Jansen-Shannon divergence used in BIBREF3 performs more stable than variant objectives. Besides, Kurach also advocates that the gradient penalty (GP) regularization devised in BIBREF8 will further improve the stability of the model. Thus, the objective function of the proposed AEM is defined as: DISPLAYFORM0 where INLINEFORM0 denotes the discriminator loss, INLINEFORM1 represents the gradient penalty regularization loss, INLINEFORM2 is the gradient penalty coefficient which trade-off the two components of objective, INLINEFORM3 could be obtained by sampling uniformly along a straight line between INLINEFORM4 and INLINEFORM5 , INLINEFORM6 denotes the corresponding distribution. The training procedure of AEM is presented in Algorithm SECREF15 , where INLINEFORM0 is the event number, INLINEFORM1 denotes the number of discriminator iterations per generator iteration, INLINEFORM2 is the batch size, INLINEFORM3 represents the learning rate, INLINEFORM4 and INLINEFORM5 are hyper-parameters of Adam BIBREF11 , INLINEFORM6 denotes INLINEFORM7 . In this paper, we set INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . Moreover, INLINEFORM11 , INLINEFORM12 and INLINEFORM13 are set as 0.0002, 0.5 and 0.999. [!h] Training procedure for AEM [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 the trained INLINEFORM7 and INLINEFORM8 . 
Initial INLINEFORM9 parameters INLINEFORM10 and INLINEFORM11 parameter INLINEFORM12 INLINEFORM13 has not converged INLINEFORM14 INLINEFORM15 Sample INLINEFORM16 , Sample a random INLINEFORM17 Sample a random number INLINEFORM18 INLINEFORM19 INLINEFORM20 INLINEFORM21 INLINEFORM22 INLINEFORM23 INLINEFORM24 Sample INLINEFORM25 noise INLINEFORM26 INLINEFORM27 Event Generation After the model training, the generator INLINEFORM0 learns the mapping function between the document-event distribution and the document-level event-related word distributions (entity, location, keyword and date). In other words, with an event distribution INLINEFORM1 as input, INLINEFORM2 could generate the corresponding entity distribution, location distribution, keyword distribution and date distribution. In AEM, we employ event seed INLINEFORM0 , an INLINEFORM1 -dimensional vector with one-hot encoding, to generate the event related word distributions. For example, in ten event setting, INLINEFORM2 represents the event seed of the first event. With the event seed INLINEFORM3 as input, the corresponding distributions could be generated by INLINEFORM4 based on the equation below: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 denote the entity distribution, location distribution, keyword distribution and date distribution of the first event respectively. Experiments In this section, we firstly describe the datasets and baseline approaches used in our experiments and then present the experimental results. Experimental Setup To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. Details are summarized below: FSD dataset (social media) is the first story detection dataset containing 2,499 tweets. We filter out events mentioned in less than 15 tweets since events mentioned in very few tweets are less likely to be significant. The final dataset contains 2,453 tweets annotated with 20 events. Twitter dataset (social media) is collected from tweets published in the month of December in 2010 using Twitter streaming API. It contains 1,000 tweets annotated with 20 events. Google dataset (news article) is a subset of GDELT Event Database INLINEFORM0 , documents are retrieved by event related words. For example, documents which contain `malaysia', `airline', `search' and `plane' are retrieved for event MH370. By combining 30 events related documents, the dataset contains 11,909 news articles. We choose the following three models as the baselines: K-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF. LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration. DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration. For social media text corpus (FSD and Twitter), a named entity tagger specifically built for Twitter is used to extract named entities including locations from tweets. 
A Twitter Part-of-Speech (POS) tagger BIBREF15 is used for POS tagging and only words tagged as nouns, verbs and adjectives are retained as keywords. For the Google dataset, we use the Stanford Named Entity Recognizer to identify the named entities (organization, location and person). Since the `date' information is not provided in the Google dataset, we further divide the non-location named entities into two categories (`person' and `organization') and employ a quadruple <organization, location, person, keyword> to denote an event in news articles. We also remove common stopwords and only keep the recognized named entities and the tokens which are verbs, nouns or adjectives. Experimental Results To evaluate the performance of the proposed approach, we use evaluation metrics such as precision, recall and F-measure. Precision is defined as the proportion of correctly identified events out of the model-generated events. Recall is defined as the proportion of correctly identified true events. For calculating the precision of the 4-tuple, we use the following criteria: (1) Do the entity/organization, location, date/person and keyword that we have extracted refer to the same event? (2) If the extracted representation contains keywords, are they informative enough to tell us what happened? Table TABREF35 shows the event extraction results on the three datasets. The statistics are obtained with the default parameter setting that INLINEFORM0 is set to 5, the number of hidden units INLINEFORM1 is set to 200, and INLINEFORM2 contains three fully-connected layers. The event number INLINEFORM3 for the three datasets is set to 25, 25 and 35, respectively. Examples of extracted events are shown in Table TABREF36 . It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outperforms both LEM and DPEMM by 6.5% and 1.7% respectively in F-measure on the FSD dataset, and by 4.4% and 3.7% in F-measure on the Twitter dataset. We can also observe that apart from K-means, all the approaches perform worse on the Twitter dataset compared to FSD, possibly due to the limited size of the Twitter dataset. Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM. It improves upon LEM by 15.5% and upon DPEMM by more than 30% in F-measure. This is because: (1) the assumption made by LEM and DPEMM that all words in a document are generated from a single event is not suitable for long text such as news articles; (2) DPEMM generates too many irrelevant events, which leads to a very low precision score. Overall, we see the superior performance of AEM across all datasets, with more significant improvement on the Google dataset (long text). We next visualize the detected events based on the discriminative features learned by the trained INLINEFORM0 network in AEM. The t-SNE BIBREF16 visualization results on the datasets are shown in Figure FIGREF19 . For clarity, each subplot is plotted on a subset of the dataset containing ten randomly selected events. It can be observed that documents describing the same event have been grouped into the same cluster.
To further evaluate whether variations of the parameters INLINEFORM0 (the number of discriminator iterations per generator iteration), INLINEFORM1 (the number of units in the hidden layer) and the structure of the generator INLINEFORM2 impact the extraction performance, additional experiments have been conducted on the Google dataset, with INLINEFORM3 set to 5, 7 and 10, INLINEFORM4 set to 100, 150 and 200, and three INLINEFORM5 structures (3, 4 and 5 layers). The comparison results on precision, recall and F-measure are shown in Figure FIGREF20. From the results, it can be observed that AEM with the 5-layer generator performs the best and achieves 96.7% in F-measure, while the worst F-measure obtained by AEM is 85.7%. Overall, AEM outperforms all compared approaches across various parameter settings, showing relatively stable performance. Finally, we compare in Figure FIGREF37 the training time required for each model, excluding the constant time required by each model to load the data. We can observe that K-means runs fastest among all four approaches. Both LEM and DPEMM need to sample the event allocation for each document and update the relevant counts during Gibbs sampling, which is time-consuming. AEM only requires a fraction of the training time compared to LEM and DPEMM. Moreover, on a larger dataset such as the Google dataset, AEM appears to be far more efficient compared to LEM and DPEMM. Conclusions and Future Work In this paper, we have proposed a novel approach based on adversarial training to extract the structured representation of events from online text. The experimental comparison with the state-of-the-art methods shows that AEM achieves improved extraction performance, especially on long text corpora, with an improvement of 15% observed in F-measure. AEM only requires a fraction of the training time compared to existing Bayesian graphical modeling approaches. In future work, we will explore incorporating external knowledge (e.g. word relatedness contained in word embeddings) into the learning framework for event extraction. Besides, exploring nonparametric neural event extraction approaches and detecting the evolution of events over time from news articles are other promising future directions. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. This work was funded by the National Key Research and Development Program of China (2016YFC1306704), the National Natural Science Foundation of China (61772132), and the Natural Science Foundation of Jiangsu Province of China (BK20161430).
[ "K-means, LEM BIBREF13, DPEMM BIBREF14", "K-means, LEM, DPEMM" ]
3,841
qasper_e
en
null
973f0a7a906c8f84200007bfe0c611f041fe389c316707b7
In what 8 languages is PolyResponse engine used for restaurant search and booking system?
Introduction and Background Task-oriented dialogue systems are primarily designed to search and interact with large databases which contain information pertaining to a certain dialogue domain: the main purpose of such systems is to assist the users in accomplishing a well-defined task such as flight booking BIBREF0, tourist information BIBREF1, restaurant search BIBREF2, or booking a taxi BIBREF3. These systems are typically constructed around rigid task-specific ontologies BIBREF1, BIBREF4 which enumerate the constraints the users can express using a collection of slots (e.g., price range for restaurant search) and their slot values (e.g., cheap, expensive for the aforementioned slots). Conversations are then modelled as a sequence of actions that constrain slots to particular values. This explicit semantic space is manually engineered by the system designer. It serves as the output of the natural language understanding component as well as the input to the language generation component both in traditional modular systems BIBREF5, BIBREF6 and in more recent end-to-end task-oriented dialogue systems BIBREF7, BIBREF8, BIBREF9, BIBREF3. Working with such explicit semantics for task-oriented dialogue systems poses several critical challenges on top of the manual, time-consuming domain ontology design. First, it is difficult to collect domain-specific data labelled with explicit semantic representations. As a consequence, despite recent data collection efforts to enable training of task-oriented systems across multiple domains BIBREF0, BIBREF3, annotated datasets are still few and far between, as well as limited in size and the number of domains covered. Second, the current approach constrains the types of dialogue the system can support, resulting in artificial conversations, and breakdowns when the user does not understand what the system can and cannot support. In other words, training a task-based dialogue system for voice-controlled search in a new domain always implies the complex, expensive, and time-consuming process of collecting and annotating sufficient amounts of in-domain dialogue data. In this paper, we present a demo system based on an alternative approach to task-oriented dialogue. Relying on non-generative response retrieval, we describe the PolyResponse conversational search engine and its application in the task of restaurant search and booking. The engine is trained on hundreds of millions of real conversations from a general domain (i.e., Reddit), using an implicit representation of semantics that directly optimizes the task at hand. It learns what responses are appropriate in different conversational contexts, and consequently ranks a large pool of responses according to their relevance to the current user utterance and previous dialogue history (i.e., dialogue context). The technical aspects of the underlying conversational search engine are explained in detail in our recent work BIBREF11, while the details concerning the Reddit training data are also available in another recent publication BIBREF12. In this demo, we put the focus on the actual practical usefulness of the search engine by demonstrating its potential in the task of restaurant search, and extending it to deal with multi-modal data. We describe a PolyResponse system that assists the users in finding a relevant restaurant according to their preferences, and then additionally helps them to make a booking in the selected restaurant.
Due to its retrieval-based design, with the PolyResponse engine there is no need to engineer a structured ontology, or to solve the difficult task of general language generation. This design also bypasses the construction of dedicated decision-making policy modules. The large ranking model already encapsulates a lot of knowledge about natural language and conversational flow. Since retrieved system responses are presented visually to the user, the PolyResponse restaurant search engine is able to combine text responses with relevant visual information (e.g., photos from social media associated with the current restaurant and related to the user utterance), effectively yielding a multi-modal response. This setup of using voice as input, and responding visually is becoming more and more prevalent with the rise of smart screens like Echo Show and even mixed reality. Finally, the PolyResponse restaurant search engine is multilingual: it is currently deployed in 8 languages enabling search over restaurants in 8 cities around the world. System snapshots in four different languages are presented in Figure FIGREF16, while screencast videos that illustrate the dialogue flow with the PolyResponse engine are available at: https://tinyurl.com/y3evkcfz. PolyResponse: Conversational Search The PolyResponse system is powered by a single large conversational search engine, trained on a large amount of conversational and image data, as shown in Figure FIGREF2. In simple words, it is a ranking model that learns to score conversational replies and images in a given conversational context. The highest-scoring responses are then retrieved as system outputs. The system computes two sets of similarity scores: 1) $S(r,c)$ is the score of a candidate reply $r$ given a conversational context $c$, and 2) $S(p,c)$ is the score of a candidate photo $p$ given a conversational context $c$. These scores are computed as a scaled cosine similarity of a vector that represents the context ($h_c$), and a vector that represents the candidate response: a text reply ($h_r$) or a photo ($h_p$). For instance, $S(r,c)$ is computed as $S(r,c)=C cos(h_r,h_c)$, where $C$ is a learned constant. The part of the model dealing with text input (i.e., obtaining the encodings $h_c$ and $h_r$) follows the architecture introduced recently by Henderson:2019acl. We provide only a brief recap here; see the original paper for further details. PolyResponse: Conversational Search ::: Text Representation. The model, implemented as a deep neural network, learns to respond by training on hundreds of millions context-reply $(c,r)$ pairs. First, similar to Henderson:2017arxiv, raw text from both $c$ and $r$ is converted to unigrams and bigrams. All input text is first lower-cased and tokenised, numbers with 5 or more digits get their digits replaced by a wildcard symbol #, while words longer than 16 characters are replaced by a wildcard token LONGWORD. Sentence boundary tokens are added to each sentence. The vocabulary consists of the unigrams that occur at least 10 times in a random 10M subset of the Reddit training set (see Figure FIGREF2) plus the 200K most frequent bigrams in the same random subset. During training, we obtain $d$-dimensional feature representations ($d=320$) shared between contexts and replies for each unigram and bigram jointly with other neural net parameters. 
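The unigram/bigram featurisation described above lends itself to a short sketch. This is only an illustration under stated assumptions: the sentence-boundary symbols and the tokenisation regex below are ours and are not specified in the text. The transformer layers applied on top of these n-gram features are described next.

```python
import re

def featurize(text):
    """Minimal sketch of the unigram/bigram featurisation described above.
    Boundary tokens and the tokenisation rules are assumptions."""
    tokens = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.lower()):
        words = ["<s>"] + re.findall(r"\w+|[^\w\s]", sentence) + ["</s>"]
        for w in words:
            if w.isdigit() and len(w) >= 5:
                w = "#" * len(w)   # numbers with 5+ digits: digits -> wildcard #
            elif len(w) > 16:
                w = "LONGWORD"     # overly long words -> wildcard token
            tokens.append(w)
    bigrams = [f"{a}_{b}" for a, b in zip(tokens, tokens[1:])]
    return tokens, bigrams

print(featurize("Do they take bookings for 12345 guests?"))
```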
A state-of-the-art architecture based on transformers BIBREF13 is then applied to unigram and bigram vectors separately, which are then averaged to form the final 320-dimensional encoding. That encoding is then passed through three fully-connected non-linear hidden layers of dimensionality $1,024$. The final layer is linear and maps the text into the final $l$-dimensional ($l=512$) representation: $h_c$ and $h_r$. Other standard and more sophisticated encoder models can also be used to provide final encodings $h_c$ and $h_r$, but the current architecture shows a good trade-off between speed and efficacy with strong and robust performance in our empirical evaluations on the response retrieval task using Reddit BIBREF14, OpenSubtitles BIBREF15, and AmazonQA BIBREF16 conversational test data, see BIBREF12 for further details. In training the constant $C$ is constrained to lie between 0 and $\sqrt{l}$. Following Henderson:2017arxiv, the scoring function in the training objective aims to maximise the similarity score of context-reply pairs that go together, while minimising the score of random pairings: negative examples. Training proceeds via SGD with batches comprising 500 pairs (1 positive and 499 negatives). PolyResponse: Conversational Search ::: Photo Representation. Photos are represented using convolutional neural net (CNN) models pretrained on ImageNet BIBREF17. We use a MobileNet model with a depth multiplier of 1.4, and an input dimension of $224 \times 224$ pixels as in BIBREF18. This provides a $1,280 \times 1.4 = 1,792$-dimensional representation of a photo, which is then passed through a single hidden layer of dimensionality $1,024$ with ReLU activation, before being passed to a hidden layer of dimensionality 512 with no activation to provide the final representation $h_p$. PolyResponse: Conversational Search ::: Data Source 1: Reddit. For training text representations we use a Reddit dataset similar to AlRfou:2016arxiv. Our dataset is large and provides natural conversational structure: all Reddit data from January 2015 to December 2018, available as a public BigQuery dataset, span almost 3.7B comments BIBREF12. We preprocess the dataset to remove uninformative and long comments by retaining only sentences containing more than 8 and less than 128 word tokens. After pairing all comments/contexts $c$ with their replies $r$, we obtain more than 727M context-reply $(c,r)$ pairs for training, see Figure FIGREF2. PolyResponse: Conversational Search ::: Data Source 2: Yelp. Once the text encoding sub-networks are trained, a photo encoder is learned on top of a pretrained MobileNet CNN, using data taken from the Yelp Open dataset: it contains around 200K photos and their captions. Training of the multi-modal sub-network then maximises the similarity of captions encoded with the response encoder $h_r$ to the photo representation $h_p$. As a result, we can compute the score of a photo given a context using the cosine similarity of the respective vectors. A photo will be scored highly if it looks like its caption would be a good response to the current context. PolyResponse: Conversational Search ::: Index of Responses. The Yelp dataset is used at inference time to provide text and photo candidates to display to the user at each step in the conversation. Our restaurant search is currently deployed separately for each city, and we limit the responses to a given city. 
For instance, for our English system for Edinburgh we work with 396 restaurants, 4,225 photos (these include additional photos obtained using the Google Places API without captions), 6,725 responses created from the structured information about restaurants that Yelp provides, converted using simple templates to sentences of the form “Restaurant X accepts credit cards.”, and 125,830 sentences extracted from online reviews. PolyResponse: Conversational Search ::: PolyResponse in a Nutshell. The system jointly trains two encoding functions (with shared word embeddings) $f(context)$ and $g(reply)$ which produce encodings $h_c$ and $h_r$, so that the similarity $S(c,r)$ is high for all $(c,r)$ pairs from the Reddit training data and low for random pairs. The encoding function $g()$ is then frozen, and an encoding function $t(photo)$ is learnt which makes the similarity between a photo and its associated caption high for all (photo, caption) pairs from the Yelp dataset, and low for random pairs. $t$ is a CNN pretrained on ImageNet, with a shallow one-layer DNN on top. Given a new context/query, we then provide its encoding $h_c$ by applying $f()$, and find plausible text replies and photo responses according to functions $g()$ and $t()$, respectively. These should be responses that look like answers to the query, and photos that look like they would have captions that would be answers to the provided query. At inference, finding relevant candidates given a context reduces to computing $h_c$ for the context $c$, and finding nearby $h_r$ and $h_p$ vectors. The response vectors can all be pre-computed, and the nearest neighbour search can be further optimised using standard libraries such as Faiss BIBREF19 or approximate nearest neighbour retrieval BIBREF20, giving an efficient search that scales to billions of candidate responses. The system offers both voice and text input and output. Speech-to-text and text-to-speech conversion in the PolyResponse system is currently supported by the off-the-shelf Google Cloud tools. Dialogue Flow The ranking model lends itself to the one-shot task of finding the most relevant responses in a given context. However, a restaurant-browsing system needs to support a dialogue flow where the user finds a restaurant, and then asks questions about it. The dialogue state for each search scenario is represented as the set of restaurants that are considered relevant. This starts off as all the restaurants in the given city, and is assumed to monotonically decrease in size as the conversation progresses until the user converges to a single restaurant. A restaurant is only considered valid in the context of a new user input if it has relevant responses corresponding to it. This flow is summarised here: S1. Initialise $R$ as the set of all restaurants in the city. Given the user's input, rank all the responses in the response pool pertaining to restaurants in $R$. S2. Retrieve the top $N$ responses $r_1, r_2, \ldots , r_N$ with corresponding (sorted) cosine similarity scores: $s_1 \ge s_2 \ge \ldots \ge s_N$. S3. Compute probability scores $p_i \propto \exp (a \cdot s_i)$ with $\sum _{i=1}^N p_i = 1$, where $a>0$ is a tunable constant. S4. Compute a score $q_e$ for each restaurant/entity $e \in R$, $q_e = \sum _{i: r_i \in e} p_i$. S5. Update $R$ to the smallest set of restaurants with highest $q$ whose $q$-values sum up to more than a predefined threshold $t$. S6. Display the most relevant responses associated with the updated $R$, and return to S2.
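Steps S2-S5 amount to a soft voting procedure over the retrieved responses. The following is a minimal sketch of that narrowing step; the values of the constant $a$ and the threshold $t$, as well as all identifiers, are illustrative assumptions rather than values taken from the text.

```python
import math
from collections import defaultdict

def narrow_down(scored_responses, restaurants, a=10.0, t=0.9):
    """Sketch of S2-S5: softmax the top-N response scores, aggregate them per
    restaurant, and keep the smallest high-scoring set whose mass exceeds t.
    scored_responses: list of (restaurant_id, cosine_score), sorted by score."""
    weights = [math.exp(a * s) for _, s in scored_responses]        # S3
    z = sum(weights)
    probs = [w / z for w in weights]
    q = defaultdict(float)
    for (rest_id, _), p in zip(scored_responses, probs):            # S4
        q[rest_id] += p
    kept, mass = set(), 0.0
    for rest_id, score in sorted(q.items(), key=lambda kv: -kv[1]): # S5
        kept.add(rest_id)
        mass += score
        if mass > t:
            break
    return kept & restaurants

R = {"r1", "r2", "r3"}
top_n = [("r1", 0.82), ("r1", 0.80), ("r2", 0.55), ("r3", 0.40)]
print(narrow_down(top_n, R))  # r1's responses dominate the mass -> {'r1'}
```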
If there are multiple relevant restaurants, one response is shown from each. When only one restaurant is relevant, the top $N$ responses are all shown, and relevant photos are also displayed. The system does not require dedicated understanding, decision-making, and generation modules, and this dialogue flow does not rely on explicit task-tailored semantics. The set of relevant restaurants is kept internally while the system narrows it down across multiple dialogue turns. A simple set of predefined rules is used to provide a templatic spoken system response: e.g., an example rule is “One review of $e$ said $r$”, where $e$ refers to the restaurant, and $r$ to a relevant response associated with $e$. Note that while the demo is currently focused on the restaurant search task, the described “narrowing down” dialogue flow is generic and applicable to a variety of applications dealing with similar entity search. The system can use a set of intent classifiers to allow resetting the dialogue state, or to activate the separate restaurant booking dialogue flow. These classifiers are briefly discussed in §SECREF4. Other Functionality ::: Multilinguality. The PolyResponse restaurant search is currently available in 8 languages and for 8 cities around the world: English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade). Selected snapshots are shown in Figure FIGREF16, while we also provide videos demonstrating the use and behaviour of the systems at: https://tinyurl.com/y3evkcfz. A simple MT-based translate-to-source approach at inference time is currently used to enable the deployment of the system in other languages: 1) the pool of responses in each language is translated to English by Google Translate beforehand, and pre-computed encodings of their English translations are used as representations of each foreign language response; 2) a provided user utterance (i.e., context) is translated to English on-the-fly and its encoding $h_c$ is then learned. We plan to experiment with more sophisticated multilingual models in future work. Other Functionality ::: Voice-Controlled Menu Search. An additional functionality enables the user to get parts of the restaurant menu relevant to the current user utterance as responses. This is achieved by performing an additional ranking step of available menu items and retrieving the ones that are semantically relevant to the user utterance using exactly the same methodology as with ranking other responses. An example of this functionality is shown in Figure FIGREF21. Other Functionality ::: Resetting and Switching to Booking. The restaurant search system needs to support the discrete actions of restarting the conversation (i.e., resetting the set $R$), and should enable transferring to the slot-based table booking flow. This is achieved using two binary intent classifiers, that are run at each step in the dialogue. These classifiers make use of the already-computed $h_c$ vector that represents the user's latest text. A single-layer neural net is learned on top of the 512-dimensional encoding, with a ReLU activation and 100 hidden nodes. To train the classifiers, sets of 20 relevant paraphrases (e.g., “Start again”) are provided as positive examples. Finally, when the system successfully switches to the booking scenario, it proceeds to the slot filling task: it aims to extract all the relevant booking information from the user (e.g., date, time, number of people to dine). 
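As a rough illustration of these binary intent classifiers, the sketch below trains a single hidden layer (100 ReLU units) on top of pre-computed 512-dimensional context encodings. The encodings and training data are random stand-ins; only the architecture follows the description above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# stand-ins for encoded "reset" paraphrases (~20 positives) and ordinary queries
positives = rng.normal(loc=0.5, size=(20, 512))
negatives = rng.normal(loc=-0.5, size=(200, 512))
X = np.vstack([positives, negatives])
y = np.array([1] * 20 + [0] * 200)

reset_clf = MLPClassifier(hidden_layer_sizes=(100,), activation="relu", max_iter=500)
reset_clf.fit(X, y)

h_c = rng.normal(loc=0.5, size=(1, 512))     # encoding of the latest user utterance
if reset_clf.predict(h_c)[0] == 1:
    print("Resetting dialogue state: R <- all restaurants in the city.")
```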
The entire flow of the system illustrating both the search phase and the booking phase is provided as the supplemental video material. Conclusion and Future Work This paper has presented a general approach to search-based dialogue that does not rely on explicit semantic representations such as dialogue acts or slot-value ontologies, and allows for multi-modal responses. In future work, we will extend the current demo system to more tasks and languages, and work with more sophisticated encoders and ranking functions. Besides the initial dialogue flow from this work (§SECREF3), we will also work with more complex flows dealing, e.g., with user intent shifts.
[ "English, German, Spanish, Mandarin, Polish, Russian, Korean and Serbian", "English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade)" ]
2,754
qasper_e
en
null
fb7bbe83a369588ad73dd99826afd3c20d38769f5b0c2a31
What are the sources of the datasets?
Introduction Following developing news stories is imperative to making real-time decisions on important political and public safety matters. Given the abundance of media providers and languages, this endeavor is an extremely difficult task. As such, there is a strong demand for automatic clustering of news streams, so that they can be organized into stories or themes for further processing. Performing this task in an online and efficient manner is a challenging problem, not only for newswire, but also for scientific articles, online reviews, forum posts, blogs, and microblogs. A key challenge in handling document streams is that the story clusters must be generated on the fly in an online fashion: this requires handling documents one-by-one as they appear in the document stream. In this paper, we provide a treatment to the problem of online document clustering, i.e. the task of clustering a stream of documents into themes. For example, for news articles, we would want to cluster them into related news stories. To this end, we introduce a system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. Our clustering approach is part of a larger media monitoring project to solve the problem of monitoring massive text and TV/Radio streams (speech-to-text). In particular, media monitors write intelligence reports about the most relevant events, and being able to search, visualize and explore news clusters assists in gathering more insight about a particular story. Since relevant events may be spawned from any part of the world (and from many multilingual sources), it becomes imperative to cluster news across different languages. In terms of granularity, the type of story clusters we are interested in are the group of articles which, for example : (i) Narrate recent air-strikes in Eastern Ghouta (Syria); (ii) Describe the recent launch of Space X's Falcon Heavy rocket. Problem Formulation We focus on clustering of a stream of documents, where the number of clusters is not fixed and learned automatically. We denote by INLINEFORM0 a (potentially infinite) space of multilingual documents. Each document INLINEFORM1 is associated with a language in which it is written through a function INLINEFORM2 where INLINEFORM3 is a set of languages. For example, INLINEFORM4 could return English, Spanish or German. (In the rest of the paper, for an integer INLINEFORM5 , we denote by INLINEFORM6 the set INLINEFORM7 .) We are interested in associating each document with a monolingual cluster via the function INLINEFORM0 , which returns the cluster label given a document. This is done independently for each language, such that the space of indices we use for each language is separate. Furthermore, we interlace the problem of monolingual clustering with crosslingual clustering. This means that as part of our problem formulation we are also interested in a function INLINEFORM0 that associates each monolingual cluster with a crosslingual cluster, such that each crosslingual cluster only groups one monolingual cluster per different language, at a given time. The crosslingual cluster for a document INLINEFORM1 is INLINEFORM2 . As such, a crosslingual cluster groups together monolingual clusters, at most one for each different language. 
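A minimal sketch of this two-level bookkeeping follows (the naming is ours; the formulation above leaves the concrete data structures open):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MonoCluster:
    language: str                      # e.g. "en", "es", "de"
    doc_ids: List[str] = field(default_factory=list)

@dataclass
class CrossCluster:
    # at most one monolingual cluster per language at any given time
    members: Dict[str, int] = field(default_factory=dict)  # language -> mono cluster id

    def link(self, mono_id: int, language: str) -> None:
        assert language not in self.members
        self.members[language] = mono_id
```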
Intuitively, building both monolingual and crosslingual clusters allows the system to leverage high-precision monolingual features (e.g., words, named entities) to cluster documents of the same language, while simplifying the task of crosslingual clustering to the computation of similarity scores across monolingual clusters - which is a smaller problem space, since there are (by definition) less clusters than articles. We validate this choice in § SECREF5 . The Clustering Algorithm Each document INLINEFORM0 is represented by two vectors in INLINEFORM1 and INLINEFORM2 . The first vector exists in a “monolingual space” (of dimensionality INLINEFORM3 ) and is based on a bag-of-words representation of the document. The second vector exists in a “crosslingual space” (of dimensionality INLINEFORM4 ) which is common to all languages. More details about these representations are discussed in § SECREF4 . Document Representation In this section, we give more details about the way we construct the document representations in the monolingual and crosslingual spaces. In particular, we introduce the definition of the similarity functions INLINEFORM0 and INLINEFORM1 that were referred in § SECREF3 . Similarity Metrics Our similarity metric computes weighted cosine similarity on the different subvectors, both in the case of monolingual clustering and crosslingual clustering. Formally, for the monolingual case, the similarity is given by a function defined as: DISPLAYFORM0 and is computed on the TF-IDF subvectors where INLINEFORM0 is the number of subvectors for the relevant document representation. For the crosslingual case, we discuss below the function INLINEFORM1 , which has a similar structure. Here, INLINEFORM0 is the INLINEFORM1 th document in the stream and INLINEFORM2 is a monolingual cluster. The function INLINEFORM3 returns the cosine similarity between the document representation of the INLINEFORM4 th document and the centroid for cluster INLINEFORM5 . The vector INLINEFORM6 denotes the weights through which each of the cosine similarity values for each subvectors are weighted, whereas INLINEFORM7 denotes the weights for the timestamp features, as detailed further. Details on learning the weights INLINEFORM8 and INLINEFORM9 are discussed in § SECREF26 . The function INLINEFORM0 that maps a pair of document and cluster to INLINEFORM1 is defined as follows. Let DISPLAYFORM0 for a given INLINEFORM0 and INLINEFORM1 . For each document INLINEFORM2 and cluster INLINEFORM3 , we generate the following three-dimensional vector INLINEFORM4 : INLINEFORM0 where INLINEFORM1 is the timestamp for document INLINEFORM2 and INLINEFORM3 is the timestamp for the newest document in cluster INLINEFORM4 . INLINEFORM0 where INLINEFORM1 is the average timestamp for all documents in cluster INLINEFORM2 . INLINEFORM0 where INLINEFORM1 is the timestamp for the oldest document in cluster INLINEFORM2 . These three timestamp features model the time aspect of the online stream of news data and help disambiguate clustering decisions, since time is a valuable indicator that a news story has changed, even if a cluster representation has a reasonable match in the textual features with the incoming document. The same way a news story becomes popular and fades over time BIBREF2 , we model the probability of a document belonging to a cluster (in terms of timestamp difference) with a probability distribution. 
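A hedged sketch of this document-cluster similarity follows: a weighted sum of cosine similarities over the TF-IDF subvectors plus weighted timestamp features. The Gaussian kernel over timestamp differences is our placeholder for the unspecified probability distribution, and all names are illustrative.

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def time_feature(doc_ts, cluster_ts, sigma=3 * 24 * 3600):
    # placeholder kernel over the timestamp difference (in seconds)
    return math.exp(-((doc_ts - cluster_ts) ** 2) / (2 * sigma ** 2))

def similarity(doc_subvecs, doc_ts, cluster, subvec_weights, ts_weights):
    text_score = sum(w * cosine(d, c) for w, d, c in
                     zip(subvec_weights, doc_subvecs, cluster["centroid_subvecs"]))
    ts_feats = [time_feature(doc_ts, cluster["newest_ts"]),
                time_feature(doc_ts, cluster["mean_ts"]),
                time_feature(doc_ts, cluster["oldest_ts"])]
    return text_score + sum(w * f for w, f in zip(ts_weights, ts_feats))
```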
For the case of crosslingual clustering, we introduce INLINEFORM0 , which has a similar definition to INLINEFORM1 , only instead of passing document/cluster similarity feature vectors, we pass cluster/cluster similarities, across all language pairs. Furthermore, the features are the crosslingual embedding vectors of the sections title, body and both combined (similarly to the monolingual case) and the timestamp features. For denoting the cluster timestamp, we use the average timestamps of all articles in it. Learning to Rank Candidates In § SECREF19 we introduced INLINEFORM0 and INLINEFORM1 as the weight vectors for the several document representation features. We experiment with both setting these weights to just 1 ( INLINEFORM2 and INLINEFORM3 ) and also learning these weights using support vector machines (SVMs). To generate the SVM training data, we simulate the execution of the algorithm on a training data partition (which we do not get evaluated on) and in which the gold standard labels are given. We run the algorithm using only the first subvector INLINEFORM4 , which is the TF-IDF vector with the words of the document in the body and title. For each incoming document, we create a collection of positive examples, for the document and the clusters which share at least one document in the gold labeling. We then generate 20 negative examples for the document from the 20 best-matching clusters which are not correct. To find out the best-matching clusters, we rank them according to their similarity to the input document using only the first subvector INLINEFORM5 . Using this scheme we generate a collection of ranking examples (one for each document in the dataset, with the ranking of the best cluster matches), which are then trained using the SVMRank algorithm BIBREF3 . We run 5-fold cross-validation on this data to select the best model, and train both a separate model for each language according to INLINEFORM0 and a crosslingual model according to INLINEFORM1 . Experiments Our system was designed to cluster documents from a (potentially infinite) real-word data stream. The datasets typically used in the literature (TDT, Reuters) have a small number of clusters ( INLINEFORM0 20) with coarse topics (economy, society, etc.), and therefore are not relevant to the use case of media monitoring we treat - as it requires much more fine-grained story clusters about particular events. To evaluate our approach, we adapted a dataset constructed for the different purpose of binary classification of joining cluster pairs. We processed it to become a collection of articles annotated with monolingual and crosslingual cluster labels. Statistics about this dataset are given in Table TABREF30 . As described further, we tune the hyper-parameter INLINEFORM0 on the development set. As for the hyper-parameters related to the timestamp features, we fixed INLINEFORM1 and tuned INLINEFORM2 on the development set, yielding INLINEFORM3 . To compute IDF scores (which are global numbers computed across a corpus), we used a different and much larger dataset that we collected from Deutsche Welle's news website (http://www.dw.com/). The dataset consists of 77,268, 118,045 and 134,243 documents for Spanish, English and German, respectively. 
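Before turning to the results, the construction of the SVMRank training examples described above can be sketched as follows (function and variable names are ours; `tfidf_similarity` is assumed to score a document against a cluster using only the first bag-of-words subvector):

```python
def build_ranking_group(doc, gold_cluster_ids, clusters, tfidf_similarity, n_neg=20):
    """One ranking group per incoming document: clusters sharing a gold label
    with the document are positives; the 20 best-matching remaining clusters
    are negatives. Groups are later concatenated and fed to SVMRank."""
    scored = [(1 if cid in gold_cluster_ids else 0, tfidf_similarity(doc, c), cid)
              for cid, c in clusters.items()]
    positives = [e for e in scored if e[0] == 1]
    negatives = sorted((e for e in scored if e[0] == 0), key=lambda e: -e[1])[:n_neg]
    return positives + negatives
```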
The conclusions from our experiments are: (a) the weighting of the similarity metric features using SVM significantly outperforms unsupervised baselines such as CluStream (Table TABREF35 ); (b) the SVM approach significantly helps to learn when to create a new cluster, compared to simple grid search for the optimal INLINEFORM0 (Table TABREF39 ); (c) separating the feature space into one for monolingual clusters in the form of keywords and the other for crosslingual clusters based on crosslingual embeddings significantly helps performance. Monolingual Results In our first set of experiments, we report results on monolingual clustering for each language separately. Monolingual clustering of a stream of documents is an important problem that has been inspected by others, such as by ahmed2011unified and by aggarwal2006framework. We compare our results to our own implementation of the online micro-clustering routine presented by aggarwal2006framework, which shall be referred to as CluStream. We note that CluStream of aggarwal2006framework has been a widely used state-of-the-art system in media monitoring companies as well as academia, and serves as a strong baseline to this day. In our preliminary experiments, we also evaluated an online latent semantic analysis method, in which the centroids we keep for the function INLINEFORM0 (see § SECREF3 ) are the average of reduced dimensional vectors of the incoming documents as generated by an incremental singular value decomposition (SVD) of a document-term matrix that is updated after each incoming document. However, we discovered that online LSA performs significantly worse than representing the documents the way is described in § SECREF4 . Furthermore, it was also significantly slower than our algorithm due to the time it took to perform singular value decomposition. Table TABREF35 gives the final monolingual results on the three datasets. For English, we see that the significant improvement we get using our algorithm over the algorithm of aggarwal2006framework is due to an increased recall score. We also note that the trained models surpass the baseline for all languages, and that the timestamp feature (denoted by TS), while not required to beat the baseline, has a very relevant contribution in all cases. Although the results for both the baseline and our models seem to differ across languages, one can verify a consistent improvement from the latter to the former, suggesting that the score differences should be mostly tied to the different difficulty found across the datasets for each language. The presented scores show that our learning framework generalizes well to different languages and enables high quality clustering results. To investigate the impact of the timestamp features, we ran an additional experiment using only the same three timestamp features as used in the best model on the English dataset. This experiment yielded scores of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 , which lead us to conclude that while these features are not competitive when used alone (hence temporal information by itself is not sufficient to predict the clusters), they contribute significantly to recall with the final feature ensemble. We note that as described in § SECREF3 , the optimization of the INLINEFORM0 parameter is part of the development process. The parameter INLINEFORM1 is a similarity threshold used to decide when an incoming document should merge to the best cluster or create a new one. 
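Before the tuning of this threshold is discussed, the merge-or-create decision itself can be pictured with a small loop (a sketch with our own names; `similarity` stands for the weighted metric sketched earlier):

```python
def assign(doc, clusters, similarity, tau):
    """Merge doc into the most similar cluster, or open a new cluster when the
    best similarity falls below the threshold tau."""
    best_id, best_score = None, float("-inf")
    for cid, cluster in clusters.items():
        score = similarity(doc, cluster)
        if score > best_score:
            best_id, best_score = cid, score
    if best_id is None or best_score < tau:
        best_id = len(clusters)              # open a new cluster
        clusters[best_id] = {"docs": []}
    clusters[best_id].setdefault("docs", []).append(doc)
    return best_id
```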
We tune INLINEFORM2 on the development set for each language, and the sensitivity to it is demonstrated in Figure FIGREF36 (this process is further referred to as INLINEFORM3 ). Although applying grid-search on this parameter is the most immediate approach to this problem, we experimented with a different method which yielded superior results: as described further, we discuss how to do this process with an additional classifier (denoted SVM-merge), which captures more information about the incoming documents and the existing clusters. Additionally, we also experimented with computing the monolingual clusters with the same embeddings as used in the crosslingual clustering phase, which yielded poor results. In particular, this system achieved INLINEFORM0 score of INLINEFORM1 for English, which is below the bag-of-words baseline presented in Table TABREF35 . This result supports the approach we then followed of having two separate feature spaces for the monolingual and crosslingual clustering systems, where the monolingual space is discrete and the crosslingual space is based on embeddings. To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . We note that adding features increases the accuracy of the SVM ranker, especially the timestamp features. However, the timestamp feature actually interferes with our optimization of INLINEFORM0 to identify when new clusters are needed, although they improve the SVM reranking accuracy. We speculate this is true because high accuracy in the reranking problem does not necessarily help with identifying when new clusters need to be opened. To investigate this issue, we experimented with a different technique to learn when to create a new cluster. To this end, we trained another SVM classifier just to learn this decision, this time a binary classifier using LIBLINEAR BIBREF4 , by passing the max of the similarity of each feature between the incoming document and the current clustering pool as the input feature vector. This way, the classifier learns when the current clusters, as a whole, are of a different news story than the incoming document. As presented in Table TABREF39 , this method, which we refer to as SVM-merge, solved the issue of searching for the optimal INLINEFORM0 parameter for the SVM-rank model with timestamps, by greatly improving the F INLINEFORM1 score in respect to the original grid-search approach ( INLINEFORM2 ). Crosslingual Results As mentioned in § SECREF3 , crosslingual embeddings are used for crosslingual clustering. We experimented with the crosslingual embeddings of gardner2015translation and ammar2016massively. In our preliminary experiments we found that the former worked better for our use-case than the latter. We test two different scenarios for optimizing the similarity threshold INLINEFORM0 for the crosslingual case. Table TABREF41 shows the results for these experiments. First, we consider the simpler case of adjusting a global INLINEFORM1 parameter for the crosslingual distances, as also described for the monolingual case. As shown, this method works poorly, since the INLINEFORM2 grid-search could not find a reasonable INLINEFORM3 which worked well for every possible language pair. Subsequently, we also consider the case of using English as a pivot language (see § SECREF3 ), where distances for every other language are only compared to English, and crosslingual clustering decisions are made only based on this distance. 
This yielded our best crosslingual score of INLINEFORM0 , confirming that crosslingual similarity is of higher quality between each language and English, for the embeddings we used. This score represents only a small degradation in respect to the monolingual results, since clustering across different languages is a harder problem. Related Work Early research efforts, such as the TDT program BIBREF5 , have studied news clustering for some time. The problem of online monolingual clustering algorithms (for English) has also received a fair amount of attention in the literature. One of the earlier papers by aggarwal2006framework introduced a two-step clustering system with both offline and online components, where the online model is based on a streaming implementation of INLINEFORM0 -means and a bag-of-words document representation. Other authors have experimented with distributed representations, such as ahmed2011unified, who cluster news into storylines using Markov chain Monte Carlo methods, rehureklrec who used incremental Singular Value Decomposition (SVD) to find relevant topics from streaming data, and sato2017distributed who used the paragraph vector model BIBREF6 in an offline clustering setting. More recently, crosslingual linking of clusters has been discussed by rupnik2016news in the context of linking existing clusters from the Event Registry BIBREF7 in a batch fashion, and by steinberger2016mediagist who also present a batch clustering linking system. However, these are not “truly” online crosslingual clustering systems since they only decide on the linking of already-built monolingual clusters. In particular, rupnik2016news compute distances of document pairs across clusters using nearest neighbors, which might not scale well in an online setting. As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach. Preliminary work makes use of deep learning techniques BIBREF8 , BIBREF9 to cluster documents while learning their representations, but not in an online or multilingual fashion, and with a very small number of cluster labels (4, in the case of the text benchmark). In our work, we studied the problem of monolingual and crosslingual clustering, having experimented several directions and methods and the impact they have on the final clustering quality. We described the first system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. Conclusion We described a method for monolingual and crosslingual clustering of an incoming stream of documents. The method works by maintaining centroids for the monolingual and crosslingual clusters, where a monolingual cluster groups a set of documents and a crosslingual cluster groups a set of monolingual clusters. We presented an online crosslingual clustering method which auto-corrects past decisions in an efficient way. We showed that our method gives state-of-the-art results on a multilingual news article dataset for English, Spanish and German. Finally, we discussed how to leverage different SVM training procedures for ranking and classification to improve monolingual and crosslingual clustering decisions. Our system is integrated in a larger media monitoring project BIBREF10 , BIBREF11 and solving the use-cases of monitors and journalists, having been validated with qualitative user testing. 
Acknowledgments We would like to thank Esma Balkır, Nikos Papasarantopoulos, Afonso Mendes, Shashi Narayan and the anonymous reviewers for their feedback. This project was supported by the European H2020 project SUMMA, grant agreement 688139 (see http://www.summa-project.eu) and by a grant from Bloomberg.
[ "rupnik2016news", "rupnik2016news, Deutsche Welle's news website" ]
3,160
qasper_e
en
null
37c0ca334bad96a1cf17879108e112591f7088f3794b6533
Is the lexicon the same for all languages?
Introduction Accurate language identification (LID) is the first step in many natural language processing and machine comprehension pipelines. If the language of a piece of text is known then the appropriate downstream models like parts of speech taggers and language models can be applied as required. LID is further also an important step in harvesting scarce language resources. Harvested data can be used to bootstrap more accurate LID models and in doing so continually improve the quality of the harvested data. Availability of data is still one of the big roadblocks for applying data driven approaches like supervised machine learning in developing countries. Having 11 official languages of South Africa has lead to initiatives (discussed in the next section) that have had positive effect on the availability of language resources for research. However, many of the South African languages are still under resourced from the point of view of building data driven models for machine comprehension and process automation. Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages. This paper presents a hierarchical naive Bayesian and lexicon based classifier for LID of short pieces of text of 15-20 characters long. The algorithm is evaluated against recent approaches using existing test sets from previous works on South African languages as well as the Discriminating between Similar Languages (DSL) 2015 and 2017 shared tasks. Section SECREF2 reviews existing works on the topic and summarises the remaining research problems. Section SECREF3 of the paper discusses the proposed algorithm and Section SECREF4 presents comparative results. Related Works The focus of this section is on recently published datasets and LID research applicable to the South African context. An in depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0. The datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and also available on Kaggle . The DSL datasets, like other LID datasets, consists of text sentences labelled by language. The 2017 dataset, for example, contains 14 languages over 6 language groups with 18000 training samples and 1000 testing samples per language. The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average. In South Africa, a multilingual corpus of academic texts produced by university students with different mother tongues is being developed BIBREF3. The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1000 paragraphs of 235 languages. A possibly useful link can also be made BIBREF5 between Native Language Identification (NLI) (determining the native language of the author of a text) and Language Variety Identification (LVI) (classification of different varieties of a single language) which opens up more datasets. The Leipzig Corpora Collection BIBREF6, the Universal Declaration of Human Rights and Tatoeba are also often used sources of data. 
The NCHLT text corpora BIBREF7 is likely a good starting point for a shared LID task dataset for the South African languages BIBREF8. The NCHLT text corpora contains enough data to have 3500 training samples and 600 testing samples of 300+ character sentences per language. Researchers have recently started applying existing algorithms for tasks like neural machine translation in earnest to such South African language datasets BIBREF9. Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID . Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames between Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful to reduce the requirement for data. Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task. In these models text features can include character and word n-grams as well as informative character and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available. In summary, LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. Increased confusion can in general be expected between shorter pieces of text and languages that are more closely related. Shallow methods still seem to work well compared to deeper models for LID. Other remaining research opportunities seem to be data harvesting, building standardised datasets and creating shared tasks for South Africa and Africa. Support for language codes that include more languages seems to be growing and discoverability of research is improving with more survey papers coming out. Paywalls also seem to no longer be a problem; the references used in this paper was either openly published or available as preprint papers. Methodology The proposed LID algorithm builds on the work in BIBREF8 and BIBREF26. We apply a naive Bayesian classifier with character (2, 4 & 6)-grams, word unigram and word bigram features with a hierarchical lexicon based classifier. The naive Bayesian classifier is trained to predict the specific language label of a piece of text, but used to first classify text as belonging to either the Nguni family, the Sotho family, English, Afrikaans, Xitsonga or Tshivenda. 
The scikit-learn multinomial naive Bayes classifier is used for the implementation with an alpha smoothing value of 0.01 and hashed text features. The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets. The lexicon based classifier is designed to trade higher precision for lower recall. The proposed implementation is considered confident if the number of words from the winning language is at least one more than the number of words considered to be from the language scored in second place. The stacked classifier is tested against three public LID implementations BIBREF17, BIBREF23, BIBREF8. The LID implementation described in BIBREF17 is available on GitHub and is trained and tested according to a post on the fasttext blog. Character (5-6)-gram features with 16 dimensional vectors worked the best. The implementation discussed in BIBREF23 is available from https://github.com/tomkocmi/LanideNN. Following the instructions for an OSX pip install of an old r0.8 release of TensorFlow, the LanideNN code could be executed in Python 3.7.4. Settings were left at their defaults and a learning rate of 0.001 was used followed by a refinement with learning rate of 0.0001. Only one code modification was applied to return the results from a method that previously just printed to screen. The LID algorithm described in BIBREF8 is also available on GitHub. The stacked classifier is also tested against the results reported for four other algorithms BIBREF16, BIBREF26, BIBREF24, BIBREF15. All the comparisons are done using the NCHLT BIBREF7, DSL 2015 BIBREF19 and DSL 2017 BIBREF1 datasets discussed in Section SECREF2. Results and Analysis The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8. Different variations of the proposed classifier were evaluated. A single NB classifier (NB), a stack of two NB classifiers (NB+NB), a stack of a NB classifier and lexicon (NB+Lex) and a lexicon (Lex) by itself. A lexicon with a 50% training token dropout is also listed to show the impact of the lexicon support on the accuracy. From the results it seems that the DSL 2017 task might be harder than the DSL 2015 and NCHLT tasks. Also, the results for the implementation discussed in BIBREF23 might seem low, but the results reported in that paper is generated on longer pieces of text so lower scores on the shorter pieces of text derived from the NCHLT corpora is expected. The accuracy of the proposed algorithm seems to be dependent on the support of the lexicon. Without a good lexicon a non-stacked naive Bayesian classifier might even perform better. The execution performance of some of the LID implementations are shown in Table TABREF10. Results were generated on an early 2015 13-inch Retina MacBook Pro with a 2.9 GHz CPU (Turbo Boosted to 3.4 GHz) and 8GB RAM. The C++ implementation in BIBREF17 is the fastest. 
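A condensed sketch of the stacked classifier described above follows (the execution-performance comparison continues below). The (2, 6) character n-gram range and the toy training pair are simplifications; the real system also uses word unigram and bigram features and is trained on the NCHLT corpora.

```python
from collections import Counter
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

GROUP = {"zul": "nguni", "xho": "nguni", "nbl": "nguni", "ssw": "nguni",
         "nso": "sotho", "sot": "sotho", "tsn": "sotho",
         "afr": "afr", "eng": "eng", "tso": "tso", "ven": "ven"}

vectorizer = HashingVectorizer(analyzer="char", ngram_range=(2, 6), alternate_sign=False)
nb = MultinomialNB(alpha=0.01)
# toy fit on two greetings; the real system trains on thousands of sentences
nb.fit(vectorizer.transform(["sawubona unjani", "dumela o kae"]), ["zul", "sot"])

def predict(text, lexicon):
    """lexicon: dict mapping word -> language, built over the corpus."""
    nb_label = nb.predict(vectorizer.transform([text]))[0]
    if GROUP[nb_label] not in ("nguni", "sotho"):
        return nb_label                       # no within-group disambiguation needed
    votes = Counter(lexicon[w] for w in text.lower().split()
                    if w in lexicon and GROUP[lexicon[w]] == GROUP[nb_label])
    top = votes.most_common(2)
    if top and (len(top) == 1 or top[0][1] >= top[1][1] + 1):
        return top[0][0]                      # confident lexicon decision
    return nb_label                           # fall back to naive Bayes
```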
The implementation in BIBREF8 makes use of un-hashed feature representations which causes it to be slower than the proposed sklearn implementation. The execution performance of BIBREF23 might improve by a factor of five to ten when executed on a GPU. Conclusion LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. The proposed algorithm was evaluated on three existing datasets and compared to the implementations of three public LID implementations as well as to reported results of four other algorithms. It performed well relative to the other methods beating their results. However, the performance is dependent on the support of the lexicon. We would like to investigate the value of a lexicon in a production system and how to possibly maintain it using self-supervised learning. We are investigating the application of deeper language models some of which have been used in more recent DSL shared tasks. We would also like to investigate data augmentation strategies to reduce the amount of training data that is required. Further research opportunities include data harvesting, building standardised datasets and shared tasks for South Africa as well as the rest of Africa. In general, the support for language codes that include more languages seems to be growing, discoverability of research is improving and paywalls seem to no longer be a big problem in getting access to published research.
[ "Yes", "Yes" ]
1,868
qasper_e
en
null
f183167177d0ddfc059aff91b6bd78bd5e9861b998261a2d
What other sentence embeddings methods are evaluated?
Introduction In this publication, we present Sentence-BERT (SBERT), a modification of the BERT network using siamese and triplet networks that is able to derive semantically meaningful sentence embeddings. This enables BERT to be used for certain new tasks, which up-to-now were not applicable for BERT. These tasks include large-scale semantic similarity comparison, clustering, and information retrieval via semantic search. BERT set new state-of-the-art performance on various sentence classification and sentence-pair regression tasks. BERT uses a cross-encoder: Two sentences are passed to the transformer network and the target value is predicted. However, this setup is unsuitable for various pair regression tasks due to too many possible combinations. Finding in a collection of $n=10\,000$ sentences the pair with the highest similarity requires with BERT $n\cdot (n-1)/2=49\,995\,000$ inference computations. On a modern V100 GPU, this requires about 65 hours. Similar, finding which of the over 40 million existent questions of Quora is the most similar for a new question could be modeled as a pair-wise comparison with BERT, however, answering a single query would require over 50 hours. A common method to address clustering and semantic search is to map each sentence to a vector space such that semantically similar sentences are close. Researchers have started to input individual sentences into BERT and to derive fixed-size sentence embeddings. The most commonly used approach is to average the BERT output layer (known as BERT embeddings) or by using the output of the first token (the [CLS] token). As we will show, this common practice yields rather bad sentence embeddings, often worse than averaging GloVe embeddings BIBREF2. To alleviate this issue, we developed SBERT. The siamese network architecture enables that fixed-sized vectors for input sentences can be derived. Using a similarity measure like cosine-similarity or Manhatten / Euclidean distance, semantically similar sentences can be found. These similarity measures can be performed extremely efficient on modern hardware, allowing SBERT to be used for semantic similarity search as well as for clustering. The complexity for finding the most similar sentence pair in a collection of 10,000 sentences is reduced from 65 hours with BERT to the computation of 10,000 sentence embeddings (5 seconds with SBERT) and computing cosine-similarity (0.01 seconds). By using optimized index structures, finding the most similar Quora question can be reduced from 50 hours to a few milliseconds BIBREF3. We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent BIBREF4 and Universal Sentence Encoder BIBREF5. On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder. On SentEval BIBREF6, an evaluation toolkit for sentence embeddings, we achieve an improvement of 2.1 and 2.6 points, respectively. SBERT can be adapted to a specific task. It sets new state-of-the-art performance on a challenging argument similarity dataset BIBREF7 and on a triplet dataset to distinguish sentences from different sections of a Wikipedia article BIBREF8. The paper is structured in the following way: Section SECREF3 presents SBERT, section SECREF4 evaluates SBERT on common STS tasks and on the challenging Argument Facet Similarity (AFS) corpus BIBREF7. 
Section SECREF5 evaluates SBERT on SentEval. In section SECREF6, we perform an ablation study to test some design aspect of SBERT. In section SECREF7, we compare the computational efficiency of SBERT sentence embeddings in contrast to other state-of-the-art sentence embedding methods. Related Work We first introduce BERT, then, we discuss state-of-the-art sentence embedding methods. BERT BIBREF0 is a pre-trained transformer network BIBREF9, which set for various NLP tasks new state-of-the-art results, including question answering, sentence classification, and sentence-pair regression. The input for BERT for sentence-pair regression consists of the two sentences, separated by a special [SEP] token. Multi-head attention over 12 (base-model) or 24 layers (large-model) is applied and the output is passed to a simple regression function to derive the final label. Using this setup, BERT set a new state-of-the-art performance on the Semantic Textual Semilarity (STS) benchmark BIBREF10. RoBERTa BIBREF1 showed, that the performance of BERT can further improved by small adaptations to the pre-training process. We also tested XLNet BIBREF11, but it led in general to worse results than BERT. A large disadvantage of the BERT network structure is that no independent sentence embeddings are computed, which makes it difficult to derive sentence embeddings from BERT. To bypass this limitations, researchers passed single sentences through BERT and then derive a fixed sized vector by either averaging the outputs (similar to average word embeddings) or by using the output of the special CLS token (for example: bertsentenceembeddings1,bertsentenceembeddings2,bertsentenceembeddings3). These two options are also provided by the popular bert-as-a-service-repository. Up to our knowledge, there is so far no evaluation if these methods lead to useful sentence embeddings. Sentence embeddings are a well studied area with dozens of proposed methods. Skip-Thought BIBREF12 trains an encoder-decoder architecture to predict the surrounding sentences. InferSent BIBREF4 uses labeled data of the Stanford Natural Language Inference dataset BIBREF13 and the Multi-Genre NLI dataset BIBREF14 to train a siamese BiLSTM network with max-pooling over the output. Conneau et al. showed, that InferSent consistently outperforms unsupervised methods like SkipThought. Universal Sentence Encoder BIBREF5 trains a transformer network and augments unsupervised learning with training on SNLI. hill-etal-2016-learning showed, that the task on which sentence embeddings are trained significantly impacts their quality. Previous work BIBREF4, BIBREF5 found that the SNLI datasets are suitable for training sentence embeddings. yang-2018-learning presented a method to train on conversations from Reddit using siamese DAN and siamese transformer networks, which yielded good results on the STS benchmark dataset. polyencoders addresses the run-time overhead of the cross-encoder from BERT and present a method (poly-encoders) to compute a score between $m$ context vectors and pre-computed candidate embeddings using attention. This idea works for finding the highest scoring sentence in a larger collection. However, poly-encoders have the drawback that the score function is not symmetric and the computational overhead is too large for use-cases like clustering, which would require $O(n^2)$ score computations. Previous neural sentence embedding methods started the training from a random initialization. 
In this publication, we use the pre-trained BERT and RoBERTa network and only fine-tune it to yield useful sentence embeddings. This reduces significantly the needed training time: SBERT can be tuned in less than 20 minutes, while yielding better results than comparable sentence embedding methods. Model SBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN. In order to fine-tune BERT / RoBERTa, we create siamese and triplet networks BIBREF15 to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity. The network structure depends on the available training data. We experiment with the following structures and objective functions. Classification Objective Function. We concatenate the sentence embeddings $u$ and $v$ with the element-wise difference $|u-v|$ and multiply it with the trainable weight $W_t \in \mathbb {R}^{3n \times k}$: where $n$ is the dimension of the sentence embeddings and $k$ the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure FIGREF4. Regression Objective Function. The cosine-similarity between the two sentence embeddings $u$ and $v$ is computed (Figure FIGREF5). We use mean-squared-error loss as the objective function. Triplet Objective Function. Given an anchor sentence $a$, a positive sentence $p$, and a negative sentence $n$, triplet loss tunes the network such that the distance between $a$ and $p$ is smaller than the distance between $a$ and $n$. Mathematically, we minimize the following loss function: with $s_x$ the sentence embedding for $a$/$n$/$p$, $||\cdot ||$ a distance metric and margin $\epsilon $. Margin $\epsilon $ ensures that $s_p$ is at least $\epsilon $ closer to $s_a$ than $s_n$. As metric we use Euclidean distance and we set $\epsilon =1$ in our experiments. Model ::: Training Details We train SBERT on the combination of the SNLI BIBREF13 and the Multi-Genre NLI BIBREF14 dataset. The SNLI is a collection of 570,000 sentence pairs annotated with the labels contradiction, eintailment, and neutral. MultiNLI contains 430,000 sentence pairs and covers a range of genres of spoken and written text. We fine-tune SBERT with a 3-way softmax-classifier objective function for one epoch. We used a batch-size of 16, Adam optimizer with learning rate $2\mathrm {e}{-5}$, and a linear learning rate warm-up over 10% of the training data. Our default pooling strategy is MEAN. Evaluation - Semantic Textual Similarity We evaluate the performance of SBERT for common Semantic Textual Similarity (STS) tasks. State-of-the-art methods often learn a (complex) regression function that maps sentence embeddings to a similarity score. However, these regression functions work pair-wise and due to the combinatorial explosion those are often not scalable if the collection of sentences reaches a certain size. Instead, we always use cosine-similarity to compare the similarity between two sentence embeddings. We ran our experiments also with negative Manhatten and negative Euclidean distances as similarity measures, but the results for all approaches remained roughly the same. 
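To make the siamese setup concrete, here is a compact sketch of MEAN pooling and the classification objective, assuming PyTorch and the Hugging Face transformers package as dependencies (the excerpt does not name the implementation framework, so treat this as an approximation rather than the authors' code).

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
W_t = torch.nn.Linear(3 * 768, 3)            # trainable head for the 3 NLI labels

def embed(sentences):
    enc = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    out = bert(**enc).last_hidden_state       # (batch, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)  # MEAN pooling over tokens

u = embed(["A man is playing a guitar."])
v = embed(["Someone plays an instrument."])
logits = W_t(torch.cat([u, v, (u - v).abs()], dim=-1))   # (u, v, |u - v|)
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([1]))
loss.backward()                               # BERT and W_t are fine-tuned jointly

# at inference (and for the regression objective) only cosine similarity is used
print(torch.nn.functional.cosine_similarity(u, v).item())
```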
Evaluation - Semantic Textual Similarity ::: Unsupervised STS We evaluate the performance of SBERT for STS without using any STS-specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent; their similarity is also computed by cosine-similarity. The results are depicted in Table TABREF6. The results show that directly using the output of BERT leads to rather poor performance. Averaging the BERT embeddings achieves an average correlation of only 54.81, and using the CLS-token output only achieves an average correlation of 29.19. Both are worse than computing average GloVe embeddings. Using the described siamese network structure and fine-tuning mechanism substantially improves the correlation, outperforming both InferSent and Universal Sentence Encoder substantially. The only dataset where SBERT performs worse than Universal Sentence Encoder is SICK-R. Universal Sentence Encoder was trained on various datasets, including news, question-answer pages and discussion forums, which appears to be more suitable for the data of SICK-R. In contrast, SBERT was pre-trained only on Wikipedia (via BERT) and on NLI data. While RoBERTa was able to improve the performance for several supervised tasks, we only observe minor differences between SBERT and SRoBERTa for generating sentence embeddings. Evaluation - Semantic Textual Similarity ::: Supervised STS The STS benchmark (STSb) BIBREF10 is a popular dataset to evaluate supervised STS systems. The data includes 8,628 sentence pairs from the three categories captions, news, and forums. It is divided into train (5,749), dev (1,500) and test (1,379) sets. BERT set a new state-of-the-art performance on this dataset by passing both sentences to the network and using a simple regression method for the output. We use the training set to fine-tune SBERT using the regression objective function. At prediction time, we compute the cosine-similarity between the sentence embeddings. All systems are trained with 10 random seeds to counter variances BIBREF23. The results are depicted in Table TABREF10. We experimented with two setups: only training on STSb, and first training on NLI, then training on STSb. We observe that the latter strategy leads to a slight improvement of 1-2 points. This two-step approach had an especially large impact for the BERT cross-encoder, which improved the performance by 3-4 points. We do not observe a significant difference between BERT and RoBERTa. Evaluation - Semantic Textual Similarity ::: Argument Facet Similarity We evaluate SBERT on the Argument Facet Similarity (AFS) corpus by MisraEW16. The AFS corpus contains 6,000 sentential argument pairs from social media dialogs on three controversial topics: gun control, gay marriage, and death penalty. The data was annotated on a scale from 0 (“different topic") to 5 (“completely equivalent"). The similarity notion in the AFS corpus is fairly different from the similarity notion in the STS datasets from SemEval. STS data is usually descriptive, while AFS data are argumentative excerpts from dialogs. 
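Before the AFS discussion continues, here is a minimal sketch of the unsupervised STS protocol described above: cosine-similarity between precomputed sentence embeddings, scored with Spearman's rank correlation against the gold labels. The array names and the random stand-in data are illustrative.

```python
# Sketch of the unsupervised STS evaluation: Spearman correlation between pair-wise
# cosine similarities and gold relatedness labels. Random arrays stand in for real data.
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb_a, emb_b, gold_scores):
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)              # cosine similarity per sentence pair
    rho, _ = spearmanr(cosine, gold_scores)   # rank correlation with the 0-5 gold labels
    return 100 * rho                          # correlations are commonly reported as rho x 100

rng = np.random.default_rng(0)
print(sts_spearman(rng.normal(size=(50, 768)), rng.normal(size=(50, 768)), rng.uniform(0, 5, 50)))
```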
To be considered similar, arguments must not only make similar claims, but also provide a similar reasoning. Further, the lexical gap between the sentences in AFS is much larger. Hence, simple unsupervised methods as well as state-of-the-art STS systems perform badly on this dataset BIBREF24. We evaluate SBERT on this dataset in two scenarios: 1) As proposed by Misra et al., we evaluate SBERT using 10-fold cross-validation. A draw-back of this evaluation setup is that it is not clear how well approaches generalize to different topics. Hence, 2) we evaluate SBERT in a cross-topic setup. Two topics serve for training and the approach is evaluated on the left-out topic. We repeat this for all three topics and average the results. SBERT is fine-tuned using the Regression Objective Function. The similarity score is computed using cosine-similarity based on the sentence embeddings. We also provide the Pearson correlation $r$ to make the results comparable to Misra et al. However, we showed BIBREF22 that Pearson correlation has some serious drawbacks and should be avoided for comparing STS systems. The results are depicted in Table TABREF12. Unsupervised methods like tf-idf, average GloVe embeddings or InferSent perform rather badly on this dataset with low scores. Training SBERT in the 10-fold cross-validation setup gives a performance that is nearly on-par with BERT. However, in the cross-topic evaluation, we observe a performance drop of SBERT by about 7 points Spearman correlation. To be considered similar, arguments should address the same claims and provide the same reasoning. BERT is able to use attention to compare directly both sentences (e.g. word-by-word comparison), while SBERT must map individual sentences from an unseen topic to a vector space such that arguments with similar claims and reasons are close. This is a much more challenging task, which appears to require more than just two topics for training to work on-par with BERT. Evaluation - Semantic Textual Similarity ::: Wikipedia Sections Distinction ein-dor-etal-2018-learning use Wikipedia to create a thematically fine-grained train, dev and test set for sentence embeddings methods. Wikipedia articles are separated into distinct sections focusing on certain aspects. Dor et al. assume that sentences in the same section are thematically closer than sentences in different sections. They use this to create a large dataset of weakly labeled sentence triplets: The anchor and the positive example come from the same section, while the negative example comes from a different section of the same article. For example, from the Alice Arnold article: Anchor: Arnold joined the BBC Radio Drama Company in 1988., positive: Arnold gained media attention in May 2012., negative: Balding and Arnold are keen amateur golfers. We use the dataset from Dor et al. We use the Triplet Objective, train SBERT for one epoch on the about 1.8 Million training triplets and evaluate it on the 222,957 test triplets. Test triplets are from a distinct set of Wikipedia articles. As evaluation metric, we use accuracy: Is the positive example closer to the anchor than the negative example? Results are presented in Table TABREF14. Dor et al. fine-tuned a BiLSTM architecture with triplet loss to derive sentence embeddings for this dataset. As the table shows, SBERT clearly outperforms the BiLSTM approach by Dor et al. Evaluation - SentEval SentEval BIBREF6 is a popular toolkit to evaluate the quality of sentence embeddings. 
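Before turning to SentEval in detail, here is a minimal sketch of the accuracy metric used for the Wikipedia sections task above: a triplet counts as correct when the anchor is closer to the positive than to the negative example, using Euclidean distance. The array names are illustrative.

```python
# Sketch of the triplet accuracy used for the Wikipedia sections task: a triplet is
# correct when the anchor is closer to the positive than to the negative embedding.
import numpy as np

def triplet_accuracy(anchor, positive, negative):
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # Euclidean distance anchor <-> positive
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # Euclidean distance anchor <-> negative
    return float(np.mean(d_pos < d_neg))               # fraction of correctly ordered triplets
```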
Sentence embeddings are used as features for a logistic regression classifier. The logistic regression classifier is trained on various tasks in a 10-fold cross-validation setup and the prediction accuracy is computed for the test-fold. The purpose of SBERT sentence embeddings is not to be used for transfer learning to other tasks. Here, we think fine-tuning BERT as described by devlin2018bert for new tasks is the more suitable method, as it updates all layers of the BERT network. However, SentEval can still give an impression of the quality of our sentence embeddings for various tasks. We compare the SBERT sentence embeddings to other sentence embedding methods on the following seven SentEval transfer tasks: MR: Sentiment prediction for movie review snippets on a five-star scale BIBREF25. CR: Sentiment prediction of customer product reviews BIBREF26. SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27. MPQA: Phrase-level opinion polarity classification from newswire BIBREF28. SST: Stanford Sentiment Treebank with binary labels BIBREF29. TREC: Fine-grained question-type classification from TREC BIBREF30. MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31. The results can be found in Table TABREF15. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embedding methods on this task. It appears that the sentence embeddings from SBERT capture sentiment information well: we observe large improvements for all sentiment tasks (MR, CR, and SST) from SentEval in comparison to InferSent and Universal Sentence Encoder. The only dataset where SBERT is significantly worse than Universal Sentence Encoder is the TREC dataset. Universal Sentence Encoder was pre-trained on question-answering data, which appears to be beneficial for the question-type classification task of the TREC dataset. Average BERT embeddings or the CLS-token output from a BERT network achieved bad results for various STS tasks (Table TABREF6), worse than average GloVe embeddings. However, for SentEval, average BERT embeddings and the BERT CLS-token output achieve decent results (Table TABREF15), outperforming average GloVe embeddings. The reason for this lies in the different setups. For the STS tasks, we used cosine-similarity to estimate the similarities between sentence embeddings. Cosine-similarity treats all dimensions equally. In contrast, SentEval fits a logistic regression classifier to the sentence embeddings. This allows certain dimensions to have a higher or lower impact on the classification result. We conclude that average BERT embeddings / CLS-token output from BERT return sentence embeddings that are unsuitable to be used with cosine-similarity or with Manhattan / Euclidean distance. For transfer learning, they yield slightly worse results than InferSent or Universal Sentence Encoder. However, using the described fine-tuning setup with a siamese network structure on NLI datasets yields sentence embeddings that achieve a new state-of-the-art for the SentEval toolkit. Ablation Study We have demonstrated strong empirical results for the quality of SBERT sentence embeddings. 
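For reference, here is a minimal sketch of the SentEval-style protocol described above: a logistic regression classifier fitted on frozen sentence embeddings in 10-fold cross-validation. scikit-learn is used as a stand-in for the actual SentEval toolkit, and the random features and labels are placeholders.

```python
# Sketch of a SentEval-style evaluation: logistic regression on frozen sentence
# embeddings, 10-fold cross-validation. scikit-learn stands in for the real toolkit.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def senteval_style_accuracy(embeddings, labels):
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, embeddings, labels, cv=10, scoring="accuracy")
    return 100 * scores.mean()  # average test-fold accuracy

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))    # placeholder for SBERT embeddings of task sentences
y = rng.integers(0, 2, size=200)   # placeholder for binary task labels (e.g. CR)
print(senteval_style_accuracy(X, y))
```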
In this section, we perform an ablation study of different aspects of SBERT in order to get a better understanding of their relative importance. We evaluate different pooling strategies (MEAN, MAX, and CLS). For the classification objective function, we evaluate different concatenation methods. For each possible configuration, we train SBERT with 10 different random seeds and average the performances. The objective function (classification vs. regression) depends on the annotated dataset. For the classification objective function, we train SBERT-base on the SNLI and the Multi-NLI dataset. For the regression objective function, we train on the training set of the STS benchmark dataset. Performances are measured on the development split of the STS benchmark dataset. Results are shown in Table TABREF23. When trained with the classification objective function on NLI data, the pooling strategy has a rather minor impact. The impact of the concatenation mode is much larger. InferSent BIBREF4 and Universal Sentence Encoder BIBREF5 both use $(u, v, |u-v|, u*v)$ as input for a softmax classifier. However, in our architecture, adding the element-wise $u*v$ decreased the performance. The most important component is the element-wise difference $|u-v|$. Note that the concatenation mode is only relevant for training the softmax classifier. At inference, when predicting similarities for the STS benchmark dataset, only the sentence embeddings $u$ and $v$ are used in combination with cosine-similarity. The element-wise difference measures the distance between the dimensions of the two sentence embeddings, ensuring that similar pairs are closer and dissimilar pairs are further apart. When trained with the regression objective function, we observe that the pooling strategy has a large impact. There, the MAX strategy performs significantly worse than the MEAN or CLS-token strategy. This is in contrast to BIBREF4, who found it beneficial for the BiLSTM-layer of InferSent to use MAX instead of MEAN pooling. Computational Efficiency Sentence embeddings potentially need to be computed for millions of sentences; hence, a high computation speed is desirable. In this section, we compare SBERT to average GloVe embeddings, InferSent BIBREF4, and Universal Sentence Encoder BIBREF5. For our comparison, we use the sentences from the STS benchmark BIBREF10. We compute average GloVe embeddings using a simple for-loop with Python dictionary lookups and NumPy. InferSent is based on PyTorch. For Universal Sentence Encoder, we use the TensorFlow Hub version, which is based on TensorFlow. SBERT is based on PyTorch. For improved computation of sentence embeddings, we implemented a smart batching strategy: sentences with similar lengths are grouped together and are only padded to the longest element in a mini-batch. This drastically reduces the computational overhead from padding tokens. Performances were measured on a server with an Intel i7-5820K CPU @ 3.30GHz, an Nvidia Tesla V100 GPU, CUDA 9.2 and cuDNN. The results are depicted in Table TABREF26. On CPU, InferSent is about 65% faster than SBERT. This is due to the much simpler network architecture. InferSent uses a single BiLSTM layer, while BERT uses 12 stacked transformer layers. However, an advantage of transformer networks is their computational efficiency on GPUs. There, SBERT with smart batching is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. Smart batching achieves a speed-up of 89% on CPU and 48% on GPU. 
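A minimal sketch of the smart batching idea just described: sort sentences by tokenized length, build mini-batches of neighbours, and pad each batch only to its own longest sequence. The tokenizer interface is a Hugging Face-style assumption, not the paper's implementation.

```python
# Sketch of smart batching: sort by tokenized length, batch neighbours, and pad each
# mini-batch only to its own longest sequence. Tokenizer interface is HF-style.
def smart_batches(sentences, tokenizer, batch_size=32):
    lengths = [len(tokenizer.tokenize(s)) for s in sentences]
    order = sorted(range(len(sentences)), key=lambda i: lengths[i])
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        batch = [sentences[i] for i in idx]
        encoded = tokenizer(batch, padding="longest", truncation=True, return_tensors="pt")
        yield idx, encoded  # the indices let the caller restore the original sentence order
```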
Average GloVe embeddings is obviously by a large margin the fastest method to compute sentence embeddings. Conclusion We showed that BERT out-of-the-box maps sentences to a vector space that is rather unsuitable to be used with common similarity measures like cosine-similarity. The performance for seven STS tasks was below the performance of average GloVe embeddings. To overcome this shortcoming, we presented Sentence-BERT (SBERT). SBERT fine-tunes BERT in a siamese / triplet network architecture. We evaluated the quality on various common benchmarks, where it could achieve a significant improvement over state-of-the-art sentence embeddings methods. Replacing BERT with RoBERTa did not yield a significant improvement in our experiments. SBERT is computationally efficient. On a GPU, it is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. SBERT can be used for tasks which are computationally not feasible to be modeled with BERT. For example, clustering of 10,000 sentences with hierarchical clustering requires with BERT about 65 hours, as around 50 Million sentence combinations must be computed. With SBERT, we were able to reduce the effort to about 5 seconds. Acknowledgments This work has been supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1). It has been co-funded by the German Federal Ministry of Education and Research (BMBF) under the promotional references 03VP02540 (ArgumenText).
[ "GloVe, BERT, Universal Sentence Encoder, TF-IDF, InferSent", "Avg. GloVe embeddings, Avg. fast-text embeddings, Avg. BERT embeddings, BERT CLS-vector, InferSent - GloVe and Universal Sentence Encoder." ]
3,862
qasper_e
en
null
a3a74b9eba905def46e1b99988f18ad74a451de8a53b1b15
What transfer learning tasks are evaluated?
Introduction In this publication, we present Sentence-BERT (SBERT), a modification of the BERT network using siamese and triplet networks that is able to derive semantically meaningful sentence embeddings. This enables BERT to be used for certain new tasks that, up to now, were not feasible with BERT. These tasks include large-scale semantic similarity comparison, clustering, and information retrieval via semantic search. BERT set new state-of-the-art performance on various sentence classification and sentence-pair regression tasks. BERT uses a cross-encoder: two sentences are passed to the transformer network and the target value is predicted. However, this setup is unsuitable for various pair regression tasks due to too many possible combinations. Finding the pair with the highest similarity in a collection of $n=10\,000$ sentences requires $n\cdot (n-1)/2=49\,995\,000$ inference computations with BERT. On a modern V100 GPU, this requires about 65 hours. Similarly, finding which of the over 40 million existing Quora questions is the most similar to a new question could be modeled as a pair-wise comparison with BERT; however, answering a single query would require over 50 hours. A common method to address clustering and semantic search is to map each sentence to a vector space such that semantically similar sentences are close. Researchers have started to input individual sentences into BERT and to derive fixed-size sentence embeddings. The most commonly used approaches are to average the BERT output layer (known as BERT embeddings) or to use the output of the first token (the [CLS] token). As we will show, this common practice yields rather bad sentence embeddings, often worse than averaging GloVe embeddings BIBREF2. To alleviate this issue, we developed SBERT. The siamese network architecture makes it possible to derive fixed-size vectors for input sentences. Using a similarity measure like cosine-similarity or Manhattan / Euclidean distance, semantically similar sentences can be found. These similarity measures can be computed extremely efficiently on modern hardware, allowing SBERT to be used for semantic similarity search as well as for clustering. The complexity of finding the most similar sentence pair in a collection of 10,000 sentences is reduced from 65 hours with BERT to the computation of 10,000 sentence embeddings (5 seconds with SBERT) and computing cosine-similarity (0.01 seconds). By using optimized index structures, finding the most similar Quora question can be reduced from 50 hours to a few milliseconds BIBREF3. We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent BIBREF4 and Universal Sentence Encoder BIBREF5. On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder. On SentEval BIBREF6, an evaluation toolkit for sentence embeddings, we achieve an improvement of 2.1 and 2.6 points, respectively. SBERT can be adapted to a specific task. It sets new state-of-the-art performance on a challenging argument similarity dataset BIBREF7 and on a triplet dataset to distinguish sentences from different sections of a Wikipedia article BIBREF8. The paper is structured in the following way: Section SECREF3 presents SBERT, section SECREF4 evaluates SBERT on common STS tasks and on the challenging Argument Facet Similarity (AFS) corpus BIBREF7. 
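A minimal sketch of why this reduction matters in practice: once every sentence has a fixed-size embedding, the most similar pair can be found with a single matrix product over L2-normalized vectors instead of roughly $n^2/2$ cross-encoder inference passes. The random embeddings below are placeholders for real SBERT output.

```python
# Sketch: with fixed-size embeddings, the most similar pair is one matrix product over
# L2-normalized vectors instead of ~n^2/2 cross-encoder inference passes.
import numpy as np

def most_similar_pair(embeddings):
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T                      # all pair-wise cosine similarities at once
    np.fill_diagonal(sim, -np.inf)     # ignore self-similarity
    i, j = np.unravel_index(np.argmax(sim), sim.shape)
    return int(i), int(j), float(sim[i, j])

emb = np.random.default_rng(0).normal(size=(1_000, 768))  # placeholder for SBERT output
print(most_similar_pair(emb))
```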
Section SECREF5 evaluates SBERT on SentEval. In section SECREF6, we perform an ablation study to test some design aspect of SBERT. In section SECREF7, we compare the computational efficiency of SBERT sentence embeddings in contrast to other state-of-the-art sentence embedding methods. Related Work We first introduce BERT, then, we discuss state-of-the-art sentence embedding methods. BERT BIBREF0 is a pre-trained transformer network BIBREF9, which set for various NLP tasks new state-of-the-art results, including question answering, sentence classification, and sentence-pair regression. The input for BERT for sentence-pair regression consists of the two sentences, separated by a special [SEP] token. Multi-head attention over 12 (base-model) or 24 layers (large-model) is applied and the output is passed to a simple regression function to derive the final label. Using this setup, BERT set a new state-of-the-art performance on the Semantic Textual Semilarity (STS) benchmark BIBREF10. RoBERTa BIBREF1 showed, that the performance of BERT can further improved by small adaptations to the pre-training process. We also tested XLNet BIBREF11, but it led in general to worse results than BERT. A large disadvantage of the BERT network structure is that no independent sentence embeddings are computed, which makes it difficult to derive sentence embeddings from BERT. To bypass this limitations, researchers passed single sentences through BERT and then derive a fixed sized vector by either averaging the outputs (similar to average word embeddings) or by using the output of the special CLS token (for example: bertsentenceembeddings1,bertsentenceembeddings2,bertsentenceembeddings3). These two options are also provided by the popular bert-as-a-service-repository. Up to our knowledge, there is so far no evaluation if these methods lead to useful sentence embeddings. Sentence embeddings are a well studied area with dozens of proposed methods. Skip-Thought BIBREF12 trains an encoder-decoder architecture to predict the surrounding sentences. InferSent BIBREF4 uses labeled data of the Stanford Natural Language Inference dataset BIBREF13 and the Multi-Genre NLI dataset BIBREF14 to train a siamese BiLSTM network with max-pooling over the output. Conneau et al. showed, that InferSent consistently outperforms unsupervised methods like SkipThought. Universal Sentence Encoder BIBREF5 trains a transformer network and augments unsupervised learning with training on SNLI. hill-etal-2016-learning showed, that the task on which sentence embeddings are trained significantly impacts their quality. Previous work BIBREF4, BIBREF5 found that the SNLI datasets are suitable for training sentence embeddings. yang-2018-learning presented a method to train on conversations from Reddit using siamese DAN and siamese transformer networks, which yielded good results on the STS benchmark dataset. polyencoders addresses the run-time overhead of the cross-encoder from BERT and present a method (poly-encoders) to compute a score between $m$ context vectors and pre-computed candidate embeddings using attention. This idea works for finding the highest scoring sentence in a larger collection. However, poly-encoders have the drawback that the score function is not symmetric and the computational overhead is too large for use-cases like clustering, which would require $O(n^2)$ score computations. Previous neural sentence embedding methods started the training from a random initialization. 
In this publication, we use the pre-trained BERT and RoBERTa network and only fine-tune it to yield useful sentence embeddings. This reduces significantly the needed training time: SBERT can be tuned in less than 20 minutes, while yielding better results than comparable sentence embedding methods. Model SBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN. In order to fine-tune BERT / RoBERTa, we create siamese and triplet networks BIBREF15 to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity. The network structure depends on the available training data. We experiment with the following structures and objective functions. Classification Objective Function. We concatenate the sentence embeddings $u$ and $v$ with the element-wise difference $|u-v|$ and multiply it with the trainable weight $W_t \in \mathbb {R}^{3n \times k}$: where $n$ is the dimension of the sentence embeddings and $k$ the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure FIGREF4. Regression Objective Function. The cosine-similarity between the two sentence embeddings $u$ and $v$ is computed (Figure FIGREF5). We use mean-squared-error loss as the objective function. Triplet Objective Function. Given an anchor sentence $a$, a positive sentence $p$, and a negative sentence $n$, triplet loss tunes the network such that the distance between $a$ and $p$ is smaller than the distance between $a$ and $n$. Mathematically, we minimize the following loss function: with $s_x$ the sentence embedding for $a$/$n$/$p$, $||\cdot ||$ a distance metric and margin $\epsilon $. Margin $\epsilon $ ensures that $s_p$ is at least $\epsilon $ closer to $s_a$ than $s_n$. As metric we use Euclidean distance and we set $\epsilon =1$ in our experiments. Model ::: Training Details We train SBERT on the combination of the SNLI BIBREF13 and the Multi-Genre NLI BIBREF14 dataset. The SNLI is a collection of 570,000 sentence pairs annotated with the labels contradiction, eintailment, and neutral. MultiNLI contains 430,000 sentence pairs and covers a range of genres of spoken and written text. We fine-tune SBERT with a 3-way softmax-classifier objective function for one epoch. We used a batch-size of 16, Adam optimizer with learning rate $2\mathrm {e}{-5}$, and a linear learning rate warm-up over 10% of the training data. Our default pooling strategy is MEAN. Evaluation - Semantic Textual Similarity We evaluate the performance of SBERT for common Semantic Textual Similarity (STS) tasks. State-of-the-art methods often learn a (complex) regression function that maps sentence embeddings to a similarity score. However, these regression functions work pair-wise and due to the combinatorial explosion those are often not scalable if the collection of sentences reaches a certain size. Instead, we always use cosine-similarity to compare the similarity between two sentence embeddings. We ran our experiments also with negative Manhatten and negative Euclidean distances as similarity measures, but the results for all approaches remained roughly the same. 
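A minimal sketch of the triplet objective described in the Model section above, using Euclidean distance and margin $\epsilon = 1$; the tensor shapes are illustrative.

```python
# Sketch of the triplet objective: Euclidean distance, margin epsilon = 1.
import torch

def triplet_loss(s_a, s_p, s_n, margin=1.0):
    d_pos = torch.norm(s_a - s_p, p=2, dim=1)   # ||s_a - s_p||
    d_neg = torch.norm(s_a - s_n, p=2, dim=1)   # ||s_a - s_n||
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

# The built-in torch.nn.TripletMarginLoss(margin=1.0, p=2) computes the same quantity.
```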
Evaluation - Semantic Textual Similarity ::: Unsupervised STS We evaluate the performance of SBERT for STS without using any STS specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent, the similarity is computed by cosine-similarity. The results are depicted in Table TABREF6. The results shows that directly using the output of BERT leads to rather poor performances. Averaging the BERT embeddings achieves an average correlation of only 54.81, and using the CLS-token output only achieves an average correlation of 29.19. Both are worse than computing average GloVe embeddings. Using the described siamese network structure and fine-tuning mechanism substantially improves the correlation, outperforming both InferSent and Universal Sentence Encoder substantially. The only dataset where SBERT performs worse than Universal Sentence Encoder is SICK-R. Universal Sentence Encoder was trained on various datasets, including news, question-answer pages and discussion forums, which appears to be more suitable to the data of SICK-R. In contrast, SBERT was pre-trained only on Wikipedia (via BERT) and on NLI data. While RoBERTa was able to improve the performance for several supervised tasks, we only observe minor difference between SBERT and SRoBERTa for generating sentence embeddings. Evaluation - Semantic Textual Similarity ::: Supervised STS The STS benchmark (STSb) BIBREF10 provides is a popular dataset to evaluate supervised STS systems. The data includes 8,628 sentence pairs from the three categories captions, news, and forums. It is divided into train (5,749), dev (1,500) and test (1,379). BERT set a new state-of-the-art performance on this dataset by passing both sentences to the network and using a simple regression method for the output. We use the training set to fine-tune SBERT using the regression objective function. At prediction time, we compute the cosine-similarity between the sentence embeddings. All systems are trained with 10 random seeds to counter variances BIBREF23. The results are depicted in Table TABREF10. We experimented with two setups: Only training on STSb, and first training on NLI, then training on STSb. We observe that the later strategy leads to a slight improvement of 1-2 points. This two-step approach had an especially large impact for the BERT cross-encoder, which improved the performance by 3-4 points. We do not observe a significant difference between BERT and RoBERTa. Evaluation - Semantic Textual Similarity ::: Argument Facet Similarity We evaluate SBERT on the Argument Facet Similarity (AFS) corpus by MisraEW16. The AFS corpus annotated 6,000 sentential argument pairs from social media dialogs on three controversial topics: gun control, gay marriage, and death penalty. The data was annotated on a scale from 0 (“different topic") to 5 (“completely equivalent"). The similarity notion in the AFS corpus is fairly different to the similarity notion in the STS datasets from SemEval. STS data is usually descriptive, while AFS data are argumentative excerpts from dialogs. 
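A minimal sketch of the regression objective used for the supervised STSb setup described above: the cosine-similarity of the two sentence embeddings is trained against the gold score with mean-squared-error loss. Rescaling the 0-5 gold labels to the cosine range is an assumption of this sketch.

```python
# Sketch of the regression objective: cosine similarity trained against the gold score
# with MSE loss. Rescaling the 0-5 STSb labels to [0, 1] is an assumption.
import torch
import torch.nn.functional as F

def regression_loss(u, v, gold_scores):
    cos = F.cosine_similarity(u, v, dim=1)   # predicted similarity per sentence pair
    target = gold_scores.float() / 5.0       # assumed rescaling of the gold labels
    return F.mse_loss(cos, target)
```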
To be considered similar, arguments must not only make similar claims, but also provide a similar reasoning. Further, the lexical gap between the sentences in AFS is much larger. Hence, simple unsupervised methods as well as state-of-the-art STS systems perform badly on this dataset BIBREF24. We evaluate SBERT on this dataset in two scenarios: 1) As proposed by Misra et al., we evaluate SBERT using 10-fold cross-validation. A draw-back of this evaluation setup is that it is not clear how well approaches generalize to different topics. Hence, 2) we evaluate SBERT in a cross-topic setup. Two topics serve for training and the approach is evaluated on the left-out topic. We repeat this for all three topics and average the results. SBERT is fine-tuned using the Regression Objective Function. The similarity score is computed using cosine-similarity based on the sentence embeddings. We also provide the Pearson correlation $r$ to make the results comparable to Misra et al. However, we showed BIBREF22 that Pearson correlation has some serious drawbacks and should be avoided for comparing STS systems. The results are depicted in Table TABREF12. Unsupervised methods like tf-idf, average GloVe embeddings or InferSent perform rather badly on this dataset with low scores. Training SBERT in the 10-fold cross-validation setup gives a performance that is nearly on-par with BERT. However, in the cross-topic evaluation, we observe a performance drop of SBERT by about 7 points Spearman correlation. To be considered similar, arguments should address the same claims and provide the same reasoning. BERT is able to use attention to compare directly both sentences (e.g. word-by-word comparison), while SBERT must map individual sentences from an unseen topic to a vector space such that arguments with similar claims and reasons are close. This is a much more challenging task, which appears to require more than just two topics for training to work on-par with BERT. Evaluation - Semantic Textual Similarity ::: Wikipedia Sections Distinction ein-dor-etal-2018-learning use Wikipedia to create a thematically fine-grained train, dev and test set for sentence embeddings methods. Wikipedia articles are separated into distinct sections focusing on certain aspects. Dor et al. assume that sentences in the same section are thematically closer than sentences in different sections. They use this to create a large dataset of weakly labeled sentence triplets: The anchor and the positive example come from the same section, while the negative example comes from a different section of the same article. For example, from the Alice Arnold article: Anchor: Arnold joined the BBC Radio Drama Company in 1988., positive: Arnold gained media attention in May 2012., negative: Balding and Arnold are keen amateur golfers. We use the dataset from Dor et al. We use the Triplet Objective, train SBERT for one epoch on the about 1.8 Million training triplets and evaluate it on the 222,957 test triplets. Test triplets are from a distinct set of Wikipedia articles. As evaluation metric, we use accuracy: Is the positive example closer to the anchor than the negative example? Results are presented in Table TABREF14. Dor et al. fine-tuned a BiLSTM architecture with triplet loss to derive sentence embeddings for this dataset. As the table shows, SBERT clearly outperforms the BiLSTM approach by Dor et al. Evaluation - SentEval SentEval BIBREF6 is a popular toolkit to evaluate the quality of sentence embeddings. 
Sentence embeddings are used as features for a logistic regression classifier. The logistic regression classifier is trained on various tasks in a 10-fold cross-validation setup and the prediction accuracy is computed for the test-fold. The purpose of SBERT sentence embeddings are not to be used for transfer learning for other tasks. Here, we think fine-tuning BERT as described by devlin2018bert for new tasks is the more suitable method, as it updates all layers of the BERT network. However, SentEval can still give an impression on the quality of our sentence embeddings for various tasks. We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks: MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25. CR: Sentiment prediction of customer product reviews BIBREF26. SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27. MPQA: Phrase level opinion polarity classification from newswire BIBREF28. SST: Stanford Sentiment Treebank with binary labels BIBREF29. TREC: Fine grained question-type classification from TREC BIBREF30. MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31. The results can be found in Table TABREF15. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embeddings methods on this task. It appears that the sentence embeddings from SBERT capture well sentiment information: We observe large improvements for all sentiment tasks (MR, CR, and SST) from SentEval in comparison to InferSent and Universal Sentence Encoder. The only dataset where SBERT is significantly worse than Universal Sentence Encoder is the TREC dataset. Universal Sentence Encoder was pre-trained on question-answering data, which appears to be beneficial for the question-type classification task of the TREC dataset. Average BERT embeddings or using the CLS-token output from a BERT network achieved bad results for various STS tasks (Table TABREF6), worse than average GloVe embeddings. However, for SentEval, average BERT embeddings and the BERT CLS-token output achieves decent results (Table TABREF15), outperforming average GloVe embeddings. The reason for this are the different setups. For the STS tasks, we used cosine-similarity to estimate the similarities between sentence embeddings. Cosine-similarity treats all dimensions equally. In contrast, SentEval fits a logistic regression classifier to the sentence embeddings. This allows that certain dimensions can have higher or lower impact on the classification result. We conclude that average BERT embeddings / CLS-token output from BERT return sentence embeddings that are infeasible to be used with cosine-similarity or with Manhatten / Euclidean distance. For transfer learning, they yield slightly worse results than InferSent or Universal Sentence Encoder. However, using the described fine-tuning setup with a siamese network structure on NLI datasets yields sentence embeddings that achieve a new state-of-the-art for the SentEval toolkit. Ablation Study We have demonstrated strong empirical results for the quality of SBERT sentence embeddings. 
In this section, we perform an ablation study of different aspects of SBERT in order to get a better understanding of their relative importance. We evaluated different pooling strategies (MEAN, MAX, and CLS). For the classification objective function, we evaluate different concatenation methods. For each possible configuration, we train SBERT with 10 different random seeds and average the performances. The objective function (classification vs. regression) depends on the annotated dataset. For the classification objective function, we train SBERT-base on the SNLI and the Multi-NLI dataset. For the regression objective function, we train on the training set of the STS benchmark dataset. Performances are measured on the development split of the STS benchmark dataset. Results are shown in Table TABREF23. When trained with the classification objective function on NLI data, the pooling strategy has a rather minor impact. The impact of the concatenation mode is much larger. InferSent BIBREF4 and Universal Sentence Encoder BIBREF5 both use $(u, v, |u-v|, u*v)$ as input for a softmax classifier. However, in our architecture, adding the element-wise $u*v$ decreased the performance. The most important component is the element-wise difference $|u-v|$. Note, that the concatenation mode is only relevant for training the softmax classifier. At inference, when predicting similarities for the STS benchmark dataset, only the sentence embeddings $u$ and $v$ are used in combination with cosine-similarity. The element-wise difference measures the distance between the dimensions of the two sentence embeddings, ensuring that similar pairs are closer and dissimilar pairs are further apart. When trained with the regression objective function, we observe that the pooling strategy has a large impact. There, the MAX strategy perform significantly worse than MEAN or CLS-token strategy. This is in contrast to BIBREF4, who found it beneficial for the BiLSTM-layer of InferSent to use MAX instead of MEAN pooling. Computational Efficiency Sentence embeddings need potentially be computed for Millions of sentences, hence, a high computation speed is desired. In this section, we compare SBERT to average GloVe embeddings, InferSent BIBREF4, and Universal Sentence Encoder BIBREF5. For our comparison we use the sentences from the STS benchmark BIBREF10. We compute average GloVe embeddings using a simple for-loop with python dictionary lookups and NumPy. InferSent is based on PyTorch. For Universal Sentence Encoder, we use the TensorFlow Hub version, which is based on TensorFlow. SBERT is based on PyTorch. For improved computation of sentence embeddings, we implemented a smart batching strategy: Sentences with similar lengths are grouped together and are only padded to the longest element in a mini-batch. This drastically reduces computational overhead from padding tokens. Performances were measured on a server with Intel i7-5820K CPU @ 3.30GHz, Nvidia Tesla V100 GPU, CUDA 9.2 and cuDNN. The results are depicted in Table TABREF26. On CPU, InferSent is about 65% faster than SBERT. This is due to the much simpler network architecture. InferSent uses a single BiLSTM layer, while BERT uses 12 stacked transformer layers. However, an advantage of transformer networks is the computational efficiency on GPUs. There, SBERT with smart batching is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. Smart batching achieves a speed-up of 89% on CPU and 48% on GPU. 
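Looking back at the concatenation ablation above, here is a minimal sketch of the feature combinations compared as input to the softmax classifier; the mode strings are illustrative labels, not the paper's notation.

```python
# Sketch of the concatenation modes compared in the ablation; the mode strings are
# illustrative labels only.
import torch

def concat_features(u, v, mode="u,v,|u-v|"):
    parts = {
        "u,v": [u, v],
        "u,v,|u-v|": [u, v, torch.abs(u - v)],
        "u,v,u*v": [u, v, u * v],
        "u,v,|u-v|,u*v": [u, v, torch.abs(u - v), u * v],
    }
    return torch.cat(parts[mode], dim=-1)  # the softmax classifier's input width must match
```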
Average GloVe embeddings is obviously by a large margin the fastest method to compute sentence embeddings. Conclusion We showed that BERT out-of-the-box maps sentences to a vector space that is rather unsuitable to be used with common similarity measures like cosine-similarity. The performance for seven STS tasks was below the performance of average GloVe embeddings. To overcome this shortcoming, we presented Sentence-BERT (SBERT). SBERT fine-tunes BERT in a siamese / triplet network architecture. We evaluated the quality on various common benchmarks, where it could achieve a significant improvement over state-of-the-art sentence embeddings methods. Replacing BERT with RoBERTa did not yield a significant improvement in our experiments. SBERT is computationally efficient. On a GPU, it is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. SBERT can be used for tasks which are computationally not feasible to be modeled with BERT. For example, clustering of 10,000 sentences with hierarchical clustering requires with BERT about 65 hours, as around 50 Million sentence combinations must be computed. With SBERT, we were able to reduce the effort to about 5 seconds. Acknowledgments This work has been supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1). It has been co-funded by the German Federal Ministry of Education and Research (BMBF) under the promotional references 03VP02540 (ArgumenText).
[ "MR, CR, SUBJ, MPQA, SST, TREC, MRPC", "MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31.", "Semantic Textual Similarity, sentiment prediction, subjectivity prediction, phrase level opinion polarity classification, Stanford Sentiment Treebank, fine grained question-type classification." ]
3,861
qasper_e
en
null
214680ec183fe8bec3a65633d71c3690f5c3b35e5e8e41a6
how large is the vocabulary?
Introduction When people shop for books online in e-book stores such as, e.g., the Amazon Kindle store, they enter search terms with the goal to find e-books that meet their preferences. Such e-books have a variety of metadata such as, e.g., title, author or keywords, which can be used to retrieve e-books that are relevant to the query. As a consequence, from the perspective of e-book publishers and editors, annotating e-books with tags that best describe the content and which meet the vocabulary of users (e.g., when searching and reviewing e-books) is an essential task BIBREF0 . Problem and aim of this work. Annotating e-books with suitable tags is, however, a complex task as users' vocabulary may differ from the one of editors. Such a vocabulary mismatch yet hinders effective organization and retrieval BIBREF1 of e-books. For example, while editors mostly annotate e-books with descriptive tags that reflect the book's content, Amazon users often search for parts of the book title. In the data we use for the present study (see Section SECREF2 ), we find that around 30% of the Amazon search terms contain parts of e-book titles. In this paper, we present our work to support editors in the e-book annotation process with tag recommendations BIBREF2 , BIBREF3 . Our idea is to exploit user-generated search query terms in Amazon to mimic the vocabulary of users in Amazon, who search for e-books. We combine these search terms with tags assigned by editors in a hybrid tag recommendation approach. Thus, our aim is to show that we can improve the performance of tag recommender systems for e-books both concerning recommendation accuracy as well as semantic similarity and tag recommendation diversity. Related work. In tag recommender systems, mostly content-based algorithms (e.g., BIBREF4 , BIBREF5 ) are used to recommend tags to annotate resources such as e-books. In our work, we incorporate both content features of e-books (i.e., title and description text) as well as Amazon search terms to account for the vocabulary of e-book readers. Concerning the evaluation of tag recommendation systems, most studies focus on measuring the accuracy of tag recommendations (e.g., BIBREF2 ). However, the authors of BIBREF6 suggest also to use beyond-accuracy metrics such as diversity to evaluate the quality of tag recommendations. In our work, we measure recommendation diversity in addition to recommendation accuracy and propose a novel metric termed semantic similarity to validate semantic matches of tag recommendations. Approach and findings. We exploit editor tags and user-generated search terms as input for tag recommendation approaches. Our evaluation comprises of a rich set of 19 different algorithms to recommend tags for e-books, which we group into (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches. We evaluate our approaches in terms of accuracy, semantic similarity and diversity on the review content of Amazon users, which reflects the readers' vocabulary. With semantic similarity, we measure how semantically similar (based on learned Doc2Vec BIBREF7 embeddings) the list of recommended tags is to the list of relevant tags. We use this additional metric to measure not only exact “hits” of our recommendations but also semantic matches. Our evaluation results show that combining both data sources enhances the quality of tag recommendations for annotating e-books. 
Furthermore, approaches that solely train on Amazon search terms provide poor performance in terms of accuracy but deliver good results in terms of semantic similarity and recommendation diversity. Method In this section, we describe our dataset as well as our tag recommendation approaches we propose to annotate e-books. Dataset Our dataset contains two sources of data, one to generate tag recommendations and another one to evaluate tag recommendations. HGV GmbH has collected all data sources and we provide the dataset statistics in Table TABREF3 . Data used to generate recommendations. We employ two sources of e-book annotation data: (i) editor tags, and (ii) Amazon search terms. For editor tags, we collect data of 48,705 e-books from 13 publishers, namely Kunstmann, Delius-Klasnig, VUR, HJR, Diogenes, Campus, Kiwi, Beltz, Chbeck, Rowohlt, Droemer, Fischer and Neopubli. Apart from the editor tags, this data contains metadata fields of e-books such as the ISBN, the title, a description text, the author and a list of BISACs, which are identifiers for book categories. For the Amazon search terms, we collect search query logs of 21,243 e-books for 12 months (i.e., November 2017 to October 2018). Apart from the search terms, this data contains the e-books' ISBNs, titles and description texts. Table TABREF3 shows that the overlap of e-books that have editor tags and Amazon search terms is small (i.e., only 497). Furthermore, author and BISAC (i.e., the book category identifier) information are primarily available for e-books that contain editor tags. Consequently, both data sources provide complementary information, which underpins the intention of this work, i.e., to evaluate tag recommendation approaches using annotation sources from different contexts. Data used to evaluate recommendations. For evaluation, we use a third set of e-book annotations, namely Amazon review keywords. These review keywords are extracted from the Amazon review texts and are typically provided in the review section of books on Amazon. Our idea is to not favor one or the other data source (i.e., editor tags and Amazon search terms) when evaluating our approaches against expected tags. At the same time, we consider Amazon review keywords to be a good mixture of editor tags and search terms as they describe both the content and the users' opinions on the e-books (i.e., the readers' vocabulary). As shown in Table TABREF3 , we collect Amazon review keywords for 2,896 e-books (publishers: Kiwi, Rowohlt, Fischer, and Droemer), which leads to 33,663 distinct review keywords and on average 30 keyword assignments per e-book. Tag Recommendation Approaches We implement three types of tag recommendation approaches, i.e., (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches. Due to the lack of personalized tags (i.e., we do not know which user has assigned a tag), we do not implement other types of algorithms such as collaborative filtering BIBREF8 . In total, we evaluate 19 different algorithms to recommend tags for annotating e-books. Popularity-based approaches. We recommend the most frequently used tags in the dataset, which is a common strategy for tag recommendations BIBREF9 . That is, a most popular INLINEFORM0 approach for editor tags and a most popular INLINEFORM1 approach for Amazon search terms. 
For e-books, for which we also have author (= INLINEFORM2 and INLINEFORM3 ) or BISAC (= INLINEFORM4 and INLINEFORM5 ) information, we use these features to further filter the recommended tags, i.e., to only recommend tags that were used to annotate e-books of a specific author or a specific BISAC. We combine both data sources (i.e., editor tags and Amazon search terms) using a round-robin combination strategy, which ensures an equal weight for both sources. This gives us three additional popularity-based algorithms (= INLINEFORM0 , INLINEFORM1 and INLINEFORM2 ). Similarity-based approaches. We exploit the textual content of e-books (i.e., description or title) to recommend relevant tags BIBREF10 . For this, we first employ a content-based filtering approach BIBREF11 based on TF-IDF BIBREF12 to find top- INLINEFORM0 similar e-books. For each of the similar e-books, we then either extract the assigned editor tags (= INLINEFORM2 and INLINEFORM3 ) or the Amazon search terms (= INLINEFORM4 and INLINEFORM5 ). To combine the tags of the top- INLINEFORM6 similar e-books, we use the cross-source algorithm BIBREF13 , which favors tags that were used to annotate more than one similar e-book (i.e., tags that come from multiple recommendation sources). The final tag relevancy is calculated as: DISPLAYFORM0 where INLINEFORM0 denotes the number of distinct e-books, which yielded the recommendation of tag INLINEFORM1 , to favor tags that come from multiple sources and INLINEFORM2 is the similarity score of the corresponding e-book. We again use a round-robin strategy to combine both data sources (= INLINEFORM3 and INLINEFORM4 ). Hybrid approaches. We use the previously mentioned cross-source algorithm BIBREF13 to construct four hybrid recommendation approaches. In this case, tags are favored that are recommended by more than one algorithm. Hence, to create a popularity-based hybrid (= INLINEFORM0 ), we combine the best three performing popularity-based approaches from the ones (i) without any contextual signal, (ii) with the author as context, and (iii) with BISAC as context. In the case of the similarity-based hybrid (= INLINEFORM1 ), we utilize the two best performing similarity-based approaches from the ones (i) which use the title, and (ii) which use the description text. We further define INLINEFORM2 , a hybrid approach that combines the three popularity-based methods of INLINEFORM3 and the two similarity-based approaches of INLINEFORM4 . Finally, we define INLINEFORM5 as a hybrid approach that uses the best performing popularity-based and the best performing similarity-based approach (see Figure FIGREF11 in Section SECREF4 for more details about the particular algorithm combinations). Experimental Setup In this section, we describe our evaluation protocol as well as the measures we use to evaluate and compare our tag recommendation approaches. Evaluation Protocol For evaluation, we use the third set of e-book annotations, namely Amazon review keywords. As described in Section SECREF1 , these review keywords are extracted from the Amazon review texts and thus, reflect the users' vocabulary. We evaluate our approaches for the 2,896 e-books, for whom we got review keywords. To follow common practice for tag recommendation evaluation BIBREF14 , we predict the assigned review keywords (= our test set) for respective e-books. 
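A minimal sketch of the similarity-based, cross-source recommender described above: TF-IDF over the e-book description texts, the top-k most similar e-books, and tag scores that favour tags supported by several similar books. Because the exact relevancy formula is not legible in this extract, the scoring used below (number of supporting e-books times their summed similarity) is an assumption.

```python
# Sketch of a cross-source, similarity-based tag recommender: TF-IDF over description
# texts, top-k similar e-books, scores favouring tags backed by several of them.
# The exact weighting is an assumption (supporting-book count times summed similarity).
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_tags(query_text, corpus_texts, corpus_tags, top_k=5, n_tags=10):
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus_texts)
    q = vectorizer.transform([query_text])
    sims = cosine_similarity(q, X).ravel()
    neighbours = sims.argsort()[::-1][:top_k]        # top-k most similar e-books
    support, sim_sum = defaultdict(int), defaultdict(float)
    for idx in neighbours:
        for tag in set(corpus_tags[idx]):
            support[tag] += 1                        # number of distinct source e-books
            sim_sum[tag] += sims[idx]                # accumulated similarity of those e-books
    scores = {t: support[t] * sim_sum[t] for t in support}
    return sorted(scores, key=scores.get, reverse=True)[:n_tags]
```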
Evaluation Metrics In this work, we measure (i) recommendation accuracy, (ii) semantic similarity, and (iii) recommendation diversity to evaluate the quality of our approaches from different perspectives. Recommendation accuracy. We use Normalized Discounted Cumulative Gain (nDCG) BIBREF15 to measure the accuracy of the tag recommendation approaches. The nDCG measure is a standard ranking-dependent metric that not only measures how many tags can be correctly predicted but also takes into account their position in the recommendation list with length of INLINEFORM0 . It is based on the Discounted Cummulative Gain, which is given by: DISPLAYFORM0 where INLINEFORM0 is a function that returns 1 if the recommended tag at position INLINEFORM1 in the recommended list is relevant. We then calculate DCG@ INLINEFORM2 for every evaluated e-book by dividing DCG@ INLINEFORM3 with the ideal DCG value iDCG@ INLINEFORM4 , which is the highest possible DCG value that can be achieved if all the relevant tags would be recommended in the correct order. It is given by the following formula BIBREF15 : DISPLAYFORM0 Semantic similarity. One precondition of standard recommendation accuracy measures is that to generate a “hit”, the recommended tag needs to be an exact syntactical match to the one from the test set. When tags are recommended from one data source and compared to tags from another source, this can be problematic. For example, if we recommend the tag “victim” but expect the tag “prey”, we would mark this as a mismatch, therefore being a bad recommendation. But if we know that the corresponding e-book is a crime novel, the recommended tag would be (semantically) descriptive to reflect the book's content. Hence, in this paper, we propose to additionally measure the semantic similarity between recommended tags and tags from the test set (i.e., the Amazon review keywords). Over the last four years, there have been several notable publications in the area of applying deep learning to uncover semantic relationships between textual content (e.g., by learning word embeddings with Word2Vec BIBREF16 , BIBREF17 ). Based on this, we propose an alternative measure of recommendation quality by learning the semantic relationships from both vocabularies and then using it to compare how semantically similar the recommended tags are to the expected review keywords. For this, we first extract the textual content in the form of the description text, title, editor tags and Amazon search terms of e-books from our dataset. We then train a Doc2Vec BIBREF7 model on the content. Then, we use the model to infer the latent representation for both the complete list of recommended tags as well as the list of expected tags from the test set. Finally, we use the cosine similarity measure to calculate how semantically similar these two lists are. Recommendation diversity. As defined in BIBREF18 , we calculate recommendation diversity as the average dissimilarity of all pairs of tags in the list of recommended tags. Thus, given a distance function INLINEFORM0 that corresponds to the dissimilarity between two tags INLINEFORM1 and INLINEFORM2 in the list of recommended tags, INLINEFORM3 is given as the average dissimilarity of all pairs of tags: DISPLAYFORM0 where INLINEFORM0 is the number of evaluated e-books and the dissimilarity function is defined as INLINEFORM1 . In our experiments, we use the previously trained Doc2Vec model to extract the latent representation of a specific tag. 
The similarity of two tags INLINEFORM2 is then calculated with the Cosine similarity measure using the latent vector representations of respective tags INLINEFORM3 and INLINEFORM4 . Results Concerning tag recommendation accuracy, in this section, we report results for different values of INLINEFORM0 (i.e., number of recommended tags). For the beyond-accuracy experiment, we use the full list of recommended tags (i.e., INLINEFORM1 ). Recommendation Accuracy Evaluation Figure FIGREF11 shows the results of the accuracy experiment for the (i) popularity-based, (ii) similarity-based, and (iii) hybrid tag recommendation approaches. Popularity-based approaches. In Figure FIGREF11 , we see that popularity-based approaches based on editor tags tend to perform better than if trained on Amazon search terms. If we take into account contextual information like BISAC or author, we can further improve accuracy in terms of INLINEFORM0 . That is, we find that using popular tags from e-books of a specific author leads to the best accuracy of the popularity-based approaches. This suggests that editors and readers do seem to reuse tags for e-books of same authors. If we use both editor tags and Amazon search terms, we can further increase accuracy, especially for higher values of INLINEFORM1 like in the case of INLINEFORM2 . This is, however, not the case for INLINEFORM3 as the accuracy of the integrated INLINEFORM4 approach is low. The reason for this is the limited amount of e-books from within the Amazon search query logs that have BISAC information (i.e., only INLINEFORM5 ). Similarity-based approaches. We further improve accuracy if we first find similar e-books and then extract their top- INLINEFORM0 tags in a cross-source manner as described in Section SECREF4 . As shown in Figure FIGREF11 , using the description text to find similar e-books results in more accurate tag recommendations than using the title (i.e., INLINEFORM0 for INLINEFORM1 ). This is somehow expected as the description text consists of a bigger corpus of words (i.e., multiple sentences) than the title. Concerning the collected Amazon search query logs, extracting and then recommending tags from this source results in a much lower accuracy performance. Thus, these results also suggest to investigate beyond-accuracy metrics as done in Section SECREF17 . Hybrid approaches. Figure FIGREF11 shows the accuracy results of the four hybrid approaches. By combining the best three popularity-based approaches, we outperform all of the initially evaluated popularity algorithms (i.e., INLINEFORM0 for INLINEFORM1 ). On the contrary, the combination of the two best performing similarity-based approaches INLINEFORM2 and INLINEFORM3 does not yield better accuracy. The negative impact of using a lower-performing approach such as INLINEFORM4 within a hybrid combination can also be observed in INLINEFORM5 for lower values of INLINEFORM6 . Overall, this confirms our initial intuition that combining the best performing popularity-based approach with the best similarity-based approach should result in the highest accuracy (i.e., INLINEFORM7 for INLINEFORM8 ). Moreover, our goal, namely to exploit editor tags in combination with search terms used by readers to increase the metadata quality of e-books, is shown to be best supported by applying hybrid approaches as they provide the best prediction results. Beyond-Accuracy Evaluation Figure FIGREF16 illustrates the results of the experiments, which measure the recommendation impact beyond-accuracy. 
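A minimal sketch of the two beyond-accuracy measures defined above, assuming a gensim Doc2Vec model already trained on the e-book texts. Treating a tag list as a single document for inference and using 1 minus cosine as the dissimilarity follow the description; details such as vector normalization are assumptions.

```python
# Sketch of the beyond-accuracy measures: semantic similarity of the recommended vs.
# expected tag lists, and diversity as average pairwise (1 - cosine) dissimilarity.
# A Doc2Vec model trained on the e-book texts is assumed to exist.
from itertools import combinations
import numpy as np
from gensim.models.doc2vec import Doc2Vec

def _cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_similarity(model: Doc2Vec, recommended_tags, relevant_tags):
    u = model.infer_vector(recommended_tags)  # tag list treated as one document
    v = model.infer_vector(relevant_tags)     # expected Amazon review keywords
    return _cos(u, v)

def diversity(model: Doc2Vec, recommended_tags):
    vecs = {t: model.infer_vector([t]) for t in recommended_tags}
    pairs = [1 - _cos(vecs[a], vecs[b]) for a, b in combinations(recommended_tags, 2)]
    return float(np.mean(pairs)) if pairs else 0.0
```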
Semantic similarity. Figure FIGREF16 illustrates the results of our proposed semantic similarity measure. To compare our proposed measure to standard accuracy measures such as INLINEFORM0 , we use Kendall's Tau rank correlation BIBREF19 as suggested by BIBREF20 for automatic evaluation of information-ordering tasks. From that, we rank our recommendation approaches according to both accuracy and semantic similarity and calculate the relation between both rankings. This results in INLINEFORM1 with a p-value < INLINEFORM2 , which suggests a high correlation between the semantic similarity and the standard accuracy measure. Therefore, the semantic similarity measure helps us interpret the recommendation quality. For instance, we achieve the lowest INLINEFORM0 values with the similarity-based approaches that recommend Amazon search terms (i.e., INLINEFORM1 and INLINEFORM2 ). When comparing these results with others from Figure FIGREF11 , a conclusion could be quickly drawn that the recommended tags are merely unusable. However, by looking at Figure FIGREF16 , we see that, although these approaches do not provide the highest recommendation accuracy, they still result in tag recommendations that are semantically related at a high degree to the expected annotations from the test set. Overall, this suggests that approaches, which provide a poor accuracy performance concerning INLINEFORM4 but provide a good performance regarding semantic similarity could still be helpful for annotating e-books. Recommendation diversity. Figure FIGREF16 shows the diversity of the tag recommendation approaches. We achieve the highest diversity with the similarity-based approaches, which extract Amazon search terms. Their accuracy is, however, very low. Thus, the combination of the two vocabularies can provide a good trade-off between recommendation accuracy and diversity. Conclusion and Future Work In this paper, we present our work to support editors in the e-book annotation process. Specifically, we aim to provide tag recommendations that incorporate both the vocabulary of the editors and e-book readers. Therefore, we train various configurations of tag recommender approaches on editors' tags and Amazon search terms and evaluate them on a dataset containing Amazon review keywords. We find that combining both data sources enhances the quality of tag recommendations for annotating e-books. Furthermore, while approaches that train only on Amazon search terms provide poor performance concerning recommendation accuracy, we show that they still offer helpful annotations concerning recommendation diversity as well as our novel semantic similarity metric. Future work. For future work, we plan to validate our findings using another dataset, e.g., by recommending tags for scientific articles and books in BibSonomy. With this, we aim to demonstrate the usefulness of the proposed approach in a similar domain and to enhance the reproducibility of our results by using an open dataset. Moreover, we plan to evaluate our tag recommendation approaches in a study with domain users. Also, we want to improve our similarity-based approaches by integrating novel embedding approaches BIBREF16 , BIBREF17 as we did, for example, with our proposed semantic similarity evaluation metric. Finally, we aim to incorporate explanations for recommended tags so that editors of e-book annotations receive additional support in annotating e-books BIBREF21 . 
By making the underlying (semantic) reasoning visible to the editor who is in charge of tailoring annotations, we aim to support two goals: (i) allowing readers to discover e-books more efficiently, and (ii) enabling publishers to leverage semi-automatic categorization processes for e-books. In turn, providing explanations fosters control over which vocabulary to choose when tagging e-books for different application contexts. Acknowledgments. The authors would like to thank Peter Langs, Jan-Philipp Wolf and Alyona Schraa from HGV GmbH for providing the e-book annotation data. This work was funded by the Know-Center GmbH (FFG COMET Program), the FFG Data Market Austria project and the AI4EU project (EU grant 825619). The Know-Center GmbH is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labor and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency (FFG).
[ "33,663", "33,663 distinct review keywords " ]
3,305
qasper_e
en
null
7b4d3a64f9dd6e23da5f1530bd92ced5c5adf84ce3c1e2ee
What additional features and context are proposed?
Introduction Abusive language refers to any type of insult, vulgarity, or profanity that debases the target; it also can be anything that causes aggravation BIBREF0 , BIBREF1 . Abusive language is often reframed as, but not limited to, offensive language BIBREF2 , cyberbullying BIBREF3 , othering language BIBREF4 , and hate speech BIBREF5 . Recently, an increasing number of users have been subjected to harassment, or have witnessed offensive behaviors online BIBREF6 . Major social media companies (i.e. Facebook, Twitter) have utilized multiple resources—artificial intelligence, human reviewers, user reporting processes, etc.—in effort to censor offensive language, yet it seems nearly impossible to successfully resolve the issue BIBREF7 , BIBREF8 . The major reason of the failure in abusive language detection comes from its subjectivity and context-dependent characteristics BIBREF9 . For instance, a message can be regarded as harmless on its own, but when taking previous threads into account it may be seen as abusive, and vice versa. This aspect makes detecting abusive language extremely laborious even for human annotators; therefore it is difficult to build a large and reliable dataset BIBREF10 . Previously, datasets openly available in abusive language detection research on Twitter ranged from 10K to 35K in size BIBREF9 , BIBREF11 . This quantity is not sufficient to train the significant number of parameters in deep learning models. Due to this reason, these datasets have been mainly studied by traditional machine learning methods. Most recently, Founta et al. founta2018large introduced Hate and Abusive Speech on Twitter, a dataset containing 100K tweets with cross-validated labels. Although this corpus has great potential in training deep models with its significant size, there are no baseline reports to date. This paper investigates the efficacy of different learning models in detecting abusive language. We compare accuracy using the most frequently studied machine learning classifiers as well as recent neural network models. Reliable baseline results are presented with the first comparative study on this dataset. Additionally, we demonstrate the effect of different features and variants, and describe the possibility for further improvements with the use of ensemble models. Related Work The research community introduced various approaches on abusive language detection. Razavi et al. razavi2010offensive applied Naïve Bayes, and Warner and Hirschberg warner2012detecting used Support Vector Machine (SVM), both with word-level features to classify offensive language. Xiang et al. xiang2012detecting generated topic distributions with Latent Dirichlet Allocation BIBREF12 , also using word-level features in order to classify offensive tweets. More recently, distributed word representations and neural network models have been widely applied for abusive language detection. Djuric et al. djuric2015hate used the Continuous Bag Of Words model with paragraph2vec algorithm BIBREF13 to more accurately detect hate speech than that of the plain Bag Of Words models. Badjatiya et al. badjatiya2017deep implemented Gradient Boosted Decision Trees classifiers using word representations trained by deep learning models. Other researchers have investigated character-level representations and their effectiveness compared to word-level representations BIBREF14 , BIBREF15 . As traditional machine learning methods have relied on feature engineering, (i.e. 
n-grams, POS tags, user information) BIBREF1 , researchers have proposed neural-based models with the advent of larger datasets. Convolutional Neural Networks and Recurrent Neural Networks have been applied to detect abusive language, and they have outperformed traditional machine learning classifiers such as Logistic Regression and SVM BIBREF15 , BIBREF16 . However, there are no studies investigating the efficiency of neural models with large-scale datasets over 100K. Methodology This section illustrates our implementations on traditional machine learning classifiers and neural network based models in detail. Furthermore, we describe additional features and variant models investigated. Traditional Machine Learning Models We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. Each classifier is implemented with the following specifications: Naïve Bayes (NB): Multinomial NB with additive smoothing constant 1 Logistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization Support Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function Random Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees Gradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function Neural Network based Models Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models. A pre-trained GloVe BIBREF17 representation is used for word-level features. CNN: We adopt Kim's kim2014convolutional implementation as the baseline. The word-level CNN models have 3 convolutional filters of different sizes [1,2,3] with ReLU activation, and a max-pooling layer. For the character-level CNN, we use 6 convolutional filters of various sizes [3,4,5,6,7,8], then add max-pooling layers followed by 1 fully-connected layer with a dimension of 1024. Park and Fung park2017one proposed a HybridCNN model which outperformed both word-level and character-level CNNs in abusive language detection. In order to evaluate the HybridCNN for this dataset, we concatenate the output of max-pooled layers from word-level and character-level CNN, and feed this vector to a fully-connected layer in order to predict the output. All three CNN models (word-level, character-level, and hybrid) use cross entropy with softmax as their loss function and Adam BIBREF18 as the optimizer. RNN: We use bidirectional RNN BIBREF19 as the baseline, implementing a GRU BIBREF20 cell for each recurrent unit. From extensive parameter-search experiments, we chose 1 encoding layer with 50 dimensional hidden states and an input dropout probability of 0.3. The RNN models use cross entropy with sigmoid as their loss function and Adam as the optimizer. For a possible improvement, we apply a self-matching attention mechanism on RNN baseline models BIBREF21 so that they may better understand the data by retrieving text sequences twice. 
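As a concrete illustration of the feature-engineering baselines specified above, the following scikit-learn sketch builds the word-level TF-IDF pipeline with the stated hyperparameters. It is a simplified, hypothetical reconstruction rather than the authors' code; the character-level variant (3-8 grams, analyzer="char") and the remaining classifiers follow the same pattern.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

def word_level_baseline(classifier):
    # word-level 1-3 gram BOW, TF-IDF weighted, truncated to 14,000 features
    # (the paper truncates by TF-IDF value; max_features is a simplification)
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 3), max_features=14000)),
        ("clf", classifier),
    ])

nb_model = word_level_baseline(MultinomialNB(alpha=1.0))              # additive smoothing 1
lr_model = word_level_baseline(LogisticRegression(C=1.0,              # L2 constant 1
                                                  solver="lbfgs",
                                                  max_iter=1000))

# hypothetical usage on preprocessed tweets and their 4-way labels:
# lr_model.fit(train_tweets, train_labels)
# predicted = lr_model.predict(test_tweets)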
We also investigate a recently introduced method, Latent Topic Clustering (LTC) BIBREF22 . The LTC method extracts latent topic information from the hidden states of RNN, and uses it for additional information in classifying the text data. Feature Extension While manually analyzing the raw dataset, we noticed that looking at the tweet one has replied to or has quoted, provides significant contextual information. We call these, “context tweets". As humans can better understand a tweet with the reference of its context, our assumption is that computers also benefit from taking context tweets into account in detecting abusive language. As shown in the examples below, (2) is labeled abusive due to the use of vulgar language. However, the intention of the user can be better understood with its context tweet (1). (1) I hate when I'm sitting in front of the bus and somebody with a wheelchair get on. INLINEFORM0 (2) I hate it when I'm trying to board a bus and there's already an as**ole on it. Similarly, context tweet (3) is important in understanding the abusive tweet (4), especially in identifying the target of the malice. (3) Survivors of #Syria Gas Attack Recount `a Cruel Scene'. INLINEFORM0 (4) Who the HELL is “LIKE" ING this post? Sick people.... Huang et al. huang2016modeling used several attributes of context tweets for sentiment analysis in order to improve the baseline LSTM model. However, their approach was limited because the meta-information they focused on—author information, conversation type, use of the same hashtags or emojis—are all highly dependent on data. In order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models. We use the same baseline model to convert context tweets to vectors, then concatenate these vectors with outputs of their corresponding labeled tweets. More specifically, we concatenate max-pooled layers of context and labeled tweets for the CNN baseline model. As for RNN, the last hidden states of context and labeled tweets are concatenated. Dataset Hate and Abusive Speech on Twitter BIBREF10 classifies tweets into 4 labels, “normal", “spam", “hateful" and “abusive". We were only able to crawl 70,904 tweets out of 99,996 tweet IDs, mainly because the tweet was deleted or the user account had been suspended. Table shows the distribution of labels of the crawled data. Data Preprocessing In the data preprocessing steps, user IDs, URLs, and frequently used emojis are replaced as special tokens. Since hashtags tend to have a high correlation with the content of the tweet BIBREF23 , we use a segmentation library BIBREF24 for hashtags to extract more information. For character-level representations, we apply the method Zhang et al. zhang2015character proposed. Tweets are transformed into one-hot encoded vectors using 70 character dimensions—26 lower-cased alphabets, 10 digits, and 34 special characters including whitespace. Training and Evaluation In training the feature engineering based machine learning classifiers, we truncate vector representations according to the TF-IDF values (the top 14,000 and 53,000 for word-level and character-level representations, respectively) to avoid overfitting. For neural network models, words that appear only once are replaced as unknown tokens. Since the dataset used is not split into train, development, and test sets, we perform 10-fold cross validation, obtaining the average of 5 tries; we divide the dataset randomly by a ratio of 85:5:10, respectively. 
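The context-tweet extension described above reduces to encoding two token sequences with the same recurrent encoder and concatenating their final states before classification. The PyTorch sketch below illustrates this for the bidirectional GRU baseline; the class name, vocabulary handling, and embedding setup are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class ContextGRUClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=50, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.dropout = nn.Dropout(0.3)                    # input dropout as specified
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(4 * hidden, num_classes)     # two bi-GRU states concatenated

    def encode(self, token_ids):
        # returns the concatenated last hidden states of both directions
        _, h_n = self.encoder(self.dropout(self.embed(token_ids)))
        return torch.cat([h_n[-2], h_n[-1]], dim=-1)      # (batch, 2 * hidden)

    def forward(self, labeled_ids, context_ids):
        labeled_vec = self.encode(labeled_ids)
        context_vec = self.encode(context_ids)
        return self.out(torch.cat([labeled_vec, context_vec], dim=-1))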
In order to evaluate the overall performance, we calculate the weighted average of precision, recall, and F1 scores of all four labels, “normal”, “spam”, “hateful”, and “abusive”. Empirical Results As shown in Table , neural network models are more accurate than feature engineering based models (i.e. NB, SVM, etc.) except for the LR model—the best LR model has the same F1 score as the best CNN model. Among traditional machine learning models, the most accurate in classifying abusive language is the LR model followed by ensemble models such as GBT and RF. Character-level representations improve F1 scores of SVM and RF classifiers, but they have no positive effect on other models. For neural network models, RNN with LTC modules have the highest accuracy score, but there are no significant improvements from its baseline model and its attention-added model. Similarly, HybridCNN does not improve the baseline CNN model. For both CNN and RNN models, character-level features significantly decrease the accuracy of classification. The use of context tweets generally have little effect on baseline models, however they noticeably improve the scores of several metrics. For instance, CNN with context tweets score the highest recall and F1 for “hateful" labels, and RNN models with context tweets have the highest recall for “abusive" tweets. Discussion and Conclusion While character-level features are known to improve the accuracy of neural network models BIBREF16 , they reduce classification accuracy for Hate and Abusive Speech on Twitter. We conclude this is because of the lack of labeled data as well as the significant imbalance among the different labels. Unlike neural network models, character-level features in traditional machine learning classifiers have positive results because we have trained the models only with the most significant character elements using TF-IDF values. Variants of neural network models also suffer from data insufficiency. However, these models show positive performances on “spam" (14%) and “hateful" (4%) tweets—the lower distributed labels. The highest F1 score for “spam" is from the RNN-LTC model (0.551), and the highest for “hateful" is CNN with context tweets (0.309). Since each variant model excels in different metrics, we expect to see additional improvements with the use of ensemble models of these variants in future works. In this paper, we report the baseline accuracy of different learning models as well as their variants on the recently introduced dataset, Hate and Abusive Speech on Twitter. Experimental results show that bidirectional GRU networks with LTC provide the most accurate results in detecting abusive language. Additionally, we present the possibility of using ensemble models of variant models and features for further improvements. Acknowledgments K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. This work was supported by the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (No. 2016M3C4A7952632), the Technology Innovation Program (10073144) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea). We would also like to thank Yongkeun Hwang and Ji Ho Park for helpful discussions and their valuable insights.
[ "using tweets that one has replied or quoted to as contextual information", "text sequences of context tweets" ]
2,060
qasper_e
en
null
cd873a30eb42011afd9007db9d38aad584c2734f48330dce
Do they evaluate their learned representations on downstream tasks?
Introduction Twitter is a widely used microblogging platform, where users post and interact with messages, “tweets”. Understanding the semantic representation of tweets can benefit a plethora of applications such as sentiment analysis BIBREF0 , BIBREF1 , hashtag prediction BIBREF2 , paraphrase detection BIBREF3 and microblog ranking BIBREF4 , BIBREF5 . However, tweets are difficult to model as they pose several challenges such as short length, informal words, unusual grammar and misspellings. Recently, researchers are focusing on leveraging unsupervised representation learning methods based on neural networks to solve this problem. Once these representations are learned, we can use off-the-shelf predictors taking the representation as input to solve the downstream task BIBREF6 , BIBREF7 . These methods enjoy several advantages: (1) they are cheaper to train, as they work with unlabelled data, (2) they reduce the dependence on domain level experts, and (3) they are highly effective across multiple applications, in practice. Despite this, there is a lack of prior work which surveys the tweet-specific unsupervised representation learning models. In this work, we attempt to fill this gap by investigating the models in an organized fashion. Specifically, we group the models based on the objective function it optimizes. We believe this work can aid the understanding of the existing literature. We conclude the paper by presenting interesting future research directions, which we believe are fruitful in advancing this field by building high-quality tweet representation learning models. Unsupervised Tweet Representation Models There are various models spanning across different model architectures and objective functions in the literature to compute tweet representation in an unsupervised fashion. These models work in a semi-supervised way - the representations generated by the model is fed to an off-the-shelf predictor like Support Vector Machines (SVM) to solve a particular downstream task. These models span across a wide variety of neural network based architectures including average of word vectors, convolutional-based, recurrent-based and so on. We believe that the performance of these models is highly dependent on the objective function it optimizes – predicting adjacent word (within-tweet relationships), adjacent tweet (inter-tweet relationships), the tweet itself (autoencoder), modeling from structured resources like paraphrase databases and weak supervision. In this section, we provide the first of its kind survey of the recent tweet-specific unsupervised models in an organized fashion to understand the literature. Specifically, we categorize each model based on the optimized objective function as shown in Figure FIGREF1 . Next, we study each category one by one. Modeling within-tweet relationships Motivation: Every tweet is assumed to have a latent topic vector, which influences the distribution of the words in the tweet. For example, though the appearance of the phrase catch the ball is frequent in the corpus, if we know that the topic of a tweet is about “technology”, we can expect words such as bug or exception after the word catch (ignoring the) instead of the word ball since catch the bug/exception is more plausible under the topic “technology”. On the other hand, if the topic of the tweet is about “sports”, then we can expect ball after catch. These intuitions indicate that the prediction of neighboring words for a given word strongly relies on the tweet also. 
Models: BIBREF8 's work is the first to exploit this idea to compute distributed document representations that are good at predicting words in the document. They propose two models, PV-DM and PV-DBOW, which are extensions of the Continuous Bag Of Words (CBOW) and Skip-gram variants of the popular Word2Vec model BIBREF9 , respectively – PV-DM inserts an additional document token (which can be thought of as another word) that is shared across all contexts generated from the same document; PV-DBOW attempts to predict the sampled words from the document given the document representation. Although originally employed for paragraphs and documents, these models work better than traditional models such as BOW BIBREF10 and LDA BIBREF11 for tweet classification and microblog retrieval tasks BIBREF12 . The authors in BIBREF12 make the PV-DM and PV-DBOW models concept-aware (a rich semantic signal from a tweet) by augmenting two features: attention over contextual words and conceptual tweet embedding, which jointly exploit concept-level senses of tweets to compute better representations. Both the discussed works have the following characteristics: (1) they use a shallow architecture, which enables fast training, (2) computing representations for test tweets requires computing gradients, which is time-consuming for real-time Twitter applications, and (3) most importantly, they fail to exploit textual information from related tweets that can bear salient semantic signals. Modeling inter-tweet relationships Motivation: To capture rich tweet semantics, researchers are attempting to exploit a type of sentence-level Distributional Hypothesis BIBREF10 , BIBREF13 . The idea is to infer the tweet representation from the content of adjacent tweets in a related stream, such as the user's Twitter timeline or a topical, retweet, or conversational stream. This approach significantly alleviates the context insufficiency problem caused by the ambiguous and short nature of tweets BIBREF0 , BIBREF14 . Models: Skip-thought vectors BIBREF15 (STV) is a widely popular sentence encoder, which is trained to predict adjacent sentences in the book corpus BIBREF16 . Although testing is cheap, as it involves only a forward propagation of the test sentence, STV is very slow to train due to its complicated model architecture. To combat this computational inefficiency, FastSent BIBREF17 proposes a simple additive (log-linear) sentence model, which predicts adjacent sentences (represented as BOW) from the BOW representation of some sentence in context. This model can exploit the same signal, but at a much lower computational expense. Parallel to this work, Siamese CBOW BIBREF18 develops a model which directly compares the BOW representations of two sentences to bring the embedding of a sentence closer to that of its adjacent sentence and away from that of a randomly occurring sentence in the corpus. For FastSent and Siamese CBOW, the test sentence representation is a simple average of word vectors obtained after training. Both of these models are general-purpose sentence representation models trained on the book corpus, yet they give competitive performance over previous models on the tweet semantic similarity computation task. BIBREF14 's model attempts to exploit these signals directly from Twitter. With the help of an attention technique and learned user representations, this log-linear model is able to capture salient semantic information from the chronologically adjacent tweets of a target tweet in the user's Twitter timeline.
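Before moving on to models built from structured resources, the paragraph-vector models discussed at the beginning of this section can be illustrated with a short gensim sketch; the hyperparameters, variable names, and the toy tweet are assumptions for illustration only.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def train_pv_dbow(tokenized_tweets):
    # tokenized_tweets: list of token lists, e.g. [["catch", "the", "bug"], ...]
    corpus = [TaggedDocument(words=tokens, tags=[str(i)])
              for i, tokens in enumerate(tokenized_tweets)]
    # dm=0 gives PV-DBOW; dm=1 gives PV-DM
    return Doc2Vec(corpus, dm=0, vector_size=200, window=5, min_count=1, epochs=20)

# inference for an unseen tweet requires additional gradient steps, as noted above:
# model = train_pv_dbow(my_tweets)
# vec = model.infer_vector(["catch", "the", "exception"])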
Modeling from structured resources Motivation: In recent times, building representation models based on supervision from richly structured resources such as Paraphrase Database (PPDB) BIBREF19 (containing noisy phrase pairs) has yielded high quality sentence representations. These methods work by maximizing the similarity of the sentences in the learned semantic space. Models: CHARAGRAM BIBREF20 embeds textual sequences by learning a character-based compositional model that involves addition of the vectors of its character n-grams followed by an elementwise nonlinearity. This simpler architecture trained on PPDB is able to beat models with complex architectures like CNN, LSTM on SemEval 2015 Twitter textual similarity task by a large margin. This result emphasizes the importance of character-level models that address differences due to spelling variation and word choice. The authors in their subsequent work BIBREF21 conduct a comprehensive analysis of models spanning the range of complexity from word averaging to LSTMs for its ability to do transfer and supervised learning after optimizing a margin based loss on PPDB. For transfer learning, they find models based on word averaging perform well on both the in-domain and out-of-domain textual similarity tasks, beating LSTM model by a large margin. On the other hand, the word averaging models perform well for both sentence similarity and textual entailment tasks, outperforming the LSTM. However, for sentiment classification task, they find LSTM (trained on PPDB) to beat the averaging models to establish a new state of the art. The above results suggest that structured resources play a vital role in computing general-purpose embeddings useful in downstream applications. Modeling as an autoencoder Motivation: The autoencoder based approach learns latent (or compressed) representation by reconstructing its own input. Since textual data like tweets contain discrete input signals, sequence-to-sequence models BIBREF22 like STV can be used to build the solution. The encoder model which encodes the input tweet can typically be a CNN BIBREF23 , recurrent models like RNN, GRU, LSTM BIBREF24 or memory networks BIBREF25 . The decoder model which generates the output tweet can typically be a recurrent model that predicts a output token at every time step. Models: Sequential Denoising Autoencoders (SDAE) BIBREF17 is a LSTM-based sequence-to-sequence model, which is trained to recover the original data from the corrupted version. SDAE produces robust representations by learning to represent the data in terms of features that explain its important factors of variation. Tweet2Vec BIBREF3 is a recent model which uses a character-level CNN-LSTM encoder-decoder architecture trained to construct the input tweet directly. This model outperforms competitive models that work on word-level like PV-DM, PV-DBOW on semantic similarity computation and sentiment classification tasks, thereby showing that the character-level nature of Tweet2Vec is best-suited to deal with the noise and idiosyncrasies of tweets. Tweet2Vec controls the generalization error by using a data augmentation technique, wherein tweets are replicated and some of the words in the replicated tweets are replaced with their synonyms. Both SDAE and Tweet2Vec has the advantage that they don't need a coherent inter-sentence narrative (like STV), which is hard to obtain in Twitter. 
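Several of the models discussed so far (FastSent, Siamese CBOW, and the word-averaging variants) represent a test sentence as a simple average of word vectors. A minimal sketch of this representation and of a cosine-based tweet similarity is given below; the pre-trained word-vector lookup is an assumed input.

import numpy as np

def average_embedding(tokens, word_vectors, dim=300):
    # word_vectors: mapping from token to a numpy vector (e.g. loaded from FastText/GloVe)
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def tweet_similarity(tokens_a, tokens_b, word_vectors):
    a = average_embedding(tokens_a, word_vectors)
    b = average_embedding(tokens_b, word_vectors)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0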
Modeling using weak supervision Motivation: In a weakly supervised setup, we create labels for a tweet automatically and predict them to learn potentially sophisticated models than those obtained by unsupervised learning alone. Examples of labels include sentiment of the overall tweet, words like hashtag present in the tweet and so on. This technique can create a huge labeled dataset especially for building data-hungry, sophisticated deep learning models. Models: BIBREF26 learns sentiment-specific word embedding (SSWE), which encodes the polarity information in the word representations so that words with contrasting polarities and similar syntactic context (like good and bad) are pushed away from each other in the semantic space that it learns. SSWE utilizes the massive distant-supervised tweets collected by positive and negative emoticons to build a powerful tweet representation, which are shown to be useful in tasks such as sentiment classification and word similarity computation in sentiment lexicon. BIBREF2 observes that hashtags in tweets can be considered as topics and hence tweets with similar hashtags must come closer to each other. Their model predicts the hashtags by using a Bi-GRU layer to embed the tweets from its characters. Due to subword modeling, such character-level models can approximate the representations for rare words and new words (words not seen during training) in the test tweets really well. This model outperforms the word-level baselines for hashtag prediction task, thereby concluding that exploring character-level models for tweets is a worthy research direction to pursue. Both these works fail to study the model's generality BIBREF27 , i.e., the ability of the model to transfer the learned representations to diverse tasks. Future Directions In this section we present the future research directions which we believe can be worth pursuing to generate high quality tweet embeddings. Conclusion In this work we study the problem of learning unsupervised tweet representations. We believe our survey of the existing works based on the objective function can give vital perspectives to researchers and aid their understanding of the field. We also believe the future research directions studied in this work can help in breaking the barriers in building high quality, general purpose tweet representation models.
[ "No", "No" ]
1,906
qasper_e
en
null
088e3b32e2812d8fcca9f4fad976de94317893b5cb210f4a
Do they build a model to automatically detect demographic, lingustic or psycological dimensons of people?
Introduction Blogging gained momentum in 1999 and became especially popular after the launch of freely available, hosted platforms such as blogger.com or livejournal.com. Blogging has progressively been used by individuals to share news, ideas, and information, but it has also developed a mainstream role to the extent that it is being used by political consultants and news services as a tool for outreach and opinion forming as well as by businesses as a marketing tool to promote products and services BIBREF0 . For this paper, we compiled a very large geolocated collection of blogs, written by individuals located in the U.S., with the purpose of creating insightful mappings of the blogging community. In particular, during May-July 2015, we gathered the profile information for all the users that have self-reported their location in the U.S., along with a number of posts for all their associated blogs. We utilize this blog collection to generate maps of the U.S. that reflect user demographics, language use, and distributions of psycholinguistic and semantic word classes. We believe that these maps can provide valuable insights and partial verification of previous claims in support of research in linguistic geography BIBREF1 , regional personality BIBREF2 , and language analysis BIBREF3 , BIBREF4 , as well as psychology and its relation to human geography BIBREF5 . Data Collection Our premise is that we can generate informative maps using geolocated information available on social media; therefore, we guide the blog collection process with the constraint that we only accept blogs that have specific location information. Moreover, we aim to find blogs belonging to writers from all 50 U.S. states, which will allow us to build U.S. maps for various dimensions of interest. We first started by collecting a set of profiles of bloggers that met our location specifications by searching individual states on the profile finder on http://www.blogger.com. Starting with this list, we can locate the profile page for a user, and subsequently extract additional information, which includes fields such as name, email, occupation, industry, and so forth. It is important to note that the profile finder only identifies users that have an exact match to the location specified in the query; we thus built and ran queries that used both state abbreviations (e.g., TX, AL), as well as the states' full names (e.g., Texas, Alabama). After completing all the processing steps, we identified 197,527 bloggers with state location information. For each of these bloggers, we found their blogs (note that a blogger can have multiple blogs), for a total of 335,698 blogs. For each of these blogs, we downloaded the 21 most recent blog postings, which were cleaned of HTML tags and tokenized, resulting in a collection of 4,600,465 blog posts. Maps from Blogs Our dataset provides mappings between location, profile information, and language use, which we can leverage to generate maps that reflect demographic, linguistic, and psycholinguistic properties of the population represented in the dataset. People Maps The first map we generate depicts the distribution of the bloggers in our dataset across the U.S. Figure FIGREF1 shows the density of users in our dataset in each of the 50 states. For instance, the densest state was found to be California with 11,701 users. The second densest is Texas, with 9,252 users, followed by New York, with 9,136. The state with the fewest bloggers is Delaware with 1,217 users. 
Not surprisingly, this distribution correlates well with the population of these states, with a Spearman's rank correlation INLINEFORM0 of 0.91 and a p-value INLINEFORM1 0.0001, and is very similar to the one reported in Lin and Halavais Lin04. Figure FIGREF1 also shows the cities mentioned most often in our dataset. In particular, it illustrates all 227 cities that have at least 100 bloggers. The bigger the dot on the map, the larger the number of users found in that city. The five top blogger-dense cities, in order, are: Chicago, New York, Portland, Seattle, and Atlanta. We also generate two maps that delineate the gender distribution in the dataset. Overall, the blogging world seems to be dominated by females: out of 153,209 users who self-reported their gender, only 52,725 are men and 100,484 are women. Figures FIGREF1 and FIGREF1 show the percentage of male and female bloggers in each of the 50 states. As seen in this figure, there are more than the average number of male bloggers in states such as California and New York, whereas Utah and Idaho have a higher percentage of women bloggers. Another profile element that can lead to interesting maps is the Industry field BIBREF6 . Using this field, we created different maps that plot the geographical distribution of industries across the country. As an example, Figure FIGREF2 shows the percentage of the users in each state working in the automotive and tourism industries respectively. Linguistic Maps Another use of the information found in our dataset is to build linguistic maps, which reflect the geographic lexical variation across the 50 states BIBREF7 . We generate maps that represent the relative frequency by which a word occurs in the different states. Figure FIGREF3 shows sample maps created for two different words. The figure shows the map generated for one location specific word, Maui, which unsurprisingly is found predominantly in Hawaii, and a map for a more common word, lake, which has a high occurrence rate in Minnesota (Land of 10,000 Lakes) and Utah (home of the Great Salt Lake). Our demo described in Section SECREF4 , can also be used to generate maps for function words, which can be very telling regarding people's personality BIBREF8 . Psycholinguistic and Semantic Maps LIWC. In addition to individual words, we can also create maps for word categories that reflect a certain psycholinguistic or semantic property. Several lexical resources, such as Roget or Linguistic Inquiry and Word Count BIBREF9 , group words into categories. Examples of such categories are Money, which includes words such as remuneration, dollar, and payment; or Positive feelings with words such as happy, cheerful, and celebration. Using the distribution of the individual words in a category, we can compile distributions for the entire category, and therefore generate maps for these word categories. For instance, figure FIGREF8 shows the maps created for two categories: Positive Feelings and Money. The maps are not surprising, and interestingly they also reflect an inverse correlation between Money and Positive Feelings . Values. We also measure the usage of words related to people's core values as reported by Boyd et al. boyd2015. The sets of words, or themes, were excavated using the Meaning Extraction Method (MEM) BIBREF10 . MEM is a topic modeling approach applied to a corpus of texts created by hundreds of survey respondents from the U.S. who were asked to freely write about their personal values. 
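Both the LIWC-category and the value-theme maps rest on the same statistic, namely the relative frequency of a word set in the blog posts of each state. A rough sketch of that aggregation, and of the population correlation reported above, could look as follows; the data structures are hypothetical stand-ins for the collected corpus.

from collections import Counter
from scipy.stats import spearmanr

def category_share_by_state(posts_by_state, category_words):
    # posts_by_state: dict mapping a state to a list of tokenized blog posts (assumed)
    shares = {}
    for state, posts in posts_by_state.items():
        counts = Counter(token for post in posts for token in post)
        total = sum(counts.values())
        in_category = sum(counts[w] for w in category_words)
        shares[state] = in_category / total if total else 0.0
    return shares

# correlation between blogger counts and census population, as reported above:
# states = sorted(bloggers_per_state)
# rho, p = spearmanr([bloggers_per_state[s] for s in states],
#                    [population_per_state[s] for s in states])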
To illustrate, Figure FIGREF9 shows the geographical distributions of two of these value themes: Religion and Hard Work. Southeastern states often considered as the nation's “Bible Belt” BIBREF11 were found to have generally higher usage of Religion words such as God, bible, and church. Another broad trend was that western-central states (e.g., Wyoming, Nebraska, Iowa) commonly blogged about Hard Work, using words such as hard, work, and job more often than bloggers in other regions. Web Demonstration A prototype, interactive charting demo is available at http://lit.eecs.umich.edu/~geoliwc/. In addition to drawing maps of the geographical distributions on the different LIWC categories, the tool can report the three most and least correlated LIWC categories in the U.S. and compare the distributions of any two categories. Conclusions In this paper, we showed how we can effectively leverage a prodigious blog dataset. Not only does the dataset bring out the extensive linguistic content reflected in the blog posts, but also includes location information and rich metadata. These data allow for the generation of maps that reflect the demographics of the population, variations in language use, and differences in psycholinguistic and semantic categories. These mappings can be valuable to both psychologists and linguists, as well as lexicographers. A prototype demo has been made available together with the code used to collect our dataset. Acknowledgments This material is based in part upon work supported by the National Science Foundation (#1344257) and by the John Templeton Foundation (#48503). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the John Templeton Foundation. We would like to thank our colleagues Hengjing Wang, Jiatao Fan, Xinghai Zhang, and Po-Jung Huang who provided technical help with the implementation of the demo.
[ "No", "No" ]
1,443
qasper_e
en
null
ecdc1201351d1467d8b55b82ce8e54846f36e141284db44e
What is best performing model among author's submissions, what performance it had?
Introduction In the age of information dissemination without quality control, it has enabled malicious users to spread misinformation via social media and aim individual users with propaganda campaigns to achieve political and financial gains as well as advance a specific agenda. Often disinformation is complied in the two major forms: fake news and propaganda, where they differ in the sense that the propaganda is possibly built upon true information (e.g., biased, loaded language, repetition, etc.). Prior works BIBREF0, BIBREF1, BIBREF2 in detecting propaganda have focused primarily at document level, typically labeling all articles from a propagandistic news outlet as propaganda and thus, often non-propagandistic articles from the outlet are mislabeled. To this end, EMNLP19DaSanMartino focuses on analyzing the use of propaganda and detecting specific propagandistic techniques in news articles at sentence and fragment level, respectively and thus, promotes explainable AI. For instance, the following text is a propaganda of type `slogan'. Trump tweeted: $\underbrace{\text{`}`{\texttt {BUILD THE WALL!}"}}_{\text{slogan}}$ Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s). Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employed different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout, topics, etc. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in FLC and SLC tasks, respectively. System Description ::: Linguistic, Layout and Topical Features Some of the propaganda techniques BIBREF3 involve word and phrases that express strong emotional implications, exaggeration, minimization, doubt, national feeling, labeling , stereotyping, etc. This inspires us in extracting different features (Table TABREF1) including the complexity of text, sentiment, emotion, lexical (POS, NER, etc.), layout, etc. To further investigate, we use topical features (e.g., document-topic proportion) BIBREF4, BIBREF5, BIBREF6 at sentence and document levels in order to determine irrelevant themes, if introduced to the issue being discussed (e.g., Red Herring). For word and sentence representations, we use pre-trained vectors from FastText BIBREF7 and BERT BIBREF8. System Description ::: Sentence-level Propaganda Detection Figure FIGREF2 (left) describes the three components of our system for SLC task: features, classifiers and ensemble. The arrows from features-to-classifier indicate that we investigate linguistic, layout and topical features in the two binary classifiers: LogisticRegression and CNN. 
For CNN, we follow the architecture of DBLP:conf/emnlp/Kim14 for sentence-level classification, initializing the word vectors by FastText or BERT. We concatenate features in the last hidden layer before classification. One of our strong classifiers includes BERT that has achieved state-of-the-art performance on multiple NLP benchmarks. Following DBLP:conf/naacl/DevlinCLT19, we fine-tune BERT for binary classification, initializing with a pre-trained model (i.e., BERT-base, Cased). Additionally, we apply a decision function such that a sentence is tagged as propaganda if prediction probability of the classifier is greater than a threshold ($\tau $). We relax the binary decision boundary to boost recall, similar to pankajgupta:CrossRE2019. Ensemble of Logistic Regression, CNN and BERT: In the final component, we collect predictions (i.e., propaganda label) for each sentence from the three ($\mathcal {M}=3$) classifiers and thus, obtain $\mathcal {M}$ number of predictions for each sentence. We explore two ensemble strategies (Table TABREF1): majority-voting and relax-voting to boost precision and recall, respectively. System Description ::: Fragment-level Propaganda Detection Figure FIGREF2 (right) describes our system for FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\_e$) and character embeddings $c\_e$, token-level features ($t\_f$) such as polarity, POS, NER, etc. (2) LSTM-CRF+Multi-grain that jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add binary sentence classification loss to sequence tagging weighted by a factor of $\alpha $. (3) LSTM-CRF+Multi-task that performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification). Ensemble of Multi-grain, Multi-task LSTM-CRF with BERT: Here, we build an ensemble by considering propagandistic fragments (and its type) from each of the sequence taggers. In doing so, we first perform majority voting at the fragment level for the fragment where their spans exactly overlap. In case of non-overlapping fragments, we consider all. However, when the spans overlap (though with the same label), we consider the fragment with the largest span. Experiments and Evaluation Data: While the SLC task is binary, the FLC consists of 18 propaganda techniques BIBREF3. We split (80-20%) the annotated corpus into 5-folds and 3-folds for SLC and FLC tasks, respectively. The development set of each the folds is represented by dev (internal); however, the un-annotated corpus used in leaderboard comparisons by dev (external). We remove empty and single token sentences after tokenization. Experimental Setup: We use PyTorch framework for the pre-trained BERT model (Bert-base-cased), fine-tuned for SLC task. In the multi-granularity loss, we set $\alpha = 0.1$ for sentence classification based on dev (internal, fold1) scores. We use BIO tagging scheme of NER in FLC task. For CNN, we follow DBLP:conf/emnlp/Kim14 with filter-sizes of [2, 3, 4, 5, 6], 128 filters and 16 batch-size. We compute binary-F1and macro-F1 BIBREF12 in SLC and FLC, respectively on dev (internal). Experiments and Evaluation ::: Results: Sentence-Level Propaganda Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. 
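The decision-function relaxation and the two ensemble voting schemes described above can be expressed in a few lines of code. The sketch below is an illustration with assumed inputs (per-classifier propaganda probabilities or binary labels for one sentence), not the exact competition code.

import numpy as np

def relaxed_decision(prob_propaganda, tau=0.35):
    # tag a sentence as propaganda if the classifier probability exceeds the threshold
    return int(prob_propaganda >= tau)

def majority_voting(labels):
    # labels: 0/1 predictions of the M classifiers for one sentence
    return int(sum(labels) > len(labels) / 2)

def relax_voting(labels, min_votes=1):
    # boost recall: under this assumed variant, a single positive vote is enough
    return int(sum(labels) >= min_votes)

# example for one sentence and M = 3 classifiers (logistic regression, CNN, BERT)
votes = [relaxed_decision(0.41), 0, 1]
print(majority_voting(votes), relax_voting(votes))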
In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\tau \ge 0.35$ is found optimal. We choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine-similarity is greater than $\lambda \in \lbrace .99, .95\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess. Finally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at 4th position. Experiments and Evaluation ::: Results: Fragment-Level Propaganda Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external). Finally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position. Conclusion and Future Work Our system (Team: MIC-CIS) explores different neural architectures (CNN, BERT and LSTM-CRF) with linguistic, layout and topical features to address the tasks of fine-grained propaganda detection. We have demonstrated gains in performance due to the features, ensemble schemes, multi-tasking and multi-granularity architectures. Compared to the other participating systems, our submissions are ranked 3rd and 4th in FLC and SLC tasks, respectively. In future, we would like to enrich BERT models with linguistic, layout and topical features during their fine-tuning. Further, we would also be interested in understanding and analyzing the neural network learning, i.e., extracting salient fragments (or key-phrases) in the sentence that generate propaganda, similar to pankajgupta:2018LISA in order to promote explainable AI.
[ "For SLC task, the \"ltuorp\" team has the best performing model (0.6323/0.6028/0.6649 for F1/P/R respectively) and for FLC task the \"newspeak\" team has the best performing model (0.2488/0.2863/0.2201 for F1/P/R respectively)." ]
1,541
qasper_e
en
null
e2b21e6c72b2a258c8d8dd7b7f94ee08b4a1edd012ccce04
What is the corpus used for the task?
Introduction Natural languages evolve and words have always been subject to semantic change over time BIBREF1. With the rise of large digitized text resources recent NLP technologies have made it possible to capture such change with vector space models BIBREF2, BIBREF3, BIBREF4, BIBREF5, topic models BIBREF6, BIBREF7, BIBREF8, and sense clustering models BIBREF9. However, many approaches for detecting LSC differ profoundly from each other and therefore drawing comparisons between them can be challenging BIBREF10. Not only do architectures for detecting LSC vary, their performance is also often evaluated without access to evaluation data or too sparse data sets. In cases where evaluation data is available, oftentimes LSCD systems are not evaluated on the same data set which hinders the research community to draw comparisons. For this reason we report the results of the first shared task on unsupervised lexical semantic change detection in German that is based on an annotated data set to guarantee objective reasoning throughout different approaches. The task was organized as part of the seminar 'Lexical Semantic Change Detection' at the IMS Stuttgart in the summer term of 2019. Task The goal of the shared task was to create an architecture to detect semantic change and to rank words according to their degree of change between two different time periods. Given two corpora Ca and Cb, the target words had to be ranked according to their degree of lexical semantic change between Ca and Cb as annotated by human judges. A competition was set up on Codalab and teams mostly consisting of 2 people were formed to take part in the task. There was one group consisting of 3 team members and two individuals who entered the task on their own. In total there were 12 LSCD systems participating in the shared task. The shared task was divided into three phases, i.e., development, testing and analysis phase. In the development phase each team implemented a first version of their model based on a trial data set and submitted it subsequently. In the testing phase the testing data was made public and participants applied their models to the test data with a restriction of possible result uploads to 30. The leaderboard was public at all times. Eventually, the analysis phase was entered and the models of the testing phase were evaluated in terms of the predictions they made and parameters could be tuned further. The models and results will be discussed in detail in sections 7 and 8. Corpora The task, as framed above, requires to detect the semantic change between two corpora. The two corpora used in the shared task correspond to the diachronic corpus pair from BIBREF0: DTA18 and DTA19. They consist of subparts of DTA corpus BIBREF11 which is a freely available lemmatized, POS-tagged and spelling-normalized diachronic corpus of German containing texts from the 16th to the 20th century. DTA18 contains 26 million sentences published between 1750-1799 and DTA19 40 million between 1850-1899. The corpus version used in the task has the following format: "year [tab] lemma1 lemma2 lemma3 ...". Evaluation The Diachronic Usage Relatedness (DURel) gold standard data set includes 22 target words and their varying degrees of semantic change BIBREF12. For each of these target words a random sample of use pairs from the DTA corpus was retrieved and annotated. The annotators were required to rate the pairs according to their semantic relatedness on a scale from 1 to 4 (unrelated - identical meanings) for two time periods. 
The average Spearman's $\rho $ between the five annotators was 0.66 for 1,320 use pairs. The resulting word ranking of the DURel data set is determined by the mean usage relatedness across two time periods and is used as the benchmark to compare the models’ performances in the shared task. Evaluation ::: Metric The output of a system with the target words in the predicted order is compared to the gold ranking of the DURel data set. As the metric to assess how well a model's output fits the gold ranking, Spearman's $\rho $ was used. The higher the Spearman's rank-order correlation, the better the system's performance. Evaluation ::: Baselines Models were compared to two baselines for the shared task: (1) the log-transformed normalized frequency difference (FD), and (2) count vectors with column intersection and cosine distance (CNT + CI + CD). The window size for CNT + CI + CD was 10. More information on these models can be found in BIBREF0. Participating Systems Participants mostly rely on the models compared in BIBREF0 and apply modifications to improve them. In particular, most teams make use of skip-gram with negative sampling (SGNS) based on BIBREF13 to learn the semantic spaces of the two time periods and orthogonal Procrustes (OP) to align these vector spaces, similar to the approach by BIBREF14. Different meaning representations such as sense clusters are used as well. As the measure to detect the degree of LSC, all teams except one choose cosine distance (CD). This team uses Jensen-Shannon distance (JSD) instead, which computes the distance between probability distributions BIBREF15. The models of each team will be briefly introduced in this section. Participating Systems ::: sorensbn Team sorensbn makes use of SGNS + OP + CD to detect LSC. They use hyperparameters similar to those in BIBREF0 to tune the SGNS model. They use an open-source noise-aware implementation to improve the OP alignment BIBREF16. Participating Systems ::: tidoe Team tidoe builds on SGNS + OP + CD, but they add a transformation step to obtain binarized representations of the matrices BIBREF17. This step is taken to counter the bias that can occur in vector-space models based on frequencies BIBREF18. Participating Systems ::: in vain The team applies a model based on SGNS with vector initialization alignment and cosine distance (SGNS + VI + CD). Vector initialization is an alignment strategy where the vector space learning model for $t_2$ is initialized with the vectors from $t_1$ BIBREF19. Since SGNS + VI + CD does not perform as well as other models in BIBREF0, they alter the vector initialization process by initializing on the complete model instead of only the word matrix of $t_1$ to obtain improved results. Participating Systems ::: Evilly In line with previous approaches, team Evilly builds upon SGNS + OP + CD. They alter the OP step by using only high-frequency words for alignment. Participating Systems ::: DAF Team DAF uses an architecture based on learning vectors with fastText, alignment with unsupervised and supervised variations of OP, and CD, using the MUSE package BIBREF20, BIBREF21. For the supervised alignment, stop words are used. The underlying assumption is that stop words serve as functional units of language and their usage should be consistent over time. Participating Systems ::: SnakesOnAPlane The team learns vector spaces with count vectors, positive pointwise mutual information (PPMI), and SGNS, and uses column intersection (CI) and OP as alignment techniques where applicable.
Then they compare two distance measures (CD and JSD) for the different models CNT + CI, PPMI + CI and SGNS + OP to identify which measure performs better for these models. They also experiment with different ways to remove negative values from SGNS vectors, which is needed for JSD. Participating Systems ::: TeamKulkarni15 TeamKulkarni15 uses SGNS + OP + CD with the modification of local alignment with k nearest neighbors, since other models often use global alignment that can be prone to noise BIBREF22. Participating Systems ::: Bashmaistori They use word injection (WI) alignment on PPMI vectors with CD. This approach avoids the complex alignment procedure for embeddings and is applicable to embeddings and count-based methods. They compare two implementations of word injection BIBREF23, BIBREF0 as these showed different results on different data sets. Participating Systems ::: giki Team giki uses PPMI + CI + CD to detect LSC. They state that a word sense is determined by its context, but relevant context words can also be found outside a predefined window. Therefore, they use tf-idf to select relevant context BIBREF24. Participating Systems ::: Edu-Phil Similar to team DAF they also use fastText + OP + CD. Their hypothesis is that fastText may increase the performance for less frequent words in the corpus since generating word embeddings in fasttext is based on character n-grams. Participating Systems ::: orangefoxes They use the model by BIBREF5 which is based on SGNS, but avoids alignment by treating time as a vector that may be combined with word vectors to get time-specific word vectors. Participating Systems ::: Loud Whisper Loud Whisper base their approach on BIBREF9 which is a graph-based sense clustering model. They process the data set to receive bigrams, create a co-occurence graph representation and after clustering assess the type of change per word by comparing the results against an intersection table. Their motivation is not only to use a graph-based approach, but to extend the approach by enabling change detection for all parts of speech as opposed to the original model. Results and Discussion Table TABREF8 shows the results of the shared task. All teams receive better results than baseline 1 (FD), of which a total of 8 teams outperform baseline 2 (CNT + CI + CD). The 4 top scores with $\rho $ $>$ 0.7 are either modified versions of SGNS + OP + CD or use SGNS + VI + CD. The following 4 scores in the range of 0.5 $<$ $\rho $ $<$ 0.6 are generated by the models fastText + OP + CD, SGNS + OP + CD/JSD, and PPMI + WI + CD. Contrary to the results by BIBREF0 the modified version of vector initialization shows high performance similar to OP alignment, as previously reported by BIBREF14. Some modifications to the SGNS + OP + CD approach are able to yield better results than others, e.g. noise-aware alignment and binarized matrices as compared to frequency-driven OP alignment or local alignment with KNN. Team SnakesOnAPlane compare two distance measures and their results show that JSD ($\rho $ $=$ .561) performs minimally worse than CD ($\rho $ $=$ .565) as the semantic change measure for their model. The overall best-performing model is Skip-Gram with orthogonal alignment and cosine distance (SGNS + OP + CD) with similar hyperparameters as in the model architecture described previously BIBREF0. Said architecture was used as the basis for the two best performing models. 
Team tidoe reports that binarizing matrices leads to a generally worse performance ($\rho $ $=$ .811) compared to the unmodified version of SGNS + OP + CD ($\rho $ $=$ 0.9). The noise aware alignment approach applied by team sorensbn obtains a higher score ($\rho $ $=$ .854) compared to the result reported by tidoe, but is unable to exceed the performance of the unmodified SNGS + OP + CD for the same set of hyperparameters (window size = 10, negative sampling = 1; subsampling = None). Of the 8 scores above the second baseline, 5 use an architecture that builds upon SGNS + OP + CD. Whereas in the lower score segment $\rho $ $<$ 0.5 none of the models use SGNS + OP + CD. These findings are in line with the results reported by BIBREF0, however the overall best results are lower in this shared task, which is expected from the smaller number of parameter combinations explored. Additionally, in the shared task the objective was to report the best score and not to calculate the mean which makes it more difficult to compare the robustness of the models presented here.
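For readers who want to see the dominant approach end to end, below is a minimal sketch of an SGNS + OP + CD pipeline with Spearman evaluation, assuming the two embedding spaces have already been trained and share a vocabulary. It is an illustration only, not any team's submission, and the gold scores stand in for the DURel ranking; preprocessing choices such as frequency thresholds and mean centering are omitted.

```python
import numpy as np
from scipy.stats import spearmanr

def procrustes_align(A, B):
    """Orthogonal Procrustes: find orthogonal W minimizing ||A W - B||_F."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return A @ (U @ Vt)

def cosine_distance(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def rank_change(emb_a, emb_b, targets):
    """emb_a / emb_b: dict word -> vector for the two time periods (SGNS spaces)."""
    shared = [w for w in emb_a if w in emb_b]
    A = np.stack([emb_a[w] for w in shared])
    B = np.stack([emb_b[w] for w in shared])
    A_aligned = procrustes_align(A, B)
    idx = {w: i for i, w in enumerate(shared)}
    return {t: cosine_distance(A_aligned[idx[t]], B[idx[t]]) for t in targets if t in idx}

def evaluate(predicted, gold):
    """Spearman's rho between predicted change scores and DURel-style gold scores."""
    words = sorted(set(predicted) & set(gold))
    rho, _ = spearmanr([predicted[w] for w in words], [gold[w] for w in words])
    return rho
```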
[ "DTA18, DTA19", "Diachronic Usage Relatedness (DURel) gold standard data set" ]
1,908
qasper_e
en
null
b89bb41df843f9c8c292df5290d3e62e2ac62171991d5fbf
What is the size of this dataset?
Introduction Social media is now becoming an important real-time information source, especially during natural disasters and emergencies. It is now very common for traditional news media to frequently probe users and resort to social media platforms to obtain real-time developments of events. According to a recent survey by Pew Research Center, in 2017, more than two-thirds of Americans read some of their news on social media. Even for American people who are 50 or older, INLINEFORM0 of them report getting news from social media, which is INLINEFORM1 points higher than the number in 2016. Among all major social media sites, Twitter is most frequently used as a news source, with INLINEFORM2 of its users obtaining their news from Twitter. All these statistical facts suggest that understanding user-generated noisy social media text from Twitter is a significant task. In recent years, while several tools for core natural language understanding tasks involving syntactic and semantic analysis have been developed for noisy social media text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , there is little work on question answering or reading comprehension over social media, with the primary bottleneck being the lack of available datasets. We observe that recently proposed QA datasets usually focus on formal domains, e.g. CNN/DailyMail BIBREF4 and NewsQA BIBREF5 on news articles; SQuAD BIBREF6 and WikiMovies BIBREF7 that use Wikipedia. In this paper, we propose the first large-scale dataset for QA over social media data. Rather than naively obtaining tweets from Twitter using the Twitter API which can yield irrelevant tweets with no valuable information, we restrict ourselves only to tweets which have been used by journalists in news articles thus implicitly implying that such tweets contain useful and relevant information. To obtain such relevant tweets, we crawled thousands of news articles that include tweet quotations and then employed crowd-sourcing to elicit questions and answers based on these event-aligned tweets. Table TABREF3 gives an example from our TweetQA dataset. It shows that QA over tweets raises challenges not only because of the informal nature of oral-style texts (e.g. inferring the answer from multiple short sentences, like the phrase “so young” that forms an independent sentence in the example), but also from tweet-specific expressions (such as inferring that it is “Jay Sean” feeling sad about Paul's death because he posted the tweet). Furthermore, we show the distinctive nature of TweetQA by comparing the collected data with traditional QA datasets collected primarily from formal domains. In particular, we demonstrate empirically that three strong neural models which achieve good performance on formal data do not generalize well to social media data, bringing out challenges to developing QA systems that work well on social media domains. In summary, our contributions are: TweetQA In this section, we first describe the three-step data collection process of TweetQA: tweet crawling, question-answer writing and answer validation. Next, we define the specific task of TweetQA and discuss several evaluation metrics. To better understand the characteristics of the TweetQA task, we also include our analysis on the answer and question characteristics using a subset of QA pairs from the development set. Data Collection One major challenge of building a QA dataset on tweets is the sparsity of informative tweets. 
Many users write tweets to express their feelings or emotions about their personal lives. These tweets are generally uninformative and also very difficult to ask questions about. Given the linguistic variance of tweets, it is generally hard to directly distinguish those tweets from informative ones. In terms of this, rather than starting from Twitter API Search, we look into the archived snapshots of two major news websites (CNN, NBC), and then extract the tweet blocks that are embedded in the news articles. In order to get enough data, we first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. Note that another possible way to collect informative tweets is to download the tweets that are posted by the official Twitter accounts of news media. However, these tweets are often just the summaries of news articles, which are written in formal text. As our focus is to develop a dataset for QA on informal social media text, we do not consider this approach. After we extracted tweets from archived news articles, we observed that there is still a portion of tweets that have very simple semantic structures and thus are very difficult to raise meaningful questions. An example of such tweets can be like: “Wanted to share this today - @IAmSteveHarvey". This tweet is actually talking about an image attached to this tweet. Some other tweets with simple text structures may talk about an inserted link or even videos. To filter out these tweets that heavily rely on attached media to convey information, we utilize a state-of-the-art semantic role labeling model trained on CoNLL-2005 BIBREF15 to analyze the predicate-argument structure of the tweets collected from news articles and keep only the tweets with more than two labeled arguments. This filtering process also automatically filters out most of the short tweets. For the tweets collected from CNN, INLINEFORM0 of them were filtered via semantic role labeling. For tweets from NBC, INLINEFORM1 of the tweets were filtered. We then use Amazon Mechanical Turk to collect question-answer pairs for the filtered tweets. For each Human Intelligence Task (HIT), we ask the worker to read three tweets and write two question-answer pairs for each tweet. To ensure the quality, we require the workers to be located in major English speaking countries (i.e. Canada, US, and UK) and have an acceptance rate larger than INLINEFORM0 . Since we use tweets as context, lots of important information are contained in hashtags or even emojis. Instead of only showing the text to the workers, we use javascript to directly embed the whole tweet into each HIT. This gives workers the same experience as reading tweets via web browsers and help them to better compose questions. To avoid trivial questions that can be simply answered by superficial text matching methods or too challenging questions that require background knowledge. We explicitly state the following items in the HIT instructions for question writing: No Yes-no questions should be asked. The question should have at least five words. Videos, images or inserted links should not be considered. No background knowledge should be required to answer the question. To help the workers better follow the instructions, we also include a representative example showing both good and bad questions or answers in our instructions. Figure FIGREF14 shows the example we use to guide the workers. 
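The predicate-argument filter described above can be sketched as follows, assuming a semantic role labeler is available. The `srl_predict` callable and its BIO-style output format are hypothetical placeholders; only the argument-counting and filtering logic is illustrated, and whether modifier arguments (ARGM) should count is a simplification made here.

```python
from typing import Callable, Dict, List

def count_labeled_arguments(srl_output: Dict) -> int:
    """Count labeled argument spans across all predicates in one SRL analysis.

    Assumes the (hypothetical) labeler returns {"verbs": [{"tags": [...]}, ...]}
    with BIO tags such as "B-ARG0", "I-ARG1", "B-ARGM-TMP", "O".
    """
    n_args = 0
    for frame in srl_output.get("verbs", []):
        n_args += sum(1 for tag in frame.get("tags", []) if tag.startswith("B-ARG"))
    return n_args

def filter_informative_tweets(tweets: List[str],
                              srl_predict: Callable[[str], Dict],
                              min_args: int = 3) -> List[str]:
    """Keep tweets with more than two labeled arguments (i.e., at least three)."""
    return [t for t in tweets if count_labeled_arguments(srl_predict(t)) >= min_args]
```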
As for the answers, since the context we consider is relatively shorter than the context of previous datasets, we do not restrict the answers to be in the tweet, otherwise, the task may potentially be simplified as a classification problem. The workers are allowed to write their answers in their own words. We just require the answers to be brief and can be directly inferred from the tweets. After we retrieve the QA pairs from all HITs, we conduct further post-filtering to filter out the pairs from workers that obviously do not follow instructions. We remove QA pairs with yes/no answers. Questions with less than five words are also filtered out. This process filtered INLINEFORM0 of the QA pairs. The dataset now includes 10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs. The collected QA pairs will be directly available to the public, and we will provide a script to download the original tweets and detailed documentation on how we build our dataset. Also note that since we keep the original news article and news titles for each tweet, our dataset can also be used to explore more challenging generation tasks. Table TABREF19 shows the statistics of our current collection, and the frequency of different types of questions is shown in Table TABREF21 . All QA pairs were written by 492 individual workers. For the purposes of human performance evaluation and inter-annotator agreement checking, we launch a different set of HITs to ask workers to answer questions in the test and development set. The workers are shown with the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label the questions as “NA" if they think the questions are not answerable. We find that INLINEFORM0 of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is INLINEFORM1 ). Since the answers collected at this step and previous step are written by different workers, the answers can be written in different text forms even they are semantically equal to each other. For example, one answer can be “Hillary Clinton” while the other is “@HillaryClinton”. As it is not straightforward to automatically calculate the overall agreement, we manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that INLINEFORM2 of the answers pairs are semantically equivalent, INLINEFORM3 of them are partially equivalent (one of them is incomplete) and INLINEFORM4 are totally inconsistent. The answers collected at this step are also used to measure the human performance. We have 59 individual workers participated in this process. Task and Evaluation As described in the question-answer writing process, the answers in our dataset are different from those in some existing extractive datasets. Thus we consider the task of answer generation for TweetQA and we use several standard metrics for natural language generation to evaluate QA systems on our dataset, namely we consider BLEU-1 BIBREF16 , Meteor BIBREF17 and Rouge-L BIBREF18 in this paper. To evaluate machine systems, we compute the scores using both the original answer and validation answer as references. For human performance, we use the validation answers as generated ones and the original answers as references to calculate the scores. Analysis In this section, we analyze our dataset and outline the key properties that distinguish it from standard QA datasets like SQuAD BIBREF6 . 
First, our dataset is derived from social media text which can be quite informal and user-centric as opposed to SQuAD which is derived from Wikipedia and hence more formal in nature. We observe that the shared vocabulary between SQuAD and TweetQA is only INLINEFORM0 , suggesting a significant difference in their lexical content. Figure FIGREF25 shows the 1000 most distinctive words in each domain as extracted from SQuAD and TweetQA. Note the stark differences in the words seen in the TweetQA dataset, which include a large number of user accounts with a heavy tail. Examples include @realdonaldtrump, @jdsutter, @justinkirkland and #cnnworldcup, #goldenglobes. In contrast, the SQuAD dataset rarely has usernames or hashtags that are used to signify events or refer to the authors. It is also worth noting that the data collected from social media can not only capture events and developments in real-time but also capture individual opinions and thus requires reasoning related to the authorship of the content as is illustrated in Table TABREF3 . In addition, while SQuAD requires all answers to be spans from the given passage, we do not enforce any such restriction and answers can be free-form text. In fact, we observed that INLINEFORM1 of our QA pairs consists of answers which do not have an exact substring matching with their corresponding passages. All of the above distinguishing factors have implications to existing models which we analyze in upcoming sections. We conduct analysis on a subset of TweetQA to get a better understanding of the kind of reasoning skills that are required to answer these questions. We sample 150 questions from the development set, then manually label their reasoning categories. Table TABREF26 shows the analysis results. We use some of the categories in SQuAD BIBREF6 and also proposes some tweet-specific reasoning types. Our first observation is that almost half of the questions only require the ability to identify paraphrases. Although most of the “paraphrasing only” questions are considered as fairly easy questions, we find that a significant amount (about 3/4) of these questions are asked about event-related topics, such as information about “who did what to whom, when and where”. This is actually consistent with our motivation to create TweetQA, as we expect this dataset could be used to develop systems that automatically collect information about real-time events. Apart from these questions, there are also a group of questions that require understanding common sense, deep semantics (i.e. the answers cannot be derived from the literal meanings of the tweets), and relations of sentences (including co-reference resolution), which are also appeared in other RC datasets BIBREF6 . On the other hand, the TweetQA also has its unique properties. Specifically, a significant amount of questions require certain reasoning skills that are specific to social media data: [noitemsep] Understanding authorship: Since tweets are highly personal, it is critical to understand how questions/tweets related to the authors. Oral English & Tweet English: Tweets are often oral and informal. QA over tweets requires the understanding of common oral English. Our TweetQA also requires understanding some tweet-specific English, like conversation-style English. Understanding of user IDs & hashtags: Tweets often contains user IDs and hashtags, which are single special tokens. Understanding these special tokens is important to answer person- or event-related questions. 
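The lexical comparison mentioned above (shared vocabulary between TweetQA and SQuAD) can be approximated with simple set operations. Tokenization, lowercasing, and the exact overlap definition (intersection over union here) are assumptions, since the paper does not spell them out in this section.

```python
from collections import Counter
from typing import Iterable, List

def vocabulary(corpus: Iterable[List[str]], min_count: int = 1) -> set:
    counts = Counter(tok.lower() for doc in corpus for tok in doc)
    return {tok for tok, c in counts.items() if c >= min_count}

def shared_vocab_ratio(corpus_a, corpus_b, min_count: int = 1) -> float:
    """Fraction of the combined vocabulary that appears in both tokenized corpora."""
    va, vb = vocabulary(corpus_a, min_count), vocabulary(corpus_b, min_count)
    return len(va & vb) / len(va | vb)
```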
Experiments To show the challenge of TweetQA for existing approaches, we consider four representative methods as baselines. For data processing, we first remove the URLs in the tweets and then tokenize the QA pairs and tweets using NLTK. This process is consistent for all baselines. Query Matching Baseline We first consider a simple query matching baseline similar to the IR baseline in Kocisk2017TheNR. But instead of only considering several genres of spans as potential answers, we try to match the question with all possible spans in the tweet context and choose the span with the highest BLEU-1 score as the final answer, which follows the method and implementation of answer span selection for open-domain QA BIBREF19 . We include this baseline to show that TweetQA is a nontrivial task which cannot be easily solved with superficial text matching. Neural Baselines We then explore three typical neural models that perform well on existing formal-text datasets. One takes a generative perspective and learns to decode the answer conditioned on the question and context, while the others learns to extract a text span from the context that best answers the question. RNN-based encoder-decoder models BIBREF20 , BIBREF21 have been widely used for natural language generation tasks. Here we consider a recently proposed generative model BIBREF22 that first encodes the context and question into a multi-perspective memory via four different neural matching layers, then decodes the answer using an attention-based model equipped with both copy and coverage mechanisms. The model is trained on our dataset for 15 epochs and we choose the model parameters that achieve the best BLEU-1 score on the development set. Unlike the aforementioned generative model, the Bi-Directional Attention Flow (BiDAF) BIBREF23 network learns to directly predict the answer span in the context. BiDAF first utilizes multi-level embedding layers to encode both the question and context, then uses bi-directional attention flow to get a query-aware context representation, which is further modeled by an RNN layer to make the span predictions. Since our TweetQA does not have labeled answer spans as in SQuAD, we need to use the human-written answers to retrieve the answer-span labels for training. To get the approximate answer spans, we consider the same matching approach as in the query matching baseline. But instead of using questions to do matching, we use the human-written answers to get the spans that achieve the best BLEU-1 scores. This is another extractive RC model that benefits from the recent advance in pretrained general language encoders BIBREF24 , BIBREF25 . In our work, we select the BERT model BIBREF25 which has achieved the best performance on SQuAD. In our experiments, we use the PyTorch reimplementation of the uncased base model. The batch size is set as 12 and we fine-tune the model for 2 epochs with learning rate 3e-5. Overall Performance We test the performance of all baseline systems using the three generative metrics mentioned in Section SECREF22 . As shown in Table TABREF40 , there is a large performance gap between human performance and all baseline methods, including BERT, which has achieved superhuman performance on SQuAD. This confirms than TweetQA is more challenging than formal-test RC tasks. We also show the upper bound of the extractive models (denoted as Extract-Upper). 
In the upper bound method, the answers are defined as n-grams from the tweets that maximize the BLEU-1/METEOR/ROUGE-L compared to the annotated groundtruth. From the results, we can see that the BERT model still lags behind the upper bound significantly, showing great potential for future research. It is also interesting to see that the Human performance is slightly worse compared to the upper bound. This indicates (1) the difficulty of our problem also exists for human-beings and (2) for the answer verification process, the workers tend to also extract texts from tweets as answers. According to the comparison between the two non-pretraining baselines, our generative baseline yields better results than BiDAF. We believe this is largely due to the abstractive nature of our dataset, since the workers can sometimes write the answers using their own words. Performance Analysis over Human-Labeled Question Types To better understand the difficulty of the TweetQA task for current neural models, we analyze the decomposed model performance on the different kinds of questions that require different types of reasoning (we tested on the subset which has been used for the analysis in Table TABREF26 ). Table TABREF42 shows the results of the best performed non-pretraining and pretraining approach, i.e., the generative QA baseline and the fine-tuned BERT. Our full comparison including the BiDAF performance and evaluation on more metrics can be found in Appendix SECREF7 . Following previous RC research, we also include analysis on automatically-labeled question types in Appendix SECREF8 . As indicated by the results on METEOR and ROUGE-L (also indicated by a third metric, BLEU-1, as shown in Appendix SECREF7 ), both baselines perform worse on questions that require the understanding deep semantics and userID&hashtags. The former kind of questions also appear in other benchmarks and is known to be challenging for many current models. The second kind of questions is tweet-specific and is related to specific properties of social media data. Since both models are designed for formal-text passages and there is no special treatment for understanding user IDs and hashtags, the performance is severely limited on the questions requiring such reasoning abilities. We believe that good segmentation, disambiguation and linking tools developed by the social media community for processing the userIDs and hashtags will significantly help these question types. Besides the easy questions requiring mainly paraphrasing skill, we also find that the questions requiring the understanding of authorship and oral/tweet English habits are not very difficult. We think this is due to the reason that, except for these tweet-specific tokens, the rest parts of the questions are rather simple, which may require only simple reasoning skill (e.g. paraphrasing). Although BERT was demonstrated to be a powerful tool for reading comprehension, this is the first time a detailed analysis has been done on its reasoning skills. From the results, the huge improvement of BERT mainly comes from two types. The first is paraphrasing, which is not surprising because that a well pretrained language model is expected to be able to better encode sentences. Thus the derived embedding space could work better for sentence comparison. The second type is commonsense, which is consistent with the good performance of BERT BIBREF25 on SWAG BIBREF26 . 
We believe that this provides further evidence about the connection between large-scaled deep neural language model and certain kinds of commonsense. Conclusion We present the first dataset for QA on social media data by leveraging news media and crowdsourcing. The proposed dataset informs us of the distinctiveness of social media from formal domains in the context of QA. Specifically, we find that QA on social media requires systems to comprehend social media specific linguistic patterns like informality, hashtags, usernames, and authorship. These distinguishing linguistic factors bring up important problems for the research of QA that currently focuses on formal text. We see our dataset as a first step towards enabling not only a deeper understanding of natural language in social media but also rich applications that can extract essential real-time knowledge from social media. Full results of Performance Analysis over Human-Labeled Question Types Table TABREF45 gives our full evaluation on human annotated question types. Compared with the BiDAF model, one interesting observation is that the generative baseline gets much worse results on ambiguous questions. We conjecture that although these questions are meaningless, they still have many words that overlapped with the contexts. This can give BiDAF potential advantage over the generative baseline. Performance Analysis over Automatically-Labeled Question Types Besides the analysis on different reasoning types, we also look into the performance over questions with different first tokens in the development set, which provide us an automatic categorization of questions. According to the results in Table TABREF46 , the three neural baselines all perform the best on “Who” and “Where” questions, to which the answers are often named entities. Since the tweet contexts are short, there are only a small number of named entities to choose from, which could make the answer pattern easy to learn. On the other hand, the neural models fail to perform well on the “Why” questions, and the results of neural baselines are even worse than that of the matching baseline. We find that these questions generally have longer answer phrases than other types of questions, with the average answer length being 3.74 compared to 2.13 for any other types. Also, since all the answers are written by humans instead of just spans from the context, these abstractive answers can make it even harder for current models to handle. We also observe that when people write “Why” questions, they tend to copy word spans from the tweet, potentially making the task easier for the matching baseline.
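As a rough sketch of the span-matching idea used both for the query-matching baseline and for deriving approximate answer spans for BiDAF, the function below enumerates candidate spans of a tweet and keeps the one with the highest BLEU-1 score against a reference string (the question for the baseline, or the human-written answer when generating training spans). The span-length cap and the smoothing choice are assumptions, not the paper's exact implementation.

```python
from nltk.tokenize import word_tokenize
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

_smooth = SmoothingFunction().method1

def bleu1(reference_tokens, candidate_tokens):
    return sentence_bleu([reference_tokens], candidate_tokens,
                         weights=(1.0, 0, 0, 0), smoothing_function=_smooth)

def best_matching_span(tweet: str, reference: str, max_len: int = 10) -> str:
    """Return the tweet span with the highest BLEU-1 score against the reference."""
    tweet_toks, ref_toks = word_tokenize(tweet), word_tokenize(reference)
    best_span, best_score = "", -1.0
    for i in range(len(tweet_toks)):
        for j in range(i + 1, min(i + 1 + max_len, len(tweet_toks) + 1)):
            span = tweet_toks[i:j]
            score = bleu1(ref_toks, span)
            if score > best_score:
                best_span, best_score = " ".join(span), score
    return best_span
```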
[ "13,757", "10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs" ]
3,704
qasper_e
en
null
a7281359720bac697e8f2d18c8d460b238d46b17205a5f4a
What classifiers have been trained?
Introduction A large portion of the car-buying experience in the United States involves interactions at a car dealership BIBREF0, BIBREF1, BIBREF2. Traditionally, a car dealer listens and understands the needs of the client and helps them find what car is right based on their needs. With the advent of the internet, many potential car buyers take to the web to research cars before going to a dealership in person BIBREF0, BIBREF2. However, nearly 50% of customers bought a car at the dealership based on the sales representative's advice, not their own research BIBREF1, BIBREF2. Throughout this interaction the dealer is acting as a type of translator or classifier. The dealer takes a natural language input (e.g. “I need a fast, family friendly, reliable car under $20k”) and returns a list of suggestions. The dealer understands the ideas of “fast”, “family friendly”, and “reliable” and is able to come up with a reasonable recommendation based on this knowledge. In this paper we aim to create a system that can understand car-speak based on some natural language input (we want to recreate the dealer from above). But how do we prepare a proper training set for a Natural Language model? What model is best suited to this problem? Can this model take a human out of the car-buying process? To answer these questions, the remainder of this paper makes the following contributions: Defining “car-speak” and its role in the car-buying process. Appropriate training data for a Natural Language model. A model that is able to properly classify car-speak and return a car. We aim to accomplish these goals in a scientific manner, using real data and modern methods. Related Work There has been some work done in the field of car-sales and dealer interactions. However, this is the first work that specifically focuses on the Deloitte has published a report on the entire car-buying process BIBREF0. The report goes into great depth about the methods potential buyers use to find new cars to buy, and how they go about buying them. The report tells us that there are several unique phases that a potential buyer goes through before buying a car. Verhoef et al. looked at the specifics of dealer interaction and how dealers retain customers BIBREF3. Verhoef tells us how important dealers are to the car-buying process. He also explains how influential a dealer can be on what car the buyer purchases. Jeff Kershner compiled a series of statistics about Dealership Sales BIBREF1. These statistics focus on small social interactions BIBREF4 between the dealer and the buyer. Barley explains the increasing role of technology in the car-buying process BIBREF2. Barley tells us that users prefer to use technology/robots to find the cars they want to buy instead of going to a dealer, due the distrust towards sales representatives. What is Car-speak? When a potential buyer begins to identify their next car-purchase they begin with identifying their needs. These needs often come in the form of an abstract situation, for instance, “I need a car that goes really fast”. This could mean that they need a car with a V8 engine type or a car that has 500 horsepower, but the buyer does not know that, all they know is that they need a “fast” car. The term “fast” is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). 
However, we do not know exactly which attributes the term “fast” refers to. The use of car-speak is present throughout the car-buying process. It begins in the Research Phase where buyers identify their needs BIBREF0. When the buyer goes to a dealership to buy a car, they communicate with the dealer in similar car-speak BIBREF2 and convey their needs to the sales representative. Finally, the sales representative uses their internal classifier to translate this car-speak into actual physical attributes (e.g. `fast' $ \longrightarrow $ `700 horsepower & a sleek form factor') and offers a car to the buyer. Understanding car-speak is not a trivial task. Figure FIGREF4 shows two cars that have high top speeds, however both cars may not be considered “fast”. We need to mine the ideas that people have about cars in order to determine which cars are “fast” and which cars are not. Gathering Car-speak Data We aim to curate a data set of car-speak in order to train a model properly. However, there are a few challenges that present themselves: What is a good source of car-speak? How can we acquire the data? How can we be sure the data set is relevant? What is a good source of car-speak? We find plenty of car-speak in car reviews. Table TABREF5 provides excerpts from reviews with the car-speak terms bolded. Car reviews often describe cars in an abstract manner, which makes the review more useful for car-buyers. The reviews are often also about specific use-cases for each car (e.g. using the car to tow a trailer), so they capture all possible aspects of a car. The reviews are each written about a specific car, so we are able to map car-speak to a specific car model. We choose the reviews from the U.S. News & World Report because they have easily accessible full-length reviews about every car that has been sold in the United States since 2006 BIBREF5. How can we acquire the data? We can acquire this data using modern web-scraping tools such as beautiful-soup. The data is publicly available on https://cars.usnews.com/cars-trucks BIBREF5. These reviews also include a scorecard and justification of their reviews. How can we be sure the data set is relevant? On average vehicles on United States roads are 11.6 years old, making the average manufacturing year 2006-2007 BIBREF6, BIBREF7. In order to have a relevant data set we gather all of the available reviews for car models made between the years 2000 and 2018. Translating Car-Speak Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them. Translating Car-Speak ::: Filtering the Data We would like to be able to represent each car with the most relevant car-speak terms. We can do this by filtering each review using the NLTK library BIBREF8, only retaining the most relevant words. First we token-ize each review and then keep only the nouns and adjectives from each review since they are the most salient parts of speech BIBREF9. This leaves us with $10,867$ words across all reviews. Figure FIGREF6 shows the frequency of the top 20 words that remain. Words such as “saftey” and “luxury” are among the top words used in reviews. 
These words are very good examples of car-speak. Both words are abstract descriptions of cars, but both have physical characteristics that are associated with them as we discussed in Section SECREF3. Translating Car-Speak ::: TF-IDF So far we have compiled the most relevant terms in from the reviews. We now need to weight these terms for each review, so that we know the car-speak terms are most associated with a car. Using TF-IDF (Term Frequency-Inverse Document Frequency) has been used as a reliable metric for finding the relevant terms in a document BIBREF10. We represent each review as a vector of TF-IDF scores for each word in the review. The length of this vector is $10,867$. We label each review vector with the car it reviews. We ignore the year of the car being reviewed and focus specifically on the model (i.e Acura ILX, not 2013 Acura ILX). This is because there a single model of car generally retains the same characteristics over time BIBREF11, BIBREF12. Translating Car-Speak ::: Classification Experiments We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13. In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set. Conclusion & Future Work In this paper we aim to provide an introductory understanding of car-speak and a way to automate car dealers at dealerships. We first provide a definition of “car-speak” in Section SECREF3. We explore what constitutes car-speak and how to identify car-speak. We also gather a data set of car-speak to use for exploration and training purposes. This data set id full of vehicle reviews from U.S. News BIBREF5. These reviews provide a reasonable set of car-speak data that we can study. Finally, we create and test several classifiers that are trained on the data we gathered. While these classifiers did not perform particularly well, they provide a good starting point for future work on this subject. In the future we plan to use more complex models to attempt to understand car-speak. We also would like to test our classifiers on user-provided natural language queries. This would be a more practical evaluation of our classification. It would also satisfy the need for a computer system that understands car-speak.
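A minimal sketch of the described pipeline using NLTK and scikit-learn: keep nouns and adjectives, build TF-IDF review vectors, and score a K-Nearest-Neighbors classifier with shuffled 4-fold cross-validation. The value of k and the vectorizer settings are assumptions rather than the paper's reported configuration.

```python
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def nouns_and_adjectives(review: str) -> str:
    """Keep only nouns (NN*) and adjectives (JJ*), as in the filtering step above."""
    tagged = nltk.pos_tag(nltk.word_tokenize(review))
    return " ".join(w.lower() for w, tag in tagged if tag.startswith(("NN", "JJ")))

def evaluate_knn(reviews, labels, n_neighbors=5, folds=4):
    """reviews: raw review texts; labels: the car model each review describes."""
    filtered = [nouns_and_adjectives(r) for r in reviews]
    X = TfidfVectorizer().fit_transform(filtered)
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)
    cv = KFold(n_splits=folds, shuffle=True, random_state=0)
    f1_micro = cross_val_score(clf, X, labels, cv=cv, scoring="f1_micro").mean()
    f1_macro = cross_val_score(clf, X, labels, cv=cv, scoring="f1_macro").mean()
    return f1_micro, f1_macro
```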
[ "KNN\nRF\nSVM\nMLP", " K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), Multi-layer Perceptron (MLP)" ]
1,639
qasper_e
en
null
56600b6a10fb297db2b0c15a21c2d49f7a0a90f927387e3b
How do they obtain the new context representation?
Introduction Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 . This study investigates two different types of NNs: recurrent neural networks (RNNs) and convolutional neural networks (CNNs) as well as their combination. We make the following contributions: (1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part. (2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization which has not been investigated in the literature for relation classification before. (3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset. Related Work In 2010, manually annotated data for relation classification was released in the context of a SemEval shared task BIBREF8 . Shared task participants used, i.a., support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 . Recently, their results on this data set were outperformed by applying NNs BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . zeng2014 built a CNN based only on the context between the relation arguments and extended it with several lexical features. kim2014 and others used convolutional filters of different sizes for CNNs. nguyen applied this to relation classification and obtained improvements over single filter sizes. deSantos2015 replaced the softmax layer of the CNN with a ranking layer. They showed improvements and published the best result so far on the SemEval dataset, to our knowledge. socher used another NN architecture for relation classification: recursive neural networks that built recursive sentence representations based on syntactic parsing. In contrast, zhang investigated a temporal structured RNN with only words as input. They used a bi-directional model with a pooling layer on top. Convolutional Neural Networks (CNN) CNNs perform a discrete convolution on an input matrix with a set of different filters. For NLP tasks, the input matrix represents a sentence: Each column of the matrix stores the word embedding of the corresponding word. By applying a filter with a width of, e.g., three columns, three neighboring words (trigram) are convolved. Afterwards, the results of the convolution are pooled. Following collobertWeston, we perform max-pooling which extracts the maximum value for each filter and, thus, the most informative n-gram for the following steps. Finally, the resulting values are concatenated and used for classifying the relation expressed in the sentence. 
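A compact PyTorch sketch of the convolution and max-pooling step just described, using several filter widths whose pooled outputs are concatenated into a sentence representation. The dimensions and the classification head are placeholders; the paper's model additionally uses the extended middle context introduced in the next section.

```python
import torch
import torch.nn as nn

class MultiWindowCNN(nn.Module):
    """Convolve word embeddings with filters of widths 2-5, max-pool, concatenate."""
    def __init__(self, emb_dim=300, n_filters=300, windows=(2, 3, 4, 5), n_classes=19):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, kernel_size=w) for w in windows)
        self.out = nn.Linear(n_filters * len(windows), n_classes)

    def forward(self, emb):                     # emb: (batch, seq_len, emb_dim)
        x = emb.transpose(1, 2)                 # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.out(torch.cat(pooled, dim=1))
```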
Input: Extended Middle Context One of our contributions is a new input representation especially designed for relation classification. The contexts are split into three disjoint regions based on the two relation arguments: the left context, the middle context and the right context. Since in most cases the middle context contains the most relevant information for the relation, we want to focus on it but not ignore the other regions completely. Hence, we propose to use two contexts: (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. Due to the repetition of the middle context, we force the network to pay special attention to it. The two contexts are processed by two independent convolutional and max-pooling layers. After pooling, the results are concatenated to form the sentence representation. Figure FIGREF3 depicts this procedure. It shows an examplary sentence: “He had chest pain and <e1>headaches</e1> from <e2>mold</e2> in the bedroom.” If we only considered the middle context “from”, the network might be tempted to predict a relation like Entity-Origin(e1,e2). However, by also taking the left and right context into account, the model can detect the relation Cause-Effect(e2,e1). While this could also be achieved by integrating the whole context into the model, using the whole context can have disadvantages for longer sentences: The max pooling step can easily choose a value from a part of the sentence which is far away from the mention of the relation. With splitting the context into two parts, we reduce this danger. Repeating the middle context increases the chance for the max pooling step to pick a value from the middle context. Convolutional Layer Following previous work (e.g., BIBREF5 , BIBREF6 ), we use 2D filters spanning all embedding dimensions. After convolution, a max pooling operation is applied that stores only the highest activation of each filter. We apply filters with different window sizes 2-5 (multi-windows) as in BIBREF5 , i.e. spanning a different number of input words. Recurrent Neural Networks (RNN) Traditional RNNs consist of an input vector, a history vector and an output vector. Based on the representation of the current input word and the previous history vector, a new history is computed. Then, an output is predicted (e.g., using a softmax layer). In contrast to most traditional RNN architectures, we use the RNN for sentence modeling, i.e., we predict an output vector only after processing the whole sentence and not after each word. Training is performed using backpropagation through time BIBREF9 which unfolds the recurrent computations of the history vector for a certain number of time steps. To avoid exploding gradients, we use gradient clipping with a threshold of 10 BIBREF10 . Input of the RNNs Initial experiments showed that using trigrams as input instead of single words led to superior results. Hence, at timestep INLINEFORM0 we do not only give word INLINEFORM1 to the model but the trigram INLINEFORM2 by concatenating the corresponding word embeddings. Connectionist Bi-directional RNNs Especially for relation classification, the processing of the relation arguments might be easier with knowledge of the succeeding words. Therefore in bi-directional RNNs, not only a history vector of word INLINEFORM0 is regarded but also a future vector. 
This leads to the following conditioned probability for the history INLINEFORM1 at time step INLINEFORM2 : DISPLAYFORM0 Thus, the network can be split into three parts: a forward pass which processes the original sentence word by word (Equation EQREF6 ); a backward pass which processes the reversed sentence word by word (Equation ); and a combination of both (Equation ). All three parts are trained jointly. This is also depicted in Figure FIGREF7 . Combining forward and backward pass by adding their hidden layer is similar to BIBREF7 . We, however, also add a connection to the previous combined hidden layer with weight INLINEFORM0 to be able to include all intermediate hidden layers into the final decision of the network (see Equation ). We call this “connectionist bi-directional RNN”. In our experiments, we compare this RNN with uni-directional RNNs and bi-directional RNNs without additional hidden layer connections. Word Representations Words are represented by concatenated vectors: a word embedding and a position feature vector. Pretrained word embeddings. In this study, we used the word2vec toolkit BIBREF11 to train embeddings on an English Wikipedia from May 2014. We only considered words appearing more than 100 times and added a special PADDING token for convolution. This results in an embedding training text of about 485,000 terms and INLINEFORM0 tokens. During model training, the embeddings are updated. Position features. We incorporate randomly initialized position embeddings similar to zeng2014, nguyen and deSantos2015. In our RNN experiments, we investigate different possibilities of integrating position information: position embeddings, position embeddings with entity presence flags (flags indicating whether the current word is one of the relation arguments), and position indicators BIBREF7 . Objective Function: Ranking Loss Ranking. We applied the ranking loss function proposed in deSantos2015 to train our models. It maximizes the distance between the true label INLINEFORM0 and the best competitive label INLINEFORM1 given a data point INLINEFORM2 . The objective function is DISPLAYFORM0 with INLINEFORM0 and INLINEFORM1 being the scores for the classes INLINEFORM2 and INLINEFORM3 respectively. The parameter INLINEFORM4 controls the penalization of the prediction errors and INLINEFORM5 and INLINEFORM6 are margins for the correct and incorrect classes. Following deSantos2015, we set INLINEFORM7 . We do not learn a pattern for the class Other but increase its difference to the best competitive label by using only the second summand in Equation EQREF10 during training. Experiments and Results We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. For evaluation, we applied the official scoring script and report the macro F1 score which also served as the official result of the shared task. RNN and CNN models were implemented with theano BIBREF12 , BIBREF13 . For all our models, we use L2 regularization with a weight of 0.0001. For CNN training, we use mini batches of 25 training examples while we perform stochastic gradient descent for the RNN. The initial learning rates are 0.2 for the CNN and 0.01 for the RNN. We train the models for 10 (CNN) and 50 (RNN) epochs without early stopping. 
As activation function, we apply tanh for the CNN and capped ReLU for the RNN. For tuning the hyperparameters, we split the training data into two parts: 6.5k (training) and 1.5k (development) sentences. We also tuned the learning rate schedule on dev. Beside of training single models, we also report ensemble results for which we combined the presented single models with a voting process. Performance of CNNs As a baseline system, we implemented a CNN similar to the one described by zeng2014. It consists of a standard convolutional layer with filters with only one window size, followed by a softmax layer. As input it uses the middle context. In contrast to zeng2014, our CNN does not have an additional fully connected hidden layer. Therefore, we increased the number of convolutional filters to 1200 to keep the number of parameters comparable. With this, we obtain a baseline result of 73.0. After including 5 dimensional position features, the performance was improved to 78.6 (comparable to 78.9 as reported by zeng2014 without linguistic features). In the next step, we investigate how this result changes if we successively add further features to our CNN: multi-windows for convolution (window sizes: 2,3,4,5 and 300 feature maps each), ranking layer instead of softmax and our proposed extended middle context. Table TABREF12 shows the results. Note that all numbers are produced by CNNs with a comparable number of parameters. We also report F1 for increasing the word embedding dimensionality from 50 to 400. The position embedding dimensionality is 5 in combination with 50 dimensional word embeddings and 35 with 400 dimensional word embeddings. Our results show that especially the ranking layer and the embedding size have an important impact on the performance. Performance of RNNs As a baseline for the RNN models, we apply a uni-directional RNN which predicts the relation after processing the whole sentence. With this model, we achieve an F1 score of 61.2 on the SemEval test set. Afterwards, we investigate the impact of different position features on the performance of uni-directional RNNs (position embeddings, position embeddings concatenated with a flag indicating whether the current word is an entity or not, and position indicators BIBREF7 ). The results indicate that position indicators (i.e. artificial words that indicate the entity presence) perform the best on the SemEval data. We achieve an F1 score of 73.4 with them. However, the difference to using position embeddings with entity flags is not statistically significant. Similar to our CNN experiments, we successively vary the RNN models by using bi-directionality, by adding connections between the hidden layers (“connectionist”), by applying ranking instead of softmax to predict the relation and by increasing the word embedding dimension to 400. The results in Table TABREF14 show that all of these variations lead to statistically significant improvements. Especially the additional hidden layer connections and the integration of the ranking layer have a large impact on the performance. Combination of CNNs and RNNs Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly. The combination achieves an F1 score of 84.9 which is better than the performance of the two NN types alone. 
It, thus, confirms our assumption that the networks provide complementary information: while the RNN computes a weighted combination of all words in the sentence, the CNN extracts the most informative n-grams for the relation and only considers their resulting activations. Comparison with State of the Art Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features. Conclusion In this paper, we investigated different features and architectural choices for convolutional and recurrent neural networks for relation classification without using any linguistic features. For convolutional neural networks, we presented a new context representation for relation classification. Furthermore, we introduced connectionist recurrent neural networks for sentence classification tasks and performed the first experiments with ranking recurrent neural networks. Finally, we showed that even a simple combination of convolutional and recurrent neural networks improved results. With our neural models, we achieved new state-of-the-art results on the SemEval 2010 task 8 benchmark data. Acknowledgments Heike Adel is a recipient of the Google European Doctoral Fellowship in Natural Language Processing and this research is supported by this fellowship. This research was also supported by Deutsche Forschungsgemeinschaft: grant SCHU 2246/4-2.
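For concreteness, the extended-middle-context splitting proposed in this paper amounts to simple slicing over token positions: form (left context + e1 + middle context) and (middle context + e2 + right context), which then feed two independent convolution and pooling stacks. The helper below only illustrates that splitting step; the span indices are assumed to be available from the entity markup.

```python
from typing import List, Tuple

def extended_middle_context(tokens: List[str],
                            e1_span: Tuple[int, int],
                            e2_span: Tuple[int, int]) -> Tuple[List[str], List[str]]:
    """Split a sentence into the two overlapping contexts used by the CNN.

    e1_span / e2_span are (start, end) token indices (end exclusive), e1 before e2.
    """
    left   = tokens[:e1_span[0]]
    e1     = tokens[e1_span[0]:e1_span[1]]
    middle = tokens[e1_span[1]:e2_span[0]]
    e2     = tokens[e2_span[0]:e2_span[1]]
    right  = tokens[e2_span[1]:]
    context_1 = left + e1 + middle      # left context, e1, middle context
    context_2 = middle + e2 + right     # middle context, e2, right context
    return context_1, context_2

# Example from the paper: "He had chest pain and headaches from mold in the bedroom ."
# extended_middle_context(tokens, (5, 6), (7, 8)) yields
# ("He had chest pain and headaches from", "from mold in the bedroom .") as token lists.
```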
[ "They use two independent convolutional and max-pooling layers on (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. They concatenated the two results after pooling to get the new context representation." ]
2,435
qasper_e
en
null
ab031ab8495d8200afcf794d5368ed54353ec254b176d190
What is specific to multi-granularity and multi-tasking neural architecture design?
Introduction In the age of information dissemination without quality control, it has enabled malicious users to spread misinformation via social media and aim individual users with propaganda campaigns to achieve political and financial gains as well as advance a specific agenda. Often disinformation is complied in the two major forms: fake news and propaganda, where they differ in the sense that the propaganda is possibly built upon true information (e.g., biased, loaded language, repetition, etc.). Prior works BIBREF0, BIBREF1, BIBREF2 in detecting propaganda have focused primarily at document level, typically labeling all articles from a propagandistic news outlet as propaganda and thus, often non-propagandistic articles from the outlet are mislabeled. To this end, EMNLP19DaSanMartino focuses on analyzing the use of propaganda and detecting specific propagandistic techniques in news articles at sentence and fragment level, respectively and thus, promotes explainable AI. For instance, the following text is a propaganda of type `slogan'. Trump tweeted: $\underbrace{\text{`}`{\texttt {BUILD THE WALL!}"}}_{\text{slogan}}$ Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s). Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employed different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout, topics, etc. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in FLC and SLC tasks, respectively. System Description ::: Linguistic, Layout and Topical Features Some of the propaganda techniques BIBREF3 involve word and phrases that express strong emotional implications, exaggeration, minimization, doubt, national feeling, labeling , stereotyping, etc. This inspires us in extracting different features (Table TABREF1) including the complexity of text, sentiment, emotion, lexical (POS, NER, etc.), layout, etc. To further investigate, we use topical features (e.g., document-topic proportion) BIBREF4, BIBREF5, BIBREF6 at sentence and document levels in order to determine irrelevant themes, if introduced to the issue being discussed (e.g., Red Herring). For word and sentence representations, we use pre-trained vectors from FastText BIBREF7 and BERT BIBREF8. System Description ::: Sentence-level Propaganda Detection Figure FIGREF2 (left) describes the three components of our system for SLC task: features, classifiers and ensemble. The arrows from features-to-classifier indicate that we investigate linguistic, layout and topical features in the two binary classifiers: LogisticRegression and CNN. 
For CNN, we follow the architecture of DBLP:conf/emnlp/Kim14 for sentence-level classification, initializing the word vectors with FastText or BERT. We concatenate the features in the last hidden layer before classification. One of our strong classifiers is BERT, which has achieved state-of-the-art performance on multiple NLP benchmarks. Following DBLP:conf/naacl/DevlinCLT19, we fine-tune BERT for binary classification, initializing it with a pre-trained model (i.e., BERT-base, Cased). Additionally, we apply a decision function such that a sentence is tagged as propaganda if the classifier's prediction probability is greater than a threshold ($\tau $). We relax the binary decision boundary to boost recall, similar to pankajgupta:CrossRE2019. Ensemble of Logistic Regression, CNN and BERT: In the final component, we collect the predictions (i.e., propaganda labels) for each sentence from the three ($\mathcal {M}=3$) classifiers, thus obtaining $\mathcal {M}$ predictions per sentence. We explore two ensemble strategies (Table TABREF1): majority-voting and relax-voting, which boost precision and recall, respectively. System Description ::: Fragment-level Propaganda Detection Figure FIGREF2 (right) describes our system for the FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\_e$), character embeddings ($c\_e$) and token-level features ($t\_f$) such as polarity, POS, NER, etc. (2) LSTM-CRF+Multi-grain, which jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add a binary sentence classification loss to the sequence tagging loss, weighted by a factor of $\alpha $. (3) LSTM-CRF+Multi-task, which performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification). Ensemble of Multi-grain, Multi-task LSTM-CRF with BERT: Here, we build an ensemble by considering the propagandistic fragments (and their types) from each of the sequence taggers. In doing so, we first perform majority voting at the fragment level for fragments whose spans exactly overlap. In the case of non-overlapping fragments, we keep all of them. However, when spans overlap (with the same label), we keep the fragment with the largest span. Experiments and Evaluation Data: While the SLC task is binary, the FLC task covers 18 propaganda techniques BIBREF3. We split (80-20%) the annotated corpus into 5 folds and 3 folds for the SLC and FLC tasks, respectively. The development set of each of the folds is denoted dev (internal), while the un-annotated corpus used in the leaderboard comparisons is denoted dev (external). We remove empty and single-token sentences after tokenization. Experimental Setup: We use the PyTorch framework for the pre-trained BERT model (BERT-base, Cased), fine-tuned for the SLC task. In the multi-granularity loss, we set $\alpha = 0.1$ for sentence classification based on dev (internal, fold1) scores. We use the BIO tagging scheme of NER in the FLC task. For CNN, we follow DBLP:conf/emnlp/Kim14 with filter sizes of [2, 3, 4, 5, 6], 128 filters and a batch size of 16. We compute binary-F1 and macro-F1 BIBREF12 in SLC and FLC, respectively, on dev (internal). Experiments and Evaluation ::: Results: Sentence-Level Propaganda Table TABREF10 shows the scores on dev (internal and external) for the SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform the TF-IDF vector representation.
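The decision function with threshold $\tau$ and the two ensemble schemes can be sketched as follows. The exact relax-voting rule is not spelled out above, so the minority-agreement fraction below is an assumption.

```python
# Hedged sketch of the decision function and ensemble schemes; tau and the
# relax-voting fraction are assumptions, not the authors' exact settings.
from typing import List

def tag_sentence(prob_propaganda: float, tau: float = 0.35) -> int:
    """Relax the binary decision boundary: tag as propaganda when the
    classifier's probability exceeds tau (tau < 0.5 boosts recall)."""
    return int(prob_propaganda >= tau)

def majority_voting(predictions: List[int]) -> int:
    """Propaganda only if more than half of the classifiers agree
    (tends to boost precision)."""
    return int(sum(predictions) > len(predictions) / 2)

def relax_voting(predictions: List[int], min_fraction: float = 1 / 3) -> int:
    """Assumed relax rule: propaganda if at least `min_fraction` of the
    classifiers predict it (tends to boost recall)."""
    return int(sum(predictions) >= min_fraction * len(predictions))

# Example with M = 3 classifiers (Logistic Regression, CNN, BERT):
probs = [0.42, 0.28, 0.55]
votes = [tag_sentence(p) for p in probs]      # -> [1, 0, 1]
print(majority_voting(votes), relax_voting(votes))
```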
In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\tau \ge 0.35$ is found optimal. We choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine-similarity is greater than $\lambda \in \lbrace .99, .95\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess. Finally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at 4th position. Experiments and Evaluation ::: Results: Fragment-Level Propaganda Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external). Finally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position. Conclusion and Future Work Our system (Team: MIC-CIS) explores different neural architectures (CNN, BERT and LSTM-CRF) with linguistic, layout and topical features to address the tasks of fine-grained propaganda detection. We have demonstrated gains in performance due to the features, ensemble schemes, multi-tasking and multi-granularity architectures. Compared to the other participating systems, our submissions are ranked 3rd and 4th in FLC and SLC tasks, respectively. In future, we would like to enrich BERT models with linguistic, layout and topical features during their fine-tuning. Further, we would also be interested in understanding and analyzing the neural network learning, i.e., extracting salient fragments (or key-phrases) in the sentence that generate propaganda, similar to pankajgupta:2018LISA in order to promote explainable AI.
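A minimal sketch of the repetition post-processing step described above: a sentence is re-labeled as propaganda when its embedding is nearly identical (cosine similarity above $\lambda$) to one of the preceding $w=10$ sentence vectors in the same document. The sentence embeddings and initial predictions are assumed inputs.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def postprocess_repetition(sent_vecs, labels, w=10, lam=0.95):
    """sent_vecs: sentence embeddings for one document (e.g. BERTSentEmb);
    labels: 0/1 propaganda predictions in the same order."""
    out = list(labels)
    for i in range(len(sent_vecs)):
        window = sent_vecs[max(0, i - w):i]
        if any(cosine(sent_vecs[i], v) > lam for v in window):
            out[i] = 1  # repetition => propaganda
    return out

vecs = [np.random.randn(8) for _ in range(5)]
vecs.append(vecs[2] + 1e-6 * np.random.randn(8))   # near-duplicate of sentence 2
print(postprocess_repetition(vecs, [0, 0, 1, 0, 0, 0]))
```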
[ "An output layer for each task", "Multi-tasking is addressed by neural sequence tagger based on LSTM-CRF and linguistic features, while multi-granularity is addressed by ensemble of LSTM-CRF and BERT." ]
1,514
qasper_e
en
null
aefe71d17866995c454d51362893d2fb48de1a3d55d3cdb0
What is the CORD-19 dataset?
Introduction Coronavirus disease 2019 (COVID-19) is an infectious disease that has affected more than one million individuals all over the world and caused more than 55,000 deaths, as of April 3 in 2020. The science community has been working very actively to understand this new disease and make diagnosis and treatment guidelines based on the findings. One major stream of efforts are focused on discovering the correlation between radiological findings in the lung areas and COVID-19. There have been several works BIBREF0, BIBREF1 publishing such results. However, existing studies are mostly conducted separately by different hospitals and medical institutes. Due to geographic affinity, the populations served by different hospitals have different genetic, social, and ethnic characteristics. As a result, the radiological findings from COVID-19 patient cases in different populations are different. This population bias incurs inconsistent or even conflicting conclusions regarding the correlation between radiological findings and COVID-19. As a result, medical professionals cannot make informed decisions on how to use radiological findings to guide diagnosis and treatment of COVID-19. We aim to address this issue. Our research goal is to develop natural language processing methods to collectively analyze the study results reported by many hospitals and medical institutes all over the world, reconcile these results, and make a holistic and unbiased conclusion regarding the correlation between radiological findings and COVID-19. Specifically, we take the CORD-19 dataset BIBREF2, which contains over 45,000 scholarly articles, including over 33,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. We develop sentence classification methods to identify all sentences narrating radiological findings from COVID-19. Then constituent parsing is utilized to identify all noun phrases from these sentences and these noun phrases contain abnormalities, lesions, diseases identified by radiology imaging such as X-ray and computed tomography (CT). We calculate the frequency of these noun phrases and select those with top frequencies for medical professionals to further investigate. Since these clinical entities are aggregated from a number of hospitals all over the world, the population bias is largely mitigated and the conclusions are more objective and universally informative. From the CORD-19 dataset, our method successfully discovers a set of clinical findings that are closely related with COVID-19. The major contributions of this paper include: We develop natural language processing methods to perform unbiased study of the correlation between radiological findings and COVID-19. We develop a bootstrapping approach to effectively train a sentence classifier with light-weight manual annotation effort. The sentence classifier is used to extract radiological findings from a vast amount of literature. We conduct experiments to verify the effectiveness of our method. From the CORD-19 dataset, our method successfully discovers a set of clinical findings that are closely related with COVID-19. The rest of the paper is organized as follows. In Section 2, we introduce the data. Section 3 presents the method. Section 4 gives experimental results. Section 5 concludes the paper. Dataset We used the COVID-19 Open Research Dataset (CORD-19) BIBREF2 for our study. In response to the COVID-19 pandemic, the White House and a coalition of research groups prepared the CORD-19 dataset. 
It contains over 45,000 scholarly articles, including over 33,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. These articles are contributed by hospitals and medical institutes all over the world. Since the outbreak of COVID-19 is after November 2019, we select articles published after November 2019 to study, which include a total of 2081 articles and about 360000 sentences. Many articles report the radiological findings related to COVID-19. Table TABREF4 shows some examples. Methods Our goal is to develop natural language processing (NLP) methods to analyze a large collection of COVID-19 literature and discover unbiased and universally informative correlation between radiological findings and COVID-19. To achieve this goal, we need to address two technical challenges. First, in the large collection of COVID-19 literature, only a small part of sentences are about radiological findings. It is time-consuming to manually identify these sentences. Simple methods such as keyword-based retrieval will falsely retrieve sentences that are not about radiological findings and miss sentences that are about radiological findings. How can we develop NLP methods to precisely and comprehensively extract sentences containing radiological findings with minimum human annotation? Second, given the extracted sentences, they are still highly unstructured, which are difficult for medical professionals to digest and index. How can we further process these sentences into structured information that is more concise and easy to use? To address the first challenge, we develop a sentence classifier to judge whether a sentence contains radiological findings. To minimize manual-labeling overhead, we propose easy ways of constructing positive and negative training examples, develop a bootstrapping approach to mine hard examples, and use hard examples to re-train the classifier for reducing false positives. To address the second challenge, we use constituent parsing to recognize noun phrases which contain critical medical information (e.g., lesions, abnormalities, diseases) and are easy to index and digest. We select noun phrases with top frequencies for medical professionals to further investigate. Methods ::: Extracting Sentences Containing Radiological Findings In this section, we develop a sentence-level classifier to determine whether a sentence contains radiological findings. To build such a classifier, we need to create positive and negative training sentences, without labor-intensive annotations. To obtain positive examples, we resort to the MedPix database, which contains radiology reports narrating radiological findings. MedPix is an open-access online database of medical images, teaching cases, and clinical topics. It contains more than 9000 topics, 59000 images from 12000 patient cases. We selected diagnostic reports for CT images and used sentences in the reports as positive samples. To obtain negative sentences, we randomly sample some sentences from the articles and quickly screen them to ensure that they are not about radiological findings. Since most sentences in the literature are not about radiological findings, a random sampling can almost ensure the select sentences are negative. A manual screening is conducted to further ensure this and the screening effort is not heavy. Given these positive and negative training sentences, we use them to train a sentence classifier which predicts whether a sentence is about the radiological findings of COVID-19. 
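A hedged sketch of the data preparation described above: selecting post-November-2019 articles (assuming a metadata table with a publish_time column, as in the CORD-19 releases) and pairing MedPix report sentences as positives with randomly sampled, screened CORD-19 sentences as negatives. The toy rows and example sentences are purely illustrative.

```python
import random
import pandas as pd

# toy rows standing in for the CORD-19 metadata table (publish_time assumed)
meta = pd.DataFrame({
    "title": ["Early CT findings in COVID-19", "A 2015 MERS case series"],
    "publish_time": ["2020-02-20", "2015-06-01"],
})
meta["publish_time"] = pd.to_datetime(meta["publish_time"], errors="coerce")
recent = meta[meta["publish_time"] >= "2019-11-01"]     # keep post-Nov-2019 articles

medpix_sentences = ["There are bilateral ground-glass opacities."]    # positives (label 1)
cord19_sentences = ["The protein structure was resolved at 2.9 A.",
                    "Samples were stored at -80 degrees."]            # screened negatives (label 0)
random.seed(0)
negatives = random.sample(cord19_sentences, k=len(cord19_sentences))
train = [(s, 1) for s in medpix_sentences] + [(s, 0) for s in negatives]
print(len(recent), len(train))
```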
We use the Bidirectional Encoder Representations from Transformers (BERT) BIBREF3 model for sentence classification. BERT is a neural language model that learns contextual representations of words and sentences. BERT pretrains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. To apply the pretrained BERT to a downstream task such as sentence classification, one can add an additional layer on top of the BERT architecture and train this newly-added layer using the labeled data in the target task. In our case, similar to BIBREF4, we pretrain the BERT model on a vast amount of biomedical literature to obtain semantic representations of words. A linear layer is added to the output of BERT for predicting whether this sentence is positive (containing radiological finding) or negative. The architecture and hyperparameters of the BERT model used in our method are the same as those in BIBREF4. Figure FIGREF7 shows the architecture of the classification model. When applying this trained sentence classifier to unseen sentences, we found that it yields a lot of false positives: many sentences irrelevant to radiological findings of COVID-19 are predicted as being relevant. To solve this problem, we iteratively perform hard example mining in a bootstrapping way and use these hard examples to retrain the classifier for reducing false positives. At iteration $t$, given the classifier $C_t$, we apply it to make predictions on unseen sentences. Each sentence is associated with a prediction score where a larger score indicates that this sentence is more likely to be positive. We rank these sentences in descending order of their prediction scores. Then for the top-K sentences with the largest prediction scores, we read them and label each of them as either being positive or negative. Then we add the labeled pairs to the training set and re-train the classifier and get $C_{t+1}$. This procedure is repeated again to identify new false positives and update the classifier using the new false positives. Methods ::: Extracting Noun Phrases The extracted sentences containing radiological findings of COVID-19 are highly unstructured, which are still difficult to digest for medical professionals. To solve this problem, from these unstructured sentences, we extract structured information that is both clinically important and easy to use. We notice that important information, such as lesions, abnormalities, diseases, is mostly contained in noun phrases. Therefore, we use NLP to extract noun phrases and perform further analysis therefrom. First, we perform part-of-speech (POS) tagging to label each word in a sentence as being a noun, verb, adjective, etc. Then on top of these words and their POS tags, we perform constituent parsing to obtain the syntax tree of the sentence. An example is shown in Figure FIGREF9. From bottom to top of the tree, fine-grained linguistic units such as words are composed into coarse-grained units such as phrases, including noun phrases. We obtain the noun phrases by reading the node labels in the tree. Given the extracted noun phrases, we remove stop words in them and perform lemmatization to eliminate non-essential linguistic variations. We count the frequency of each noun phrase and rank them in descending frequency. Then we select the noun phrases with top frequencies and present them to medical professionals for further investigation. 
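The noun-phrase extraction step can be approximated as follows. The paper uses constituent parsing; the sketch below substitutes spaCy's dependency-based noun_chunks as a stand-in, followed by stop-word removal, lemmatization and frequency counting (it assumes the en_core_web_sm model is installed).

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed small English pipeline

def top_noun_phrases(sentences, k=20):
    """Return the k most frequent lemmatized, stop-word-free noun phrases."""
    counts = Counter()
    for doc in nlp.pipe(sentences):
        for chunk in doc.noun_chunks:
            lemma = " ".join(tok.lemma_.lower() for tok in chunk if not tok.is_stop)
            if lemma:
                counts[lemma] += 1
    return counts.most_common(k)

sents = [
    "CT showed bilateral ground-glass opacities and pleural effusion.",
    "Ground-glass opacity was the predominant finding on chest CT.",
]
print(top_noun_phrases(sents, k=5))
```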
Experiment ::: Experimental Settings For building the initial sentence classifier (before hard-example mining), we collected 2350 positive samples from MedPix and 3000 negative samples from CORD-19. We used 90% sentences for training and the rest 10% sentences for validation. The weights in the sentence classifier are optimized using the Adam algorithm with a learning rate of $2\times 10^{-5}$ and a mini-batch size of 4. In bootstrapping for hard example mining, we added 400 false positives in each iteration for classifier-retraining and we performed 4 iterations of bootstrapping. Experiment ::: Results of Sentence Classification Under the final classifier, 998 sentences are predicted as being positive. Among them, 717 are true positives (according to manual check). The classifier achieves a precision of 71.8%. For the initial classifier (before adding mined hard examples using bootstrapping), among the top 100 sentences with the largest prediction scores, 53 are false positives. The initial classifier only achieves a precision of 47%. The precision achieved by classifiers trained after round 1-3 in bootstrapping is 55%, 57%, and 69% respectively, as shown in Table TABREF12. This demonstrates the effectiveness of hard example mining. Table TABREF13 shows some example sentences that are true positives, true negatives, and false positives, under the predictions made by the final classifier. Experiment ::: Results of Noun Phrase Extraction Table TABREF15 shows the extracted noun phrases with top frequencies that are relevant to radiology. Medical professionals can look at this table and select noun phrases indicating radiological findings for further investigation, such as consolidation, pleural effusion, ground glass opacity, thickening, etc. We mark such noun phrases with bold font in the table. To further investigate how a noun phrase is relevant to COVID-19, medical professionals can review the sentences mentioning this noun phrase. Table TABREF16,TABREF17,TABREF18 show some examples. For example, reading the five example sentences containing consolidation, one can judge that consolidation is a typical manifestation of COVID-19. This is in accordance with the conclusion in BIBREF5: “Consolidation becomes the dominant CT findings as the disease progresses." Similarly, the example sentences of pleural effusion, ground glass opacity, thickening, fibrosis, bronchiectasis, lymphadenopathy show that these abnormalities are closely related with COVID-19. This is consistent with the results reported in the literature: Pleural effusion: “In terms of pleural changes, CT showed that six (9.7%) had pleural effusion." BIBREF6 Ground glass opacity: “The predominant pattern of abnormalities after symptom onset was ground-glass opacity (35/78 [45%] to 49/79 [62%] in different periods." BIBREF7 Thickening: “Furthermore, ground-glass opacity was subcategorized into: (1) pure ground-glass opacity; (2) ground-glass opacity with smooth interlobular septal thickening." BIBREF7 Fibrosis: “In five patients, follow-up CT showed improvement with the appearance of fibrosis and resolution of GGOs.", BIBREF8 Bronchiectasis and lymphadenopathy: “The most common patterns seen on chest CT were ground-glass opacity, in addition to ill-defined margins, smooth or irregular interlobular septal thickening, air bronchogram , crazy-paving pattern, and thickening of the adjacent pleura. Less common CT findings were nodules, cystic changes, bronchiolectasis, pleural effusion , and lymphadenopathy." 
BIBREF9 Conclusions In this paper, we develop natural language processing methods to automatically extract unbiased radiological findings of COVID-19. We develop a BERT-based classifier to select sentences that contain COVID-related radiological findings and use bootstrapping to mine hard examples for reducing false positives. Constituent parsing is used to extract noun phrases from the positive sentences and those with top frequencies are selected for medical professionals to further investigate. From the CORD-19 dataset, our method successfully discovers radiological findings that are closely related with COVID-19.
[ "which contains over 45,000 scholarly articles, including over 33,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses", "contains over 45,000 scholarly articles, including over 33,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses" ]
2,156
qasper_e
en
null
48fc8f0bab4c2e63d0703b11d8902a4082d9d77eb8ef611c
What is the size of the real-life dataset?
Introduction Performance appraisal (PA) is an important HR process, particularly for modern organizations that crucially depend on the skills and expertise of their workforce. The PA process enables an organization to periodically measure and evaluate every employee's performance. It also provides a mechanism to link the goals established by the organization to its each employee's day-to-day activities and performance. Design and analysis of PA processes is a lively area of research within the HR community BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The PA process in any modern organization is nowadays implemented and tracked through an IT system (the PA system) that records the interactions that happen in various steps. Availability of this data in a computer-readable database opens up opportunities to analyze it using automated statistical, data-mining and text-mining techniques, to generate novel and actionable insights / patterns and to help in improving the quality and effectiveness of the PA process BIBREF4 , BIBREF5 , BIBREF6 . Automated analysis of large-scale PA data is now facilitated by technological and algorithmic advances, and is becoming essential for large organizations containing thousands of geographically distributed employees handling a wide variety of roles and tasks. A typical PA process involves purposeful multi-step multi-modal communication between employees, their supervisors and their peers. In most PA processes, the communication includes the following steps: (i) in self-appraisal, an employee records his/her achievements, activities, tasks handled etc.; (ii) in supervisor assessment, the supervisor provides the criticism, evaluation and suggestions for improvement of performance etc.; and (iii) in peer feedback (aka INLINEFORM0 view), the peers of the employee provide their feedback. There are several business questions that managers are interested in. Examples: In this paper, we develop text mining techniques that can automatically produce answers to these questions. Since the intended users are HR executives, ideally, the techniques should work with minimum training data and experimentation with parameter setting. These techniques have been implemented and are being used in a PA system in a large multi-national IT company. The rest of the paper is organized as follows. Section SECREF2 summarizes related work. Section SECREF3 summarizes the PA dataset used in this paper. Section SECREF4 applies sentence classification algorithms to automatically discover three important classes of sentences in the PA corpus viz., sentences that discuss strengths, weaknesses of employees and contain suggestions for improving her performance. Section SECREF5 considers the problem of mapping the actual targets mentioned in strengths, weaknesses and suggestions to a fixed set of attributes. In Section SECREF6 , we discuss how the feedback from peers for a particular employee can be summarized. In Section SECREF7 we draw conclusions and identify some further work. Related Work We first review some work related to sentence classification. Semantically classifying sentences (based on the sentence's purpose) is a much harder task, and is gaining increasing attention from linguists and NLP researchers. McKnight and Srinivasan BIBREF7 and Yamamoto and Takagi BIBREF8 used SVM to classify sentences in biomedical abstracts into classes such as INTRODUCTION, BACKGROUND, PURPOSE, METHOD, RESULT, CONCLUSION. Cohen et al. 
BIBREF9 applied SVM and other techniques to learn classifiers for sentences in emails into classes, which are speech acts defined by a verb-noun pair, with verbs such as request, propose, amend, commit, deliver and nouns such as meeting, document, committee; see also BIBREF10 . Khoo et al. BIBREF11 uses various classifiers to classify sentences in emails into classes such as APOLOGY, INSTRUCTION, QUESTION, REQUEST, SALUTATION, STATEMENT, SUGGESTION, THANKING etc. Qadir and Riloff BIBREF12 proposes several filters and classifiers to classify sentences on message boards (community QA systems) into 4 speech acts: COMMISSIVE (speaker commits to a future action), DIRECTIVE (speaker expects listener to take some action), EXPRESSIVE (speaker expresses his or her psychological state to the listener), REPRESENTATIVE (represents the speaker's belief of something). Hachey and Grover BIBREF13 used SVM and maximum entropy classifiers to classify sentences in legal documents into classes such as FACT, PROCEEDINGS, BACKGROUND, FRAMING, DISPOSAL; see also BIBREF14 . Deshpande et al. BIBREF15 proposes unsupervised linguistic patterns to classify sentences into classes SUGGESTION, COMPLAINT. There is much work on a closely related problem viz., classifying sentences in dialogues through dialogue-specific categories called dialogue acts BIBREF16 , which we will not review here. Just as one example, Cotterill BIBREF17 classifies questions in emails into the dialogue acts of YES_NO_QUESTION, WH_QUESTION, ACTION_REQUEST, RHETORICAL, MULTIPLE_CHOICE etc. We could not find much work related to mining of performance appraisals data. Pawar et al. BIBREF18 uses kernel-based classification to classify sentences in both performance appraisal text and product reviews into classes SUGGESTION, APPRECIATION, COMPLAINT. Apte et al. BIBREF6 provides two algorithms for matching the descriptions of goals or tasks assigned to employees to a standard template of model goals. One algorithm is based on the co-training framework and uses goal descriptions and self-appraisal comments as two separate perspectives. The second approach uses semantic similarity under a weak supervision framework. Ramrakhiyani et al. BIBREF5 proposes label propagation algorithms to discover aspects in supervisor assessments in performance appraisals, where an aspect is modelled as a verb-noun pair (e.g. conduct training, improve coding). Dataset In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19. Sentence Classification The PA corpus contains several classes of sentences that are of interest. In this paper, we focus on three important classes of sentences viz., sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION). The strengths or weaknesses are mostly about the performance in work carried out, but sometimes they can be about the working style or other personal qualities. The classes WEAKNESS and SUGGESTION are somewhat overlapping; e.g., a suggestion may address a perceived weakness. Following are two example sentences in each class. STRENGTH: WEAKNESS: SUGGESTION: Several linguistic aspects of these classes of sentences are apparent. 
The subject is implicit in many sentences. The strengths are often mentioned as either noun phrases (NP) with positive adjectives (Excellent technology leadership) or positive nouns (engineering strength) or through verbs with positive polarity (dedicated) or as verb phrases containing positive adjectives (delivers innovative solutions). Similarly for weaknesses, where negation is more frequently used (presentations are not his forte), or alternatively, the polarities of verbs (avoid) or adjectives (poor) tend to be negative. However, sometimes the form of both the strengths and weaknesses is the same, typically a stand-alone sentiment-neutral NP, making it difficult to distinguish between them; e.g., adherence to timing or timely closure. Suggestions often have an imperative mood and contain secondary verbs such as need to, should, has to. Suggestions are sometimes expressed using comparatives (better process compliance). We built a simple set of patterns for each of the 3 classes on the POS-tagged form of the sentences. We use each set of these patterns as an unsupervised sentence classifier for that class. If a particular sentence matched with patterns for multiple classes, then we have simple tie-breaking rules for picking the final class. The pattern for the STRENGTH class looks for the presence of positive words / phrases like takes ownership, excellent, hard working, commitment, etc. Similarly, the pattern for the WEAKNESS class looks for the presence of negative words / phrases like lacking, diffident, slow learner, less focused, etc. The SUGGESTION pattern not only looks for keywords like should, needs to but also for POS based pattern like “a verb in the base form (VB) in the beginning of a sentence”. We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation. Comparison with Sentiment Analyzer We also explored whether a sentiment analyzer can be used as a baseline for identifying the class labels STRENGTH and WEAKNESS. We used an implementation of sentiment analyzer from TextBlob to get a polarity score for each sentence. Table TABREF13 shows the distribution of positive, negative and neutral sentiments across the 3 class labels STRENGTH, WEAKNESS and SUGGESTION. It can be observed that distribution of positive and negative sentiments is almost similar in STRENGTH as well as SUGGESTION sentences, hence we can conclude that the information about sentiments is not much useful for our classification problem. Discovering Clusters within Sentence Classes After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . 
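An illustrative re-creation of the unsupervised pattern-based classifier described above, combining keyword cues with a POS-based cue (a base-form verb at the start of a sentence) for SUGGESTION. The cue lists and the tie-breaking order are assumptions, not the authors' exact patterns.

```python
# (assumes nltk's punkt tokenizer and perceptron tagger data are available)
import nltk

STRENGTH_CUES = {"excellent", "commitment", "hard working", "takes ownership", "dedicated"}
WEAKNESS_CUES = {"lacking", "diffident", "slow learner", "less focused", "poor"}
SUGGESTION_CUES = {"should", "needs to", "need to", "has to"}

def classify_sentence(sentence: str) -> str:
    s = sentence.lower()
    tags = nltk.pos_tag(nltk.word_tokenize(sentence))
    # assumed tie-break: suggestions first, then weaknesses, then strengths
    if any(cue in s for cue in SUGGESTION_CUES) or (tags and tags[0][1] == "VB"):
        return "SUGGESTION"
    if any(cue in s for cue in WEAKNESS_CUES):
        return "WEAKNESS"
    if any(cue in s for cue in STRENGTH_CUES):
        return "STRENGTH"
    return "OTHER"

print(classify_sentence("Excellent technology leadership."))          # STRENGTH
print(classify_sentence("Needs to improve presentation skills."))     # SUGGESTION
print(classify_sentence("He is lacking in communication skills."))    # WEAKNESS
```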
From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters. PA along Attributes In many organizations, PA is done from a predefined set of perspectives, which we call attributes. Each attribute covers one specific aspect of the work done by the employees. This has the advantage that we can easily compare the performance of any two employees (or groups of employees) along any given attribute. We can correlate various performance attributes and find dependencies among them. We can also cluster employees in the workforce using their supervisor ratings for each attribute to discover interesting insights into the workforce. The HR managers in the organization considered in this paper have defined 15 attributes (Table TABREF20 ). Each attribute is essentially a work item or work category described at an abstract level. For example, FUNCTIONAL_EXCELLENCE covers any tasks, goals or activities related to the software engineering life-cycle (e.g., requirements analysis, design, coding, testing etc.) as well as technologies such as databases, web services and GUI. In the example in Section SECREF4 , the first sentence (which has class STRENGTH) can be mapped to two attributes: FUNCTIONAL_EXCELLENCE and BUILDING_EFFECTIVE_TEAMS. Similarly, the third sentence (which has class WEAKNESS) can be mapped to the attribute INTERPERSONAL_EFFECTIVENESS and so forth. Thus, in order to answer the second question in Section SECREF1 , we need to map each sentence in each of the 3 classes to zero, one, two or more attributes, which is a multi-class multi-label classification problem. We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The number of sentences having 0, 1, 2, or more than 2 attributes are: 321, 1070, 470 and 139 respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2. 
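The "simple clustering algorithm" used above to group the extracted nouns is only described at a high level; the following is a hedged sketch that greedily groups nouns whose word-embedding cosine similarity to a cluster's first member exceeds a threshold, with the embeddings assumed given.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def cluster_nouns(nouns, emb, threshold=0.6):
    """Greedy grouping: join the first cluster whose seed noun is similar enough."""
    clusters = []
    for noun in nouns:
        for cluster in clusters:
            if cosine(emb[noun], emb[cluster[0]]) >= threshold:
                cluster.append(noun)
                break
        else:
            clusters.append([noun])
    return clusters

rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(50) for w in ["motivation", "expertise", "knowledge", "timeline"]}
emb["expertise"] = emb["knowledge"] + 0.05 * rng.standard_normal(50)   # make two nouns similar
print(cluster_nouns(list(emb), emb, threshold=0.6))
```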
Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3 It can be observed that INLINEFORM0 would be undefined if INLINEFORM1 is empty and similarly INLINEFORM2 would be undefined when INLINEFORM3 is empty. Hence, overall precision and recall are computed by averaging over all the instances except where they are undefined. Instance-level F-measure can not be computed for instances where either precision or recall are undefined. Therefore, overall F-measure is computed using the overall precision and recall. Summarization of Peer Feedback using ILP The PA system includes a set of peer feedback comments for each employee. To answer the third question in Section SECREF1 , we need to create a summary of all the peer feedback comments about a given employee. As an example, following are the feedback comments from 5 peers of an employee. The individual sentences in the comments written by each peer are first identified and then POS tags are assigned to each sentence. We hypothesize that a good summary of these multiple comments can be constructed by identifying a set of important text fragments or phrases. Initially, a set of candidate phrases is extracted from these comments and a subset of these candidate phrases is chosen as the final summary, using Integer Linear Programming (ILP). The details of the ILP formulation are shown in Table TABREF36 . As an example, following is the summary generated for the above 5 peer comments. humble nature, effective communication, technical expertise, always supportive, vast knowledge Following rules are used to identify candidate phrases: Various parameters are used to evaluate a candidate phrase for its importance. A candidate phrase is more important: A complete list of parameters is described in detail in Table TABREF36 . There is a trivial constraint INLINEFORM0 which makes sure that only INLINEFORM1 out of INLINEFORM2 candidate phrases are chosen. A suitable value of INLINEFORM3 is used for each employee depending on number of candidate phrases identified across all peers (see Algorithm SECREF6 ). Another set of constraints ( INLINEFORM4 to INLINEFORM5 ) make sure that at least one phrase is selected for each of the leadership attributes. The constraint INLINEFORM6 makes sure that multiple phrases sharing the same headword are not chosen at a time. Also, single word candidate phrases are chosen only if they are adjectives or nouns with lexical category noun.attribute. This is imposed by the constraint INLINEFORM7 . It is important to note that all the constraints except INLINEFORM8 are soft constraints, i.e. there may be feasible solutions which do not satisfy some of these constraints. But each constraint which is not satisfied, results in a penalty through the use of slack variables. These constraints are described in detail in Table TABREF36 . The objective function maximizes the total importance score of the selected candidate phrases. At the same time, it also minimizes the sum of all slack variables so that the minimum number of constraints are broken. INLINEFORM0 : No. of candidate phrases INLINEFORM1 : No. 
of phrases to select as part of summary INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM0 and INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM0 (For determining number of phrases to select to include in summary) Evaluation of auto-generated summaries We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries. Conclusions and Further Work In this paper, we presented an analysis of the text generated in Performance Appraisal (PA) process in a large multi-national IT company. We performed sentence classification to identify strengths, weaknesses and suggestions for improvements found in the supervisor assessments and then used clustering to discover broad categories among them. As this is non-topical classification, we found that SVM with ADWS kernel BIBREF18 produced the best results. We also used multi-class multi-label classification techniques to match supervisor assessments to predefined broad perspectives on performance. Logistic Regression classifier was observed to produce the best results for this topical classification. Finally, we proposed an ILP-based summarization technique to produce a summary of peer feedback comments for a given employee and compared it with manual summaries. The PA process also generates much structured data, such as supervisor ratings. It is an interesting problem to compare and combine the insights from discovered from structured data and unstructured text. Also, we are planning to automatically discover any additional performance attributes to the list of 15 attributes currently used by HR.
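A reduced sketch of the ILP-based phrase selection: it keeps only the cardinality constraint and the shared-headword constraint, whereas the full model in Table TABREF36 adds per-attribute soft constraints with slack variables. Phrase scores and headwords are assumed inputs; PuLP is used as the solver.

```python
import pulp
from collections import defaultdict

phrases = ["humble nature", "effective communication", "clear communication",
           "technical expertise", "vast knowledge"]
score = dict(zip(phrases, [2.0, 2.5, 2.4, 2.8, 2.2]))    # assumed importance scores
headword = {"humble nature": "nature", "effective communication": "communication",
            "clear communication": "communication", "technical expertise": "expertise",
            "vast knowledge": "knowledge"}
K = 3                                                    # number of phrases to keep

prob = pulp.LpProblem("peer_feedback_summary", pulp.LpMaximize)
x = {p: pulp.LpVariable(f"x_{i}", cat="Binary") for i, p in enumerate(phrases)}
prob += pulp.lpSum(score[p] * x[p] for p in phrases)     # maximize total importance
prob += pulp.lpSum(x.values()) == K                      # select exactly K phrases

by_head = defaultdict(list)                              # at most one phrase per headword
for p in phrases:
    by_head[headword[p]].append(p)
for group in by_head.values():
    if len(group) > 1:
        prob += pulp.lpSum(x[p] for p in group) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in phrases if x[p].value() == 1])         # -> a 3-phrase summary
```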
[ "26972", "26972 sentences" ]
3,040
qasper_e
en
null
e88c5e44dbf7203c800c503bdfddb020c564fc323deff33c
What are the state-of-the-art methods?
Introduction Grammar induction is the task of inducing hierarchical syntactic structure from data. Statistical approaches to grammar induction require specifying a probabilistic grammar (e.g. formalism, number and shape of rules), and fitting its parameters through optimization. Early work found that it was difficult to induce probabilistic context-free grammars (PCFG) from natural language data through direct methods, such as optimizing the log likelihood with the EM algorithm BIBREF0 , BIBREF1 . While the reasons for the failure are manifold and not completely understood, two major potential causes are the ill-behaved optimization landscape and the overly strict independence assumptions of PCFGs. More successful approaches to grammar induction have thus resorted to carefully-crafted auxiliary objectives BIBREF2 , priors or non-parametric models BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , and manually-engineered features BIBREF7 , BIBREF8 to encourage the desired structures to emerge. We revisit these aforementioned issues in light of advances in model parameterization and inference. First, contrary to common wisdom, we find that parameterizing a PCFG's rule probabilities with neural networks over distributed representations makes it possible to induce linguistically meaningful grammars by simply optimizing log likelihood. While the optimization problem remains non-convex, recent work suggests that there are optimization benefits afforded by over-parameterized models BIBREF9 , BIBREF10 , BIBREF11 , and we indeed find that this neural PCFG is significantly easier to optimize than the traditional PCFG. Second, this factored parameterization makes it straightforward to incorporate side information into rule probabilities through a sentence-level continuous latent vector, which effectively allows different contexts in a derivation to coordinate. In this compound PCFG—continuous mixture of PCFGs—the context-free assumptions hold conditioned on the latent vector but not unconditionally, thereby obtaining longer-range dependencies within a tree-based generative process. To utilize this approach, we need to efficiently optimize the log marginal likelihood of observed sentences. While compound PCFGs break efficient inference, if the latent vector is known the distribution over trees reduces to a standard PCFG. This property allows us to perform grammar induction using a collapsed approach where the latent trees are marginalized out exactly with dynamic programming. To handle the latent vector, we employ standard amortized inference using reparameterized samples from a variational posterior approximated from an inference network BIBREF12 , BIBREF13 . On standard benchmarks for English and Chinese, the proposed approach is found to perform favorably against recent neural network-based approaches to grammar induction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . Probabilistic Context-Free Grammars We consider context-free grammars (CFG) consisting of a 5-tuple INLINEFORM0 where INLINEFORM1 is the distinguished start symbol, INLINEFORM2 is a finite set of nonterminals, INLINEFORM3 is a finite set of preterminals, INLINEFORM6 is a finite set of terminal symbols, and INLINEFORM7 is a finite set of rules of the form, INLINEFORM0 A probabilistic context-free grammar (PCFG) consists of a grammar INLINEFORM0 and rule probabilities INLINEFORM1 such that INLINEFORM2 is the probability of the rule INLINEFORM3 . 
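The exact marginalization over latent trees mentioned above is the classic inside algorithm; the following is a minimal sketch for a PCFG in Chomsky normal form (rules A -> B C and T -> w), with a toy grammar that is purely illustrative.

```python
from collections import defaultdict

def inside(words, binary_rules, lexical_rules, start="S"):
    """binary_rules: dict (A, B, C) -> prob; lexical_rules: dict (T, w) -> prob."""
    n = len(words)
    beta = defaultdict(float)        # beta[(i, j, A)] = inside prob of A spanning words[i:j]
    for i, w in enumerate(words):
        for (T, word), p in lexical_rules.items():
            if word == w:
                beta[(i, i + 1, T)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in binary_rules.items():
                    beta[(i, j, A)] += p * beta[(i, k, B)] * beta[(k, j, C)]
    return beta[(0, n, start)]       # p(x): sum over all trees with these leaves

binary_rules = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
lexical_rules = {("NP", "she"): 0.5, ("NP", "movies"): 0.5, ("V", "watched"): 1.0}
print(inside(["she", "watched", "movies"], binary_rules, lexical_rules))  # 0.25
```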
Letting INLINEFORM4 be the set of all parse trees of INLINEFORM5 , a PCFG defines a probability distribution over INLINEFORM6 via INLINEFORM7 where INLINEFORM8 is the set of rules used in the derivation of INLINEFORM9 . It also defines a distribution over string of terminals INLINEFORM10 via INLINEFORM0 where INLINEFORM0 , i.e. the set of trees INLINEFORM1 such that INLINEFORM2 's leaves are INLINEFORM3 . We will slightly abuse notation and use INLINEFORM0 to denote the posterior distribution over the unobserved latent trees given the observed sentence INLINEFORM0 , where INLINEFORM1 is the indicator function. Compound PCFGs A compound probability distribution BIBREF19 is a distribution whose parameters are themselves random variables. These distributions generalize mixture models to the continuous case, for example in factor analysis which assumes the following generative process, INLINEFORM0 Compound distributions provide the ability to model rich generative processes, but marginalizing over the latent parameter can be computationally intractable unless conjugacy can be exploited. In this work, we study compound probabilistic context-free grammars whose distribution over trees arises from the following generative process: we first obtain rule probabilities via INLINEFORM0 where INLINEFORM0 is a prior with parameters INLINEFORM1 (spherical Gaussian in this paper), and INLINEFORM2 is a neural network that concatenates the input symbol embeddings with INLINEFORM3 and outputs the sentence-level rule probabilities INLINEFORM4 , INLINEFORM0 where INLINEFORM0 denotes vector concatenation. Then a tree/sentence is sampled from a PCFG with rule probabilities given by INLINEFORM1 , INLINEFORM0 This can be viewed as a continuous mixture of PCFGs, or alternatively, a Bayesian PCFG with a prior on sentence-level rule probabilities parameterized by INLINEFORM0 . Importantly, under this generative model the context-free assumptions hold conditioned on INLINEFORM3 , but they do not hold unconditionally. This is shown in Figure FIGREF3 (right) where there is a dependence path through INLINEFORM4 if it is not conditioned upon. Compound PCFGs give rise to a marginal distribution over parse trees INLINEFORM5 via INLINEFORM0 where INLINEFORM0 . The subscript in INLINEFORM1 denotes the fact that the rule probabilities depend on INLINEFORM2 . Compound PCFGs are clearly more expressive than PCFGs as each sentence has its own set of rule probabilities. However, it still assumes a tree-based generative process, making it possible to learn latent tree structures. Our motivation for the compound PCFG is based on the observation that for grammar induction, context-free assumptions are generally made not because they represent an adequate model of natural language, but because they allow for tractable training. We can in principle model richer dependencies through vertical/horizontal Markovization BIBREF21 , BIBREF22 and lexicalization BIBREF23 . However such dependencies complicate training due to the rapid increase in the number of rules. Under this view, we can interpret the compound PCFG as a restricted version of some lexicalized, higher-order PCFG where a child can depend on structural and lexical context through a shared latent vector. We hypothesize that this dependence among siblings is especially useful in grammar induction from words, where (for example) if we know that watched is used as a verb then the noun phrase is likely to be a movie. 
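A hedged sketch of the compound parameterization for the binary rules only: sentence-level rule probabilities are obtained by concatenating parent-symbol embeddings with a latent vector z and applying an MLP with a softmax. The shapes and the single-hidden-layer MLP simplify the paper's residual parameterization.

```python
import torch
import torch.nn as nn

class CompoundRuleScorer(nn.Module):
    def __init__(self, n_nonterms=30, n_preterms=60, sym_dim=64, z_dim=64, hidden=128):
        super().__init__()
        self.nonterm_emb = nn.Parameter(torch.randn(n_nonterms, sym_dim))
        self.n_children = n_nonterms + n_preterms      # possible children B, C of A -> B C
        self.z_dim = z_dim
        self.mlp = nn.Sequential(
            nn.Linear(sym_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, self.n_children * self.n_children),
        )

    def rule_probs(self, z):
        """z: (z_dim,) latent vector -> (n_nonterms, n_children, n_children)
        probabilities for binary rules A -> B C, normalized per parent A."""
        n_nt = self.nonterm_emb.size(0)
        inp = torch.cat([self.nonterm_emb, z.expand(n_nt, -1)], dim=-1)
        logits = self.mlp(inp)                         # (n_nt, n_children^2)
        probs = torch.softmax(logits, dim=-1)
        return probs.view(n_nt, self.n_children, self.n_children)

scorer = CompoundRuleScorer()
z = torch.randn(scorer.z_dim)      # prior sample; training uses a reparameterized posterior sample
pi_z = scorer.rule_probs(z)
print(pi_z.shape, float(pi_z[0].sum()))   # each parent's rule probabilities sum to 1
```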
In contrast to the usual Bayesian treatment of PCFGs which places priors on global rule probabilities BIBREF3 , BIBREF4 , BIBREF6 , the compound PCFG assumes a prior on local, sentence-level rule probabilities. It is therefore closely related to the Bayesian grammars studied by BIBREF25 and BIBREF26 , who also sample local rule probabilities from a logistic normal prior for training dependency models with valence (DMV) BIBREF27 . Experimental Setup Results and Discussion Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines. All models soundly outperform right branching baselines, and we find that the neural PCFG/compound PCFG are strong models for grammar induction. In particular the compound PCFG outperforms other models by an appreciable margin on both English and Chinese. We again note that we were unable to induce meaningful grammars through a traditional PCFG with the scalar parameterization despite a thorough hyperparameter search. See lab:full for the full results (including corpus-level INLINEFORM1 ) broken down by sentence length. Table TABREF27 analyzes the learned tree structures. We compare similarity as measured by INLINEFORM0 against gold, left, right, and “self" trees (top), where self INLINEFORM1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. We find that PRPN is particularly consistent across multiple runs. We also observe that different models are better at identifying different constituent labels, as measured by label recall (Table TABREF27 , bottom). While left as future work, this naturally suggests an ensemble approach wherein the empirical probabilities of constituents (obtained by averaging the predicted binary constituent labels from the different models) are used either to supervise another model or directly as potentials in a CRF constituency parser. Finally, all models seemed to have some difficulty in identifying SBAR/VP constituents which typically span more words than NP constituents. Related Work Grammar induction has a long and rich history in natural language processing. Early work on grammar induction with pure unsupervised learning was mostly negative BIBREF0 , BIBREF1 , BIBREF74 , though BIBREF75 reported some success on partially bracketed data. BIBREF76 and BIBREF2 were some of the first successful statistical approaches to grammar induction. In particular, the constituent-context model (CCM) of BIBREF2 , which explicitly models both constituents and distituents, was the basis for much subsequent work BIBREF27 , BIBREF7 , BIBREF8 . Other works have explored imposing inductive biases through Bayesian priors BIBREF4 , BIBREF5 , BIBREF6 , modified objectives BIBREF42 , and additional constraints on recursion depth BIBREF77 , BIBREF48 . While the framework of specifying the structure of a grammar and learning the parameters is common, other methods exist. BIBREF43 consider a nonparametric-style approach to unsupervised parsing by using random subsets of training subtrees to parse new sentences. BIBREF46 utilize an incremental algorithm to unsupervised parsing which makes local decisions to create constituents based on a complex set of heuristics. BIBREF47 induce parse trees through cascaded applications of finite state models. More recently, neural network-based approaches to grammar induction have shown promising results on inducing parse trees directly from words. 
BIBREF14 , BIBREF15 learn tree structures through soft gating layers within neural language models, while BIBREF16 combine recursive autoencoders with the inside-outside algorithm. BIBREF17 train unsupervised recurrent neural network grammars with a structured inference network to induce latent trees, and BIBREF78 utilize image captions to identify and ground constituents. Our work is also related to latent variable PCFGs BIBREF79 , BIBREF80 , BIBREF81 , which extend PCFGs to the latent variable setting by splitting nonterminal symbols into latent subsymbols. In particular, latent vector grammars BIBREF82 and compositional vector grammars BIBREF83 also employ continuous vectors within their grammars. However these approaches have been employed for learning supervised parsers on annotated treebanks, in contrast to the unsupervised setting of the current work. Conclusion This work explores grammar induction with compound PCFGs, which modulate rule probabilities with per-sentence continuous latent vectors. The latent vector induces marginal dependencies beyond the traditional first-order context-free assumptions within a tree-based generative process, leading to improved performance. The collapsed amortized variational inference approach is general and can be used for generative models which admit tractable inference through partial conditioning. Learning deep generative models which exhibit such conditional Markov properties is an interesting direction for future work. Acknowledgments We thank Phil Blunsom for initial discussions which seeded many of the core ideas in the present work. We also thank Yonatan Belinkov and Shay Cohen for helpful feedback, and Andrew Drozdov for providing the parsed dataset from their DIORA model. YK is supported by a Google Fellowship. AMR acknowledges the support of NSF 1704834, 1845664, AWS, and Oracle. Model Parameterization We associate an input embedding INLINEFORM0 for each symbol INLINEFORM1 on the left side of a rule (i.e. INLINEFORM2 ) and run a neural network over INLINEFORM3 to obtain the rule probabilities. Concretely, each rule type INLINEFORM4 is parameterized as follows, INLINEFORM5 where INLINEFORM0 is the product space INLINEFORM1 , and INLINEFORM2 are MLPs with two residual layers, INLINEFORM3 The bias terms for the above expressions (including for the rule probabilities) are omitted for notational brevity. In Figure FIGREF3 we use the following to refer to rule probabilities of different rule types, INLINEFORM0 where INLINEFORM0 denotes the set of rules with INLINEFORM1 on the left hand side. The compound PCFG rule probabilities INLINEFORM0 given a latent vector INLINEFORM1 , INLINEFORM2 Again the bias terms are omitted for brevity, and INLINEFORM0 are as before where the first layer's input dimensions are appropriately changed to account for concatenation with INLINEFORM1 . Corpus/Sentence F 1 F_1 by Sentence Length For completeness we show the corpus-level and sentence-level INLINEFORM0 broken down by sentence length in Table TABREF44 , averaged across 4 different runs of each model. Experiments with RNNGs For experiments on supervising RNNGs with induced trees, we use the parameterization and hyperparameters from BIBREF17 , which uses a 2-layer 650-dimensional stack LSTM (with dropout of 0.5) and a 650-dimensional tree LSTM BIBREF88 , BIBREF90 as the composition function. 
Concretely, the generative story is as follows: first, the stack representation is used to predict the next action (shift or reduce) via an affine transformation followed by a sigmoid. If shift is chosen, we obtain a distribution over the vocabulary via another affine transformation over the stack representation followed by a softmax. Then we sample the next word from this distribution and shift the generated word onto the stack using the stack LSTM. If reduce is chosen, we pop the last two elements off the stack and use the tree LSTM to obtain a new representation. This new representation is shifted onto the stack via the stack LSTM. Note that this RNNG parameterization is slightly different than the original from BIBREF53 , which does not ignore constituent labels and utilizes a bidirectional LSTM as the composition function instead of a tree LSTM. As our RNNG parameterization only works with binary trees, we binarize the gold trees with right binarization for the RNNG trained on gold trees (trees from the unsupervised methods explored in this paper are already binary). The RNNG also trains a discriminative parser alongside the generative model for evaluation with importance sampling. We use a CRF parser whose span score parameterization is similar similar to recent works BIBREF89 , BIBREF87 , BIBREF85 : position embeddings are added to word embeddings, and a bidirectional LSTM with 256 hidden dimensions is run over the input representations to obtain the forward and backward hidden states. The score INLINEFORM0 for a constituent spanning the INLINEFORM1 -th and INLINEFORM2 -th word is given by, INLINEFORM0 where the MLP has a single hidden layer with INLINEFORM0 nonlinearity followed by layer normalization BIBREF84 . For experiments on fine-tuning the RNNG with the unsupervised RNNG, we take the discriminative parser (which is also pretrained alongside the RNNG on induced trees) to be the structured inference network for optimizing the evidence lower bound. We refer the reader to BIBREF17 and their open source implementation for additional details. We also observe that as noted by BIBREF17 , a URNNG trained from scratch on this version of PTB without punctuation failed to outperform a right-branching baseline. The LSTM language model baseline is the same size as the stack LSTM (i.e. 2 layers, 650 hidden units, dropout of 0.5), and is therefore equivalent to an RNNG with completely right branching trees. The PRPN/ON baselines for perplexity/syntactic evaluation in Table TABREF30 also have 2 layers with 650 hidden units and 0.5 dropout. Therefore all models considered in Table TABREF30 have roughly the same capacity. For all models we share input/output word embeddings BIBREF86 . Perplexity estimation for the RNNGs and the compound PCFG uses 1000 importance-weighted samples. For grammaticality judgment, we modify the publicly available dataset from BIBREF56 to only keep sentence pairs that did not have any unknown words with respect to our PTB vocabulary of 10K words. This results in 33K sentence pairs for evaluation. Nonterminal/Preterminal Alignments Figure FIGREF50 shows the part-of-speech alignments and Table TABREF46 shows the nonterminal label alignments for the compound PCFG/neural PCFG. Subtree Analysis Table TABREF53 lists more examples of constituents within each subtree as the top principical component is varied. Due to data sparsity, the subtree analysis is performed on the full dataset. See section UID36 for more details.
[ "Unanswerable", "Unanswerable" ]
2,533
qasper_e
en
null
d9c7574b1ed1ebd6eb755e6bbeee3513c9c912771059348c